Find my AI

AI Glossary

Plain-English definitions for the terms you'll encounter when exploring AI tools.

AI Agent Concept
An AI system that can take actions autonomously to complete a goal — browsing the web, writing code, sending emails — without step-by-step human instruction. Different from a chatbot, which only responds to prompts.
API (Application Programming Interface) Technical
A way for software applications to communicate with each other. When a tool says it has an API, developers can build their own applications on top of it or connect it to other services programmatically.
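As a sketch of what "programmatic access" looks like, here is how a client might build the JSON body for a request to a chat-style API. The endpoint shape, field names, and model name are invented for illustration, not any specific product's API:

```python
import json

def build_chat_request(prompt, model="example-model"):
    """Build the JSON body a client would POST to a hypothetical chat endpoint."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_chat_request("Summarise this article in one sentence.")
```

A developer's own application would send this payload over HTTP and parse the JSON response, which is what lets one tool be wired into another.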
Benchmark Evaluation
A standardised test used to compare AI models. Common benchmarks include MMLU (general knowledge), HumanEval (coding), and GPQA (graduate-level science). High benchmark scores don't always mean the model is best for your use case.
Context Window Technical
The maximum amount of text an AI model can "hold in memory" at one time during a conversation. A larger context window means you can feed it longer documents or have longer conversations without it forgetting what was said earlier. Measured in tokens.
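The practical question is whether a document fits. A rough back-of-envelope check, assuming the common ~4-characters-per-token rule of thumb for English:

```python
def fits_in_context(document_chars, window_tokens, chars_per_token=4):
    """Rough fit check using the ~4-characters-per-token heuristic."""
    return document_chars / chars_per_token <= window_tokens

# A ~600,000-character book against a 128k-token window:
# ~150,000 estimated tokens, so it does not fit in one go.
fits_in_context(600_000, 128_000)
```

Real tokenisers vary, so treat this as an estimate, not an exact count.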
Copilot Product Type
An AI assistant that works alongside you inside another tool — your code editor, word processor, or browser. It suggests completions, answers questions, and automates repetitive tasks in context. GitHub Copilot and Microsoft Copilot are the most well-known examples.
Diffusion Model Technology
The type of AI model behind most modern image generators (Midjourney, Stable Diffusion, DALL·E). It works by learning to "denoise" random noise into coherent images, guided by a text description.
Fine-tuning Technical
Training an existing AI model further on a specific dataset to specialise its behaviour. For example, fine-tuning a general language model on medical texts makes it better at medical conversations. Some tools offer fine-tuning for enterprise customers.
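To make the idea concrete: providers that offer fine-tuning usually accept training data as prompt/response pairs, often in JSON Lines format. The field names below are illustrative and vary by vendor:

```python
import json

# Illustrative fine-tuning dataset: each line is one training example
# showing the model the behaviour you want it to specialise in.
examples = [
    {"prompt": "Patient reports a persistent dry cough.",
     "response": "Ask about duration, fever, and smoking history."},
    {"prompt": "Patient asks about drug interactions.",
     "response": "Request the full current medication list first."},
]
jsonl = "\n".join(json.dumps(e) for e in examples)
```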
Foundation Model Technology
A large AI model trained on a massive general dataset, designed to be adapted for many different tasks. GPT-4, Claude, and Gemini are foundation models. Most AI tools you use are built on top of one.
Generative AI Concept
AI that creates new content — text, images, audio, video, code — rather than simply classifying or analysing existing content. ChatGPT, Midjourney, and ElevenLabs are all generative AI tools.
Hallucination Limitation
When an AI model confidently states something that is factually incorrect. It doesn't "know" it's wrong — it generates plausible-sounding text based on patterns, not verified facts. All current LLMs hallucinate to some degree. Always verify important information from AI outputs.
LLM (Large Language Model) Technology
A type of AI model trained on vast amounts of text to understand and generate human language. GPT-4 (OpenAI), Claude (Anthropic), and Gemini (Google) are LLMs. They power most AI writing, coding, and assistant tools.
MCP (Model Context Protocol) Technical
An open standard that allows AI models to connect to external tools, data sources, and services in a standardised way. Think of it as USB for AI — one protocol to plug in any tool.
Multimodal Capability
An AI model that can work with multiple types of input or output — text, images, audio, video. GPT-4o and Gemini are multimodal: you can send them a photo and ask a question about it, or have a voice conversation.
No-code / Low-code AI Product Type
AI tools designed for people without programming skills. You configure them through visual interfaces, drag-and-drop builders, or simple prompts rather than writing code. Tools like Zapier, Canva AI, and Descript are in this category.
Prompt Concept
The instruction or question you type into an AI tool. The quality of a prompt heavily influences the quality of the output. "Prompt engineering" is the practice of crafting effective prompts.
RAG (Retrieval-Augmented Generation) Technology
A technique that combines a language model with a search engine. Instead of relying solely on what the model learned during training, it retrieves relevant documents first and then generates a response based on them. This reduces hallucinations and allows access to up-to-date information.
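The retrieve-then-generate flow can be sketched in a few lines. This toy version ranks documents by simple word overlap; real systems use vector embeddings, but the two-step shape is the same:

```python
def retrieve(query, documents, top_k=2):
    """Toy retrieval step: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:top_k]

def rag_prompt(query, documents):
    """Retrieve first, then build the prompt the language model actually sees."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
```

Because the model is told to answer from the retrieved context, its response is grounded in those documents rather than in whatever it memorised during training.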
Self-hosted Deployment
Running an AI model on your own servers or computer instead of using a cloud provider's API. Self-hosting gives you full data privacy and control, but requires technical expertise. Tools like n8n and Stable Diffusion support self-hosting.
System Prompt Technical
A hidden set of instructions given to an AI model before the conversation starts, used to define its persona, constraints, or behaviour. Most AI products use system prompts internally to customise how the model behaves for their specific use case.
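In chat-style APIs this typically looks like a list of messages where the system message comes first. A minimal sketch with an invented persona:

```python
def build_messages(system_prompt, user_message):
    """The system message sits first in the list; the end user never sees it."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# Hypothetical persona a support product might set behind the scenes:
msgs = build_messages(
    "You are a concise, polite support agent for Acme Corp. Never discuss pricing.",
    "Where is my order?",
)
```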
Token Technical
The unit LLMs use to process text. A token is roughly 4 characters or 0.75 words in English; "Hello world" is 2–3 tokens depending on the tokeniser. API pricing is almost always per token (input + output), and context window sizes are also measured in tokens.
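Per-token billing makes cost estimates simple arithmetic. The rates below are made up purely for illustration; input and output tokens are usually priced separately, quoted per million tokens:

```python
def estimate_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Estimate a request's cost from per-million-token rates."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Hypothetical rates of $3/M input and $15/M output,
# for a request with 10k input tokens and 2k output tokens:
estimate_cost(10_000, 2_000, 3.0, 15.0)  # $0.03 + $0.03 = $0.06
```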
Tool Use / Function Calling Technical
The ability of an AI model to use external tools — run code, search the web, query a database — rather than just generating text. This is what makes AI agents practical for real-world tasks.
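The core loop is: the model emits a structured tool call, the application executes it, and the result is fed back. A toy sketch (the tool names and output format here are invented, not any vendor's schema):

```python
# Registry of tools the application exposes to the model.
TOOLS = {"add": lambda a, b: a + b}

def step(model_output):
    """Execute a requested tool call, or pass through a final text answer."""
    if "tool" in model_output:
        result = TOOLS[model_output["tool"]](*model_output["args"])
        # The tool result would be appended to the conversation
        # so the model can use it in its next turn.
        return {"role": "tool", "content": str(result)}
    return {"role": "assistant", "content": model_output["text"]}
```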
Workflow Automation Use Case
Using AI to automatically trigger actions across different apps based on rules or events. For example: "When I receive an invoice by email, extract the data and add it to my spreadsheet." Tools like Zapier, Make, and n8n specialise in this.
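The invoice example above boils down to a trigger-action rule. A toy sketch of that rule, with made-up email fields standing in for what an extraction step would produce:

```python
def on_new_email(email, spreadsheet_rows):
    """Trigger-action rule: when an invoice arrives, add its data as a row."""
    if "invoice" in email["subject"].lower():
        spreadsheet_rows.append({"sender": email["from"], "total": email["total"]})

rows = []
on_new_email({"subject": "Invoice #1042", "from": "supplier@example.com",
              "total": 240.0}, rows)
on_new_email({"subject": "Lunch on Friday?", "from": "sam@example.com",
              "total": None}, rows)  # ignored: not an invoice
```

Automation platforms wrap exactly this pattern in visual builders, with AI handling the messy extraction step.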