You walk into a tech conference, meet a tech-savvy friend for coffee, scroll through LinkedIn, or sit down with a tech founder - and suddenly you are swimming in buzzwords. LLMs. Tokens. Agentic AI. RAG. Explainability. Machine Learning. Black Box. They are tossed around like everyone’s speaking the same language. But here’s the reality: unless you are from an IT background or you are literally reading everything to keep up with AI development (which, let’s be honest, is impossible at this pace), most of us do what we always do — nod along, smile politely, and then go home wondering what any of it actually means.

You can’t design ethical AI while you’re lost in the vocabulary. You can’t advocate for fairness if bias feels like jargon, or build trust if black box sounds like sci-fi. You can’t make informed decisions about the tools shaping your products and your users’ lives if you are secretly Googling or GPT-ing or Gemini-ing (apparently AI companies still haven’t cracked the art of choosing names that make good verbs).

In our last article series, we explored what AI can (and can’t) do today. This week, we are cutting through the noise with something different but essential: breaking down the words that actually matter for non-technical AI professionals, the same words that are shaping how AI gets built. Ready? Let’s decode.
The Essential AI Vocabulary: 15 Terms You Should Know
Understanding AI doesn’t require memorizing every definition, but knowing the core concepts gives you the clarity to make better decisions and to design and build AI systems with confidence. These terms form the foundation you need to understand how AI works, how it’s built, and how it impacts people.

We have organized the concepts and terms into three groups:
  1. Foundational terms: The general concepts you will encounter everywhere.
  2. Technical concepts: The technical terms behind how AI is actually developed and implemented.
  3. Design and ethical principles: Essential if you want to build AI that works for humans and stays aligned with human judgment and intent.
I. The Foundations
01
Machine Learning (ML)
What it is:
Systems that learn patterns from data instead of following programmed rules.
Why it matters:
Most of what people call “AI” today is actually machine learning — it powers everything from ChatGPT to recommendation engines.
What’s the catch:
ML can only learn from the data it’s trained on. If something isn’t represented (or is poorly represented), the system won’t understand it.
How it fits into design:
Designers must define the problem, the data inputs, and the user context, because ML models won’t magically fill the gaps. Clear boundaries and expectations protect the user experience.
What you should do:
Understand the basics of ML so you can ask the right questions about data quality, limitations, and user impact.
02
Large Language Models (LLMs)
What it is:
AI models trained on massive amounts of text to understand and generate human-like language. Examples include ChatGPT, Claude, and Google Gemini.
Why it matters:
LLMs power most GenAI tools today. They are pattern-matching systems, not sentient beings.
What’s the catch:
They learn from internet-scale data that can be biased, outdated, or incomplete. They don’t access real-time information unless specifically connected to external tools.
How it fits into design:
Designers must account for hallucinations, uncertainty, and lack of factual guarantees. Clear guardrails, validation steps, and user prompts are essential.
What you should do:
Use LLMs for idea generation, summarization, and language tasks, but don’t assume accuracy (please!). Always design workflows where humans can review or verify outputs.
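For the curious, here is a minimal sketch of what “human review before use” can look like in code, using the OpenAI Python SDK. The model name, the summarization task, and the review step are illustrative assumptions, not a recommendation:

```python
# Minimal sketch: ask an LLM to summarize text, then route the output
# to a human reviewer instead of trusting it blindly.
# Assumes the `openai` package and an OPENAI_API_KEY environment
# variable; the model name "gpt-4o-mini" is an illustrative choice.
from openai import OpenAI

client = OpenAI()

def draft_summary(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in 3 bullets:\n{text}"}],
    )
    return response.choices[0].message.content

draft = draft_summary("(paste source text here)")
print("DRAFT - needs human review before publishing:\n", draft)
```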
03
Supervised Learning
What it is:
AI learns from labeled examples, such as teaching a model what “spam” and “not spam” look like (source: Nature).
Why it matters:
Supervised learning powers a huge portion of today’s practical AI systems, from email filters to medical image analysis. It’s incredibly effective when you have good labeled data, but not so much when you don’t.
What’s the catch:
It requires large amounts of accurately labeled data, which is expensive and time-consuming to produce. The model can only learn what the labels represent.
How it fits into design:
Design decisions must consider whether the system has the right labeled data, what the data represents, and where it might fail. Poor labels = poor user experience.
What you should do:
Before adopting or commissioning a “custom AI,” confirm whether you have (or can create) high-quality labeled data and understand the cost of maintaining it.
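To make “labeled examples” concrete, here is a toy spam classifier built with scikit-learn. The four messages and their labels are invented purely for illustration:

```python
# A toy supervised-learning example: teach a model "spam" vs "not spam"
# from labeled messages. Requires scikit-learn; the tiny dataset is
# made up purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["Win a free prize now!!!", "Meeting moved to 3pm",
            "Claim your reward today", "Lunch tomorrow?"]
labels = ["spam", "not spam", "spam", "not spam"]  # the human-made labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)  # turn text into word counts

model = MultinomialNB()
model.fit(X, labels)  # learn patterns from the labeled examples

print(model.predict(vectorizer.transform(["Free reward, click now"])))
# -> ['spam'] (the model can only know what the labels taught it)
```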
II. Technical Concepts
04
Tokens
What it is:
In LLMs, a “token” is the basic unit of text — a token may be a whole word, part of a word, punctuation, a space, or another sub-word chunk.
Why it matters:
Tokens limit how much text can be processed and often determine cost, affecting both performance and budget. Roughly, 1 token ≈ 4 characters or 0.75 words, though this varies by language and style.
What’s the catch:
Tokenization varies by language and model, so token counts don’t directly match word counts. Estimates like “100,000 tokens ≈ 75,000 words” are rough and differ across languages and text types.
How it fits into design:
Designers must manage token limits—prompt, input, and output size—to avoid context overflows and high costs.
What you should do:
Use tokenizer tools to estimate token use, design token-efficient prompts, check model pricing for input/output differences, and don’t assume a fixed token-to-word ratio.
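As a quick illustration, here is how you might count tokens with OpenAI’s tiktoken library before sending text to a model. The encoding name below is one common choice; other models use other encodings:

```python
# Counting tokens before sending text to a model, using the tiktoken
# library. "cl100k_base" is one common encoding; the right one
# depends on the model you target.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Tokenization splits text into sub-word chunks."
tokens = enc.encode(text)

print(len(tokens), "tokens for", len(text), "characters")
print(enc.decode(tokens[:3]))  # decode the first few tokens back to text
```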
05
Hallucination
What it is:
AI confidently generating false, fabricated information that sounds plausible. Not a bug — it’s how LLMs work.
Why it matters:
Hallucinations destroy trust. A fabricated client name, medical fact, or legal citation can have serious consequences.
What’s the catch:
Hallucinations occur more often with obscure topics or recent events outside the training data. They are hardest to detect when the AI sounds confident.
How it fits into design:
Design systems assuming errors will happen. Include verification steps, human review, and feedback loops. Consider RAG (more on that below!) or other mechanisms to ground outputs in trusted sources.
What you should do:
Never rely on AI outputs for sensitive decisions without safeguards. Build workflows that cross-check critical information and alert users to uncertainty.
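As one (deliberately simple) sketch of such a safeguard: flag any AI answer that mentions a client we can’t verify against our own records. The client list and the flagging rule are made-up assumptions; real systems cross-check far more:

```python
# A deliberately simple guardrail sketch: any AI answer that names a
# "client" we don't recognize gets flagged for human review.
# KNOWN_CLIENTS and the flagging rule are illustrative assumptions.
KNOWN_CLIENTS = {"Acme Corp", "Globex", "Initech"}

def review_needed(ai_answer: str) -> bool:
    # Flag if the answer talks about a client we cannot verify.
    mentioned = [name for name in KNOWN_CLIENTS if name in ai_answer]
    claims_client = "client" in ai_answer.lower()
    return claims_client and not mentioned

answer = "Our client Vandelay Industries approved the budget."
if review_needed(answer):
    print("Route to human review: unverified client mentioned.")
```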
06
Prompt Engineering
What it is:
The skill of writing effective instructions to get better AI outputs. How you ask matters as much as what you ask.
Why it matters:
The same AI can generate vastly different results depending on prompt quality.
What’s the catch:
Prompting requires iteration and refinement. Many organizations skip this, leading to inconsistent or disappointing outputs.
How it fits into design:
Designers and teams should treat prompts as part of the product experience. Good prompts can improve usability, reliability, and alignment with user goals.
What you should do:
Document effective prompts, create reusable templates, and train teams to iterate.
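A reusable template can be as simple as a documented string with fill-in fields, as in this sketch (the wording and fields are example choices, not a canonical “best prompt”):

```python
# A reusable prompt template, so the team iterates on one documented
# prompt instead of ad-hoc phrasing. Wording and fields are examples.
SUMMARY_PROMPT = """You are a careful assistant for {audience}.
Summarize the text below in {n_bullets} plain-language bullet points.
If you are unsure about a fact, say so rather than guessing.

Text:
{text}"""

prompt = SUMMARY_PROMPT.format(
    audience="non-technical designers",
    n_bullets=3,
    text="(paste source text here)",
)
print(prompt)
```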
07
Open vs. Closed Source AI
What it is:
Open-source AI provides access to the code and models, allowing you to run, modify, or host them yourself. Closed-source AI is proprietary, controlled by a vendor, and often accessed via API.
Why it matters:
This choice affects control, privacy, cost, flexibility, and long-term strategy. Open-source gives independence; closed-source offers convenience, with maintenance handled by the vendor.
What’s the catch:
Open-source requires technical expertise, infrastructure, and ongoing maintenance. Closed-source limits customization, and data may be processed on external servers.
How it fits into design:
Design decisions must consider where data lives, compliance requirements, and the level of control needed over outputs and behavior.
What you should do:
Choose based on your priorities: use open-source for full control and privacy, closed-source for speed and ease of integration. Always document your data handling practices.
08
Retrieval-Augmented Generation (RAG)
What it is:
A method where AI retrieves relevant information from a structured database or knowledge source to generate answers, rather than relying solely on pre-trained knowledge.
Why it matters:
RAG improves accuracy by grounding outputs in trusted sources, and can reduce hallucinations and misinformation.
What’s the catch:
Implementing RAG requires infrastructure, data management, and alignment between the AI model and the external knowledge base.
How it fits into design:
Designers must decide what sources AI can access, how updates are managed, and how results are verified to maintain user trust.
What you should do:
Use RAG when outputs need factual accuracy or up-to-date information. Ensure verification and human oversight where critical decisions are involved.
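Here is a bare-bones sketch of the RAG idea: retrieve the most relevant snippet from a trusted knowledge base, then ground the prompt in it. Real systems use embeddings and vector databases; the keyword matching and mini knowledge base below are stand-ins for illustration:

```python
# A bare-bones RAG sketch: retrieve the most relevant snippet from a
# trusted knowledge base, then ground the prompt in it. Keyword
# overlap here is a toy stand-in for real embedding-based retrieval.
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of a return request.",
    "Support is available Monday to Friday, 9am to 6pm CET.",
    "Premium plans include priority onboarding assistance.",
]

def retrieve(question: str) -> str:
    # Score each snippet by shared words with the question (toy retrieval).
    q_words = set(question.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda s: len(q_words & set(s.lower().split())))

question = "How long do refunds take?"
context = retrieve(question)
grounded_prompt = (
    f"Answer using ONLY this source:\n{context}\n\n"
    f"Question: {question}\nIf the source doesn't cover it, say so."
)
print(grounded_prompt)
```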
09
Agentic AI
What it is:
AI that autonomously plans, decides, uses tools, and acts toward goals without step-by-step human instructions.
Why it matters:
Agentic AI extends the capabilities of traditional LLMs, enabling automation of complex workflows and proactive problem-solving.
What’s the catch:
Autonomy increases risk: errors can propagate, and outputs may have unintended consequences if not carefully controlled.
How it fits into design:
Designers must define boundaries, permissions, and monitoring mechanisms to ensure AI actions align with human objectives.
What you should do:
Limit agentic AI to tasks with clear rules and safe environments. Monitor outputs, and always provide a human-in-the-loop for critical or sensitive operations.
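In code, “boundaries and permissions” can be as plain as a tool whitelist plus an approval gate, as in this sketch (the tool names and risk list are illustrative assumptions):

```python
# A sketch of agent guardrails: the "agent" may only call whitelisted
# tools, and risky actions require explicit human approval first.
# Tool names and the risk list are illustrative assumptions.
ALLOWED_TOOLS = {"search_docs", "draft_email"}
NEEDS_APPROVAL = {"send_email", "delete_record"}

def run_tool(tool: str, human_approved: bool = False) -> str:
    if tool in NEEDS_APPROVAL and not human_approved:
        return f"BLOCKED: '{tool}' needs a human in the loop."
    if tool not in ALLOWED_TOOLS | NEEDS_APPROVAL:
        return f"BLOCKED: '{tool}' is outside the agent's permissions."
    return f"OK: running '{tool}'."

print(run_tool("search_docs"))                      # OK
print(run_tool("send_email"))                       # blocked, asks for approval
print(run_tool("send_email", human_approved=True))  # OK after review
```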
“The goal isn’t technical mastery; it’s clarity on human impact, and the power to design AI rather than let AI design us.”
III. Design & Ethical Essentials
10
Bias
What it is:
Systematic unfairness when AI replicates or amplifies biases from training data.
Why it matters:
Bias has real-world consequences: facial recognition error rates can reach 35% for darker-skinned women versus under 1% for lighter-skinned men (sources: World Economic Forum and MIT).
What’s the catch:
Bias can emerge at every stage - data collection, training, and deployment. Mitigating it requires proactive auditing and diverse datasets.
How it fits into design:
Designers should ensure inclusive data, question assumptions, and embed fairness checks throughout the AI lifecycle.
What you should do:
Audit datasets, test outputs for bias, and advocate for inclusive design. Make fairness a core part of product decision-making.
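One simple audit worth knowing: compare error rates per group instead of looking only at overall accuracy. A minimal sketch, with fabricated data and group labels:

```python
# A minimal fairness audit: compare the model's error rate across
# groups instead of looking only at overall accuracy. The data and
# group labels are fabricated for illustration.
records = [  # (group, model_was_correct)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

by_group: dict[str, list[bool]] = {}
for group, correct in records:
    by_group.setdefault(group, []).append(correct)

for group, outcomes in by_group.items():
    error_rate = 1 - sum(outcomes) / len(outcomes)
    print(f"{group}: error rate {error_rate:.0%}")
# Unequal error rates across groups are a red flag worth investigating.
```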
11
Black Box & Transparency
What it is:
AI models whose decision-making is hidden or incomprehensible. You see input and output but not the “why”.
Why it matters:
Lack of transparency erodes trust. Users are less likely to engage with systems they don’t understand.
What’s the catch:
Making complex AI understandable requires thoughtful UX: context, confidence indicators, and layered explanations for different audiences.
How it fits into design:
Provide multiple layers of explanation: simple rationale for general users, detailed insights for experts or auditors.
What you should do:
Incorporate transparency into interfaces. Explain why AI outputs occur and help users feel in control.
12
Explainability (XAI)
What it is:
Techniques that make AI decisions understandable by revealing the logic, factors, and confidence behind outputs.
Why it matters:
Explainability builds user trust and is increasingly expected or required in regulated industries such as finance and healthcare.
What’s the catch:
Too much technical detail can overwhelm users. The challenge is surfacing just enough information to clarify the decision.
How it fits into design:
Focus on explanation design: show why a decision was made without overloading users with complexity.
What you should do:
Provide clear, concise explanations. Users who understand AI reasoning are more confident and engaged.
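As a small sketch of “surfacing just enough”: train a simple model, then translate its learned weights into plain-language factors. The loan-style features and data below are invented for illustration:

```python
# A sketch of surfacing "why": after training a simple model, show the
# top factors behind a decision in plain language. The loan-style
# features and the tiny dataset are invented for illustration.
from sklearn.linear_model import LogisticRegression

features = ["income", "existing_debt", "years_employed"]
X = [[50, 40, 1], [80, 10, 8], [30, 35, 2], [90, 5, 10]]
y = [0, 1, 0, 1]  # 1 = approved

model = LogisticRegression().fit(X, y)

# Pair each feature with its learned weight and sort by influence.
weights = sorted(zip(features, model.coef_[0]), key=lambda w: -abs(w[1]))
for name, w in weights:
    direction = "raises" if w > 0 else "lowers"
    print(f"{name} {direction} the approval score (weight {w:.2f})")
```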
13
AI-Based Accessibility
What it is:
Creating AI systems that are usable across abilities, languages, contexts, and devices.
Why it matters:
Without accessibility, AI can exclude disabled users, older adults, low-resource communities, or people with speech and motor differences. For example, many AI interfaces still lack keyboard navigation (source: World Economic Forum).
What’s the catch:
Accessible AI requires multimodal inputs (voice, gesture, keyboard), adaptable interfaces (resizing, contrast), screen reader compatibility, and real-time adjustments.
How it fits into design:
Design for everyone: accessible features often improve the experience for all users, not just those with disabilities.
What you should do:
Include accessibility checks in AI products. Think beyond compliance: captions, voice commands, and adaptive layouts improve usability universally.
14
Accountability & Governance
What it is:
Assigning clear responsibility for how AI is built, deployed, monitored, and improved.
Why it matters:
Without accountability, problems go unaddressed and users bear consequences.
What’s the catch:
Accountability requires documentation, audits, feedback loops, and oversight from diverse perspectives — not just engineers.
How it fits into design:
Governance structures should embed ethics, audit trails, and stakeholder review throughout the AI lifecycle.
What you should do:
Ensure transparent processes, regular audits, and clear ownership. Advocate for structures that hold teams accountable while protecting users.
15
Human-Centered AI (HCAI)
What it is:
AI development that puts human needs, values, and capabilities at its core (source: Interaction Design Foundation).
Why it matters:
Human-centered AI ensures systems augment rather than replace humans, respect autonomy, and align with human values.
What’s the catch:
It requires active user involvement throughout development, careful balancing of AI capabilities with human oversight, and embedding ethical considerations — privacy, transparency, and fairness — from day one (source: IBM Research).
How it fits into design:
Design processes must integrate human feedback, iterative testing, and ethical safeguards. The goal is AI that supports users, not just showcases technical power.
What you should do:
Focus on designing AI that serves real human needs. Prioritize user experience, ethical alignment, and human oversight to maximize impact and adoption.
These 15 terms — from foundational AI concepts to design-specific ethics — give you the vocabulary to ask better questions, spot red flags, and shape AI systems that serve humans. Because the goal isn’t technical mastery; it’s clarity on human impact, and the power to design AI rather than let AI design us.
In our series “How We Design AI and Not Vice Versa: Conversations on AI, Ethics, and Designing Tomorrow”, we journey beyond the hype to explore what AI can do today, uncover the ethics and biases behind algorithms, and reveal, week by week, how human-centered design, UX, and responsible innovation come together to shape technology that truly serves real people.

Want to explore how AI, UX, and ethical design shape tech for real people? Visit www.yellowumbrella.design for our insights, practical resources, and inspiration for designing a better tomorrow.
Building something bold?
We’d love to help you shape it.