Foundations
How LLMs work, what they can and can't do, and the concepts that everything else builds on.
What is an LLM?
Large Language Models are stateless text-transformation functions: they take text in and return text out, with no memory between calls. Understanding this one fact shapes every architectural decision you'll make with AI.
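A minimal sketch of what statelessness means in practice. The `call_model` function here is a hypothetical stand-in for any LLM API, not a real client; the point is that the model only ever sees what you send on each call:

```python
def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for an LLM API call: a pure function of its input."""
    # A real model would generate text; we just echo the last message to show
    # that nothing persists between calls.
    return f"reply to: {messages[-1]['content']}"

# Turn 1: the model sees only what we send.
history = [{"role": "user", "content": "My name is Ada."}]
reply = call_model(history)

# Turn 2: if we want the model to 'remember' turn 1, WE must resend it.
history += [{"role": "assistant", "content": reply},
            {"role": "user", "content": "What is my name?"}]
reply = call_model(history)  # the 'memory' is just the list our code rebuilt
```

Every conversational AI product works this way under the hood: the "chat" is an illusion maintained by client code resending history.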
How Prompts Work
A prompt is not a question: it's a structured program. Understanding its anatomy (system instruction, conversation history, user message) lets you communicate intent reliably and debug output failures systematically.
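The three-part anatomy can be sketched as a small assembly function. The role names follow the common system/user/assistant convention; the helper itself is illustrative, not a library API:

```python
def build_prompt(system: str, history: list[tuple[str, str]],
                 user_msg: str) -> list[dict]:
    """Assemble the three parts of a prompt into one messages list."""
    messages = [{"role": "system", "content": system}]  # standing instructions
    messages += [{"role": role, "content": text}        # prior conversation turns
                 for role, text in history]
    messages.append({"role": "user", "content": user_msg})  # the current request
    return messages

prompt = build_prompt(
    system="You are a concise assistant.",
    history=[("user", "Hi"), ("assistant", "Hello!")],
    user_msg="Summarise our chat.",
)
```

Debugging output failures usually starts here: print the assembled list and check which of the three parts is missing, stale, or contradictory.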
Models and Model Selection
Not every task needs the most capable model. Understanding the capability-cost-latency tradeoff lets you pick the right model for each job, and avoid paying frontier prices for work a smaller model handles just as well.
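One common way to act on that tradeoff is a routing table: rank models by capability, then pick the cheapest one that clears the bar for the task. The model names, prices, and quality tiers below are purely illustrative:

```python
# Hypothetical catalogue: names and per-million-token prices are made up.
MODELS = {
    "small":    {"cost_per_mtok": 0.15, "quality": 1},
    "mid":      {"cost_per_mtok": 1.00, "quality": 2},
    "frontier": {"cost_per_mtok": 10.0, "quality": 3},
}

def pick_model(required_quality: int) -> str:
    """Return the cheapest model that meets the quality bar,
    instead of defaulting everything to the frontier model."""
    eligible = [(spec["cost_per_mtok"], name)
                for name, spec in MODELS.items()
                if spec["quality"] >= required_quality]
    return min(eligible)[1]
```

Classification and extraction tasks often sit at the low bar; open-ended reasoning sits at the high one.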
Hallucinations and Model Reliability
LLMs generate plausible text, not verified truth. Understanding why models hallucinate, and how to architect around it, is the single most important reliability concern in production AI systems.
Structured Output and Tool Use
Getting reliable, machine-readable output from an LLM requires more than asking nicely. Structured output and tool use turn a text generator into a component your application can depend on.
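The core discipline is simple even without a framework: treat model output as untrusted text, parse it, and validate it before your application touches it. A minimal sketch using only the standard library (the key-check scheme is an assumption, not a specific API):

```python
import json

def parse_structured(raw: str, required_keys: set[str]) -> dict:
    """Parse model output as JSON and verify the expected keys are present.

    On failure the caller can re-prompt the model, including the error
    message, rather than crashing or silently accepting bad data.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"model did not return JSON: {e}")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

order = parse_structured('{"item": "book", "qty": 2}', {"item", "qty"})
```

Native structured-output and tool-calling features move this validation into the API layer, but the principle, validate before use, stays the same.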
Context and Memory Management
LLMs are stateless: they have no memory between calls. Every form of 'memory' in an AI application is something your code explicitly puts into the context window. Understanding how to manage that window is the core engineering skill behind every reliable AI system.
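Managing the window usually means trimming: keep the system message, drop the oldest turns once a token budget is exceeded. A sketch of that policy, with word count standing in for real tokenization (production systems use the model's tokenizer):

```python
def trim_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message plus the most recent turns that fit the budget."""
    system, turns = messages[0], messages[1:]
    cost = lambda m: len(m["content"].split())  # crude stand-in for token count
    kept, used = [], cost(system)
    for msg in reversed(turns):                 # walk newest-first
        if used + cost(msg) > budget:
            break                               # oldest turns fall off
        kept.append(msg)
        used += cost(msg)
    return [system] + kept[::-1]                # restore chronological order

msgs = [{"role": "system", "content": "be brief"},
        {"role": "user", "content": "one two three"},
        {"role": "assistant", "content": "four five six"},
        {"role": "user", "content": "seven eight nine"}]
trimmed = trim_context(msgs, budget=8)  # oldest user turn is dropped
```

Summarisation, retrieval, and external stores are all elaborations of the same move: deciding what your code puts into the window each call.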
Evaluating LLM Systems
LLM outputs are probabilistic and hard to unit-test. Building a systematic evaluation practice, both before you ship and continuously in production, is what separates AI features that stay reliable from ones that silently degrade.
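The smallest useful eval harness pairs inputs with predicates rather than exact-match answers, since model text rarely matches a string verbatim. A sketch, with a toy callable standing in for the real model:

```python
def run_evals(system, cases) -> float:
    """Run a model-backed callable over labelled cases; return the pass rate.

    `cases` pairs an input with a predicate over the output, because
    exact-match assertions rarely survive probabilistic generation.
    """
    results = [check(system(inp)) for inp, check in cases]
    return sum(results) / len(results)

# Toy 'system' and checks; a real harness calls the model and logs failures.
fake_model = lambda s: s.upper()
cases = [
    ("hello", lambda out: out == "HELLO"),
    ("refund policy", lambda out: "REFUND" in out),
]
score = run_evals(fake_model, cases)
```

Tracking this score over time, per prompt version and per model version, is what makes silent degradation visible.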
Safety and Guardrails
Safety in AI systems is not a single feature: it is a layered architecture. Understanding what the model handles automatically, what you must build, and where the gaps are is essential before shipping anything user-facing.
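The layering can be sketched as a wrapper: some risks are caught before the model runs, others only by inspecting what it produced. The checks below are deliberately naive placeholders; real systems use classifiers and policy engines, not keyword lists:

```python
def guarded(call_model, input_checks, output_checks):
    """Wrap a model call in layered checks on both the request and response."""
    def run(prompt: str) -> str:
        for check in input_checks:          # layer 1: screen the request
            if not check(prompt):
                return "Request declined."
        out = call_model(prompt)            # layer 2: the model's own training
        for check in output_checks:         # layer 3: screen the response
            if not check(out):
                return "Response withheld."
        return out
    return run

# Illustrative check only: reject text mentioning passwords.
no_secrets = lambda text: "password" not in text.lower()
safe = guarded(lambda p: f"echo: {p}", [no_secrets], [no_secrets])
```

The gaps live between the layers: anything no layer checks for passes straight through, which is why safety has to be designed, not assumed.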
Prototype to Production Checklist
A prototype that works in a demo is not a production system. This capstone synthesises every Foundations concept into a practical checklist: the gaps teams consistently miss when shipping their first AI feature.