Junior Developer Path
Starting from first principles. This path is deliberately paced: no conceptual gaps, no hand-waving. You'll write evals before features, understand why things work (not just that they do), and ship something real at the end.
What you'll come away with
- A clear mental model of what an LLM is and what makes it different from a database or API
- How to write prompts, structured outputs, and tool definitions that work reliably
- Your first working RAG pipeline and eval harness
- How MCP lets your agent interact with the real world securely
Your curriculum
What is an Agent
An agent is not a smarter chatbot: it is a different execution model. This module defines what makes something agentic, maps the spectrum from single call to autonomous agent, and gives you the decision matrix to know which approach fits your problem.
What Makes LLM Evaluation Hard
Learn why LLM eval is structurally different from traditional ML testing, what the three axes of eval design are, and how to build a mental model for the rest of the track.
What is an LLM?
Large Language Models are stateless text-transformation functions: they take text in and return text out, with no memory between calls. Understanding this one fact shapes every architectural decision you'll make with AI.
Protocol Landscape and MCP
Before a model can call a tool, both sides have to agree on the contract: what the tool is called, what arguments it accepts, and what it returns. This module maps the protocol landscape and shows where MCP fits.
What is RAG and Why
LLMs know a lot, but they don't know your data. Retrieval-Augmented Generation is the pattern that fixes this: not by training the model on your data, but by finding the relevant pieces at query time and handing them directly to the model.
The AI Threat Landscape
Every LLM application has a multi-layer attack surface: model, context, tools, memory, and outputs. Understanding what attackers want and what they can do is the prerequisite to building defences that actually hold. This module maps the threat landscape and establishes why defence in depth is not optional.
Memory and State
Memory is what separates a stateless chatbot from an agent that can work across sessions and build on past experience. This module covers the four memory types, how to manage the lifecycle of each, and the anti-patterns that cause agents to accumulate stale, conflicting, or poisoned state.
Building an Eval Dataset
Learn to treat eval datasets as engineering artifacts: how to seed them, label them, version them, and keep them representative of real production traffic.
How Prompts Work
A prompt is not a question: it's a structured program. Understanding its anatomy (system instruction, conversation history, user message) lets you communicate intent reliably and debug output failures systematically.
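That three-part anatomy can be sketched as a plain data structure. The role names below follow the common chat-message convention; exact field names vary by provider, so treat this as an illustrative shape, not a specific API.

```python
# A prompt as a structured program: three distinct parts assembled into
# one message list (role names follow the common chat convention).

def build_prompt(system_instruction, history, user_message):
    """Assemble the full prompt the model actually sees on each call."""
    messages = [{"role": "system", "content": system_instruction}]
    messages.extend(history)  # prior turns, oldest first
    messages.append({"role": "user", "content": user_message})
    return messages

prompt = build_prompt(
    "You are a terse assistant. Answer in one sentence.",
    [{"role": "user", "content": "What is RAG?"},
     {"role": "assistant", "content": "Retrieval-Augmented Generation."}],
    "And why use it?",
)
# prompt[0] is the system instruction; the last element is the new user message.
```

Seeing the prompt as a list your code builds, rather than a string a user types, is what makes debugging systematic: when output fails, you inspect which of the three parts miscommunicated intent.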
Tool Schema Design
The schema is not documentation: it is the instruction the model reads to decide whether to call your tool and what to pass. A bad schema causes wrong tool selections, invalid arguments, and hallucinated parameter values. This module covers what separates a production schema from a prototype one.
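As a concrete sketch, here is a tool definition in the JSON Schema shape most providers accept. The tool name and fields are hypothetical; the point is the description quality and the `required` list, which is also the first thing your own code should validate before executing a call.

```python
# An illustrative tool definition (hypothetical tool; JSON Schema-style
# parameters, the shape most providers use for tool definitions).
get_invoice = {
    "name": "get_invoice",
    "description": "Fetch a single invoice by its ID. Use only when the "
                   "user references a specific invoice, not for searches.",
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. 'INV-2024-0042'.",
            },
            "include_line_items": {
                "type": "boolean",
                "description": "Whether to include line items. Default false.",
            },
        },
        "required": ["invoice_id"],
    },
}

def missing_required(schema, args):
    """Return required parameters the model failed to supply."""
    required = schema["parameters"].get("required", [])
    return [p for p in required if p not in args]

# A call with no invoice_id should be rejected before it reaches your API.
bad_call = missing_required(get_invoice, {"include_line_items": True})
```

Validating arguments against the schema before execution is the cheapest defence against hallucinated parameter values.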
Embeddings and Vector Search
Semantic search, finding text by meaning rather than keywords, is the engine inside most RAG systems. Understanding how embeddings work and how vector databases store and query them is the foundation you need to build reliable retrieval.
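The core mechanism fits in a few lines: embeddings are vectors, and "similar meaning" is (most commonly) cosine similarity between them. The toy 3-dimensional vectors below are hand-made stand-ins; real embeddings have hundreds or thousands of dimensions and come from an embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made toy "embeddings" (real ones come from an embedding model).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "returns and refunds": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I get my money back?"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
# The two refund-related docs outrank "shipping times" despite sharing
# no keywords with the query: that is search by meaning.
```

A vector database does exactly this comparison, just with approximate-nearest-neighbour indexes so it scales past a handful of documents.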
Prompt Injection
Prompt injection is the most prevalent attack class in LLM applications. It takes two forms: direct injection from user input, and indirect injection through retrieved documents or tool results. Both exploit the same root cause: the model cannot distinguish instructions from data when they share the same channel.
Planning and Decomposition
Complex tasks fail when handed to an agent as a single goal. Planning is the process of decomposing a goal into executable steps: deciding what to do, in what order, and when to revise the plan based on what actually happens.
Automated Evaluation Methods
Master the spectrum of automated eval techniques, from exact match and string overlap through semantic similarity and LLM-as-judge, and learn which method to apply for which task.
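The cheap end of that spectrum needs no model at all. A minimal sketch of two methods: exact match is strict and brittle, while token-overlap F1 gives partial credit for answers phrased differently from the gold label. (Semantic similarity and LLM-as-judge sit further along and require a model.)

```python
def exact_match(pred, gold):
    """Strict: normalised strings must be identical."""
    return pred.strip().lower() == gold.strip().lower()

def token_f1(pred, gold):
    """Partial credit: F1 over the sets of tokens shared by both strings."""
    p, g = set(pred.lower().split()), set(gold.lower().split())
    common = len(p & g)
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

gold = "Paris is the capital of France"
em = exact_match("The capital is Paris", gold)   # False: wording differs
f1 = token_f1("The capital is Paris", gold)      # high: most tokens overlap
```

Which method fits depends on the task: exact match for classification labels and IDs, overlap metrics for extractive answers, and judge-based methods for open-ended generation.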
Models and Model Selection
Not every task needs the most capable model. Understanding the capability-cost-latency tradeoff lets you pick the right model for each job, and avoid paying frontier prices for work a smaller model handles just as well.
Tool Execution Patterns
A single tool call is easy. Production tool use involves chains of calls, parallel execution, shared state, and the ever-present risk of runaway loops. This module covers the patterns that make multi-step tool execution reliable.
Chunking and Indexing
You can't embed a whole document: you split it into pieces first. How you split determines what you can retrieve. The wrong chunking strategy is one of the most common reasons RAG systems fail to find the right answer even when the information clearly exists.
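The baseline most RAG stacks start from is a fixed-size chunker with overlap, so facts that straddle a boundary appear whole in at least one chunk. Sizes here are in characters for simplicity; production chunkers usually count tokens and prefer semantic boundaries (headings, paragraphs) over raw offsets.

```python
def chunk(text, size=200, overlap=50):
    """Split text into overlapping fixed-size windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

# A synthetic 500-character "document" so the boundaries are easy to check.
doc = "".join(str(i % 10) for i in range(500))
pieces = chunk(doc, size=200, overlap=50)
# Each chunk repeats the last 50 characters of the previous one, so a fact
# sitting at a boundary is retrievable from at least one chunk.
```

The failure mode this guards against: with no overlap, a sentence split at character 200 exists in no chunk at all, and retrieval cannot find what indexing destroyed.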
Jailbreaking and Policy Bypass
Jailbreaking is the attempt to get a model to produce output that its alignment training or system prompt prohibits. No defence is permanent: the arms race between jailbreak techniques and countermeasures is ongoing. This module covers the attack taxonomy and the multi-layer defences that reduce, but never eliminate, the risk.
Multi-Agent Patterns
A single agent hits limits: context windows fill, specialisation is hard, and long tasks become fragile. Multi-agent architectures solve this by distributing work, but they introduce coordination costs, trust boundaries, and new failure modes. This module covers the patterns that work in production.
Tracing & Structured Logging
Learn to instrument LLM systems with structured traces that make debugging and performance analysis practical: what to log, how to structure it, and how to avoid PII liability.
Hallucinations and Model Reliability
LLMs generate plausible text, not verified truth. Understanding why models hallucinate, and how to architect around it, is the single most important reliability concern in production AI systems.
Real API Integration
Wrapping a real API as a tool means handling all the things the happy path ignores: auth token expiry, rate limits, flaky networks, non-idempotent operations, and paginated results. This module covers the mechanics of building tool integrations that survive production.
Retrieval Quality: Dense, Sparse, and Hybrid
Semantic search is powerful but not always the best retrieval method. Keyword search finds exact matches that embeddings miss. Re-ranking re-scores candidates with a slower but more accurate model. Understanding when to use each, and how to combine them, is what separates reliable RAG from fragile RAG.
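One common way to combine the two rankings is reciprocal rank fusion (RRF), which merges ranked lists using only rank positions, so you never have to reconcile incomparable dense and sparse scores. The document IDs and rankings below are invented for illustration.

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: each doc scores sum(1 / (k + rank)) across
    all lists it appears in. k=60 is the conventional default."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["doc_a", "doc_c", "doc_b"]   # what embedding search surfaced
sparse = ["doc_b", "doc_a", "doc_d"]   # what keyword search surfaced

fused = rrf([dense, sparse])
# doc_a ranks high in both lists, so it tops the fused ranking; documents
# found by only one method still make the list, just lower down.
```

The appeal of RRF is that it needs no score normalisation or weight tuning, which makes it a robust first choice before reaching for a trained re-ranker.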
Data Privacy and PII
LLM systems create new PII leakage vectors that traditional data protection controls do not cover: model memorisation, cross-user context leakage, and RAG pipelines that pull in customer records without scrubbing. This module covers detection, scrubbing, retention, and the vendor agreements that govern what happens to your data.
Agent Failure Modes
Agents fail in ways that are qualitatively different from single API calls: errors compound, loops consume unbounded resources, and failures can be invisible until they cause damage. This module catalogues the failure modes and the structural mitigations for each.
Cost Attribution & Token Budgets
Learn to track, attribute, and control LLM API costs before the invoice surprises you: per-request tagging, per-feature aggregation, token budget enforcement, and anomaly alerting.
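A minimal sketch of the per-feature pattern: tag each request, convert tokens to dollars, aggregate by feature, and compare against a budget. The prices and feature names are invented; real per-token prices come from your provider's pricing page.

```python
from collections import defaultdict

# Assumed example prices in USD per 1K tokens (check your provider's pricing).
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

class CostTracker:
    def __init__(self, budgets):
        self.budgets = budgets              # per-feature USD budgets
        self.spend = defaultdict(float)     # per-feature running spend

    def record(self, feature, input_tokens, output_tokens):
        """Attribute one request's cost to a feature; return the cost."""
        cost = (input_tokens / 1000) * PRICE_PER_1K["input"] \
             + (output_tokens / 1000) * PRICE_PER_1K["output"]
        self.spend[feature] += cost
        return cost

    def over_budget(self, feature):
        return self.spend[feature] > self.budgets.get(feature, float("inf"))

tracker = CostTracker({"summarize": 0.05})
tracker.record("summarize", input_tokens=10_000, output_tokens=2_000)
# This single request already exceeds the feature's budget, which is the
# signal to alert, downgrade the model, or block further calls.
```

The enforcement decision (alert, degrade, or block) is a product choice; the prerequisite is that every request carries a feature tag, or the invoice stays unexplainable.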
Structured Output and Tool Use
Getting reliable, machine-readable output from an LLM requires more than asking nicely. Structured output and tool use turn a text generator into a component your application can depend on.
Streaming and Async Tool Workflows
Streaming gives users tokens as they arrive instead of waiting for the full response. Async tools let long-running operations run in the background. Both change how you wire together models and tools, and both have sharp edges that aren't obvious until you're in production.
Prompting for RAG
Retrieved chunks are only as useful as the instructions you give the model for using them. The grounding instruction, context format, citation pattern, and no-answer path are what turn a retrieval result into a reliable, trustworthy answer.
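A minimal template showing those four pieces in one place: a grounding instruction, a numbered context format, a citation pattern, and an explicit no-answer path. The wording is illustrative, not a canonical template, so tune it against your own evals.

```python
# Illustrative grounding template (wording is an example, not a standard).
RAG_TEMPLATE = """Answer using ONLY the sources below.
Cite each claim as [n], where n is the source number.
If the sources do not contain the answer, say "I don't know."

Sources:
{context}

Question: {question}"""

def build_rag_prompt(chunks, question):
    """Number the retrieved chunks and slot them into the template."""
    context = "\n".join(f"[{i}] {c}" for i, c in enumerate(chunks, start=1))
    return RAG_TEMPLATE.format(context=context, question=question)

prompt = build_rag_prompt(
    ["Refunds are issued within 14 days.", "Shipping takes 3-5 days."],
    "How long do refunds take?",
)
```

The no-answer path is the piece teams most often omit, and its absence is what turns a retrieval miss into a confident hallucination.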
Guardrails Architecture
Guardrails are controls on inputs, outputs, or both: classifiers, validators, and policy checks that run independently of the model. Designing a guardrails architecture means choosing which controls to apply, how to layer them for coverage and performance, and how to calibrate them so false positives do not kill legitimate use.
Human-in-the-Loop
Human oversight is not a bolt-on safety feature: it is an architectural primitive that determines what an agent is permitted to do autonomously and what requires a human decision. This module covers the design of approval gates, interrupt points, confidence escalation, and audit trails that make human oversight practical at scale.
CI/CD Eval Gates
Learn to build automated eval gates that block deployments when prompt changes, model upgrades, or RAG index updates regress quality, before the regressions reach users.
Context and Memory Management
LLMs are stateless: they have no memory between calls. Every form of 'memory' in an AI application is something your code explicitly puts into the context window. Understanding how to manage that window is the core engineering skill behind every reliable AI system.
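A sketch of the core move: since the model remembers nothing, your code decides on every call what goes back into the window, typically pinning the system instruction and evicting the oldest turns until the conversation fits a budget. Lengths are counted in characters here for simplicity; a real implementation counts tokens with the model's tokenizer.

```python
def fit_to_budget(system, turns, budget):
    """Keep the system instruction pinned; evict oldest turns until the
    total size fits the budget (sizes in characters for this sketch)."""
    kept = list(turns)

    def total():
        return len(system) + sum(len(t["content"]) for t in kept)

    while kept and total() > budget:
        kept.pop(0)  # evict the oldest turn first
    return [{"role": "system", "content": system}] + kept

turns = [{"role": "user", "content": "a" * 100},
         {"role": "assistant", "content": "b" * 100},
         {"role": "user", "content": "c" * 100}]
window = fit_to_budget("Be concise.", turns, budget=250)
# Only the most recent turns survive; the system instruction always does.
```

Everything the application calls "memory", whether a summary, retrieved notes, or a scratchpad, is some variant of this: code choosing what re-enters the window.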
Security Boundaries
Tools give models real capabilities, which means tool-using systems inherit the security risks of real software plus some new ones specific to AI. Prompt injection, over-privileged tools, and undelimited external content are the three failure modes that show up first. This module covers the boundaries that need to exist.
Evaluating RAG Systems
A fluent, well-formatted answer based on the wrong chunk is a failure, but it reads like a success. RAG evaluation requires two independent measurement tracks: retrieval quality and generation quality. Conflating them hides the real failure mode.
Supply Chain Security
The AI supply chain (base model, fine-tuning data, adapters, Python packages, and API keys) has more attack surfaces than teams typically consider. A .pkl file is executable code. An unverified model weight can contain backdoors. This module covers the controls that keep your AI system trustworthy from training data to production inference.
Production Agent Systems
An agent that works in a demo fails in production the first time it crashes mid-task, gets retried with a duplicate side effect, or loses its state to a process restart. This module covers the durability semantics that separate toy agents from production systems.
Production Monitoring & Drift Detection
Learn to detect quality regressions, distribution shifts, and cost anomalies in live LLM systems before users report them, using metrics, statistical process control, and a sample-and-judge pipeline.
Evaluating LLM Systems
LLM outputs are probabilistic and hard to unit-test. Building a systematic evaluation practice, before you ship, and continuously in production, is what separates AI features that stay reliable from ones that silently degrade.
Production Operations
A tool-using system has more moving parts than a simple prompt-response loop, and more things that can go wrong. This module covers the observability, cost management, and resilience patterns that keep tool integrations reliable after launch.
Advanced RAG Patterns
Basic RAG fails when queries are vague, answers span multiple documents, or context evolves across a conversation. Four patterns (multi-query retrieval, HyDE, contextual retrieval, and small-to-big) each fix a specific retrieval failure mode. Know which failure you have before reaching for a pattern.
Regulatory Landscape
The regulatory environment for AI is moving quickly. The EU AI Act introduced risk tiers and mandatory requirements. GDPR has always applied to automated decision-making. The US has the NIST AI RMF. This module maps the landscape for a B2B SaaS product using LLMs: what you likely need to document, what you need to avoid, and where you need legal counsel.
Agent Evaluation
Evaluating an agent is fundamentally different from evaluating a model. The question is not just 'was the answer correct?' but 'did the agent take the right path to get there, and would it hold up under different conditions?' This module covers offline trajectory evaluation and online production monitoring: the two distinct disciplines that together keep agent quality measurable.
Red-teaming & Adversarial Evaluation
Learn to systematically discover failure modes in LLM systems before attackers do: how to run a red-team session, categorize findings, and convert every confirmed vulnerability into a permanent regression test.
Safety and Guardrails
Safety in AI systems is not a single feature: it is a layered architecture. Understanding what the model handles automatically, what you must build, and where the gaps are is essential before shipping anything user-facing.
Testing and Reliability
Tool-using systems are hard to test because the interesting behavior emerges from the interaction between the model and the tools, not from either alone. This module covers the testing strategy that catches real failures: schema drift, unexpected model behavior, and integration regressions.
Production RAG Checklist
A RAG prototype that works on your test documents is not a production system. This capstone synthesises the full RAG track into a checklist: the gaps that consistently cause RAG failures after launch, and the order to address them.
Incident Response for AI Systems
An AI incident is not a software incident: it involves model misbehaviour, safety violations, or data leakage, each with distinct root causes and remediation paths. This module covers detection, containment, investigation, and post-mortem structure for AI-specific incidents, and the one logging investment that makes all of it possible.
Prototype to Production Checklist
A prototype that works in a demo is not a production system. This capstone synthesises every Foundations concept into a practical checklist: the gaps teams consistently miss when shipping their first AI feature.
Multimodal AI
Modern AI models don't just read text: they see images, hear audio, and process video. This module explains how multimodal inputs change the context engineering problem and when vision is the right tool versus cheaper alternatives like OCR.