SRE / DevOps Path
You own the infrastructure. This path covers everything that matters when AI moves to production: VRAM budgets, serving architectures, cost management, supply chain security, and the observability stack that catches problems before they page you at 3am.
What you'll come away with
- How to run LLMs locally and in production: VRAM budgets, quantization, vLLM serving
- The full observability stack: traces, costs, evals, and CI gates
- Supply chain security for AI: model weights, plugins, and dependency attacks
- Gateway patterns, caching layers, and latency engineering for p95/p99 SLOs
Your curriculum
What is an Agent
An agent is not a smarter chatbot: it is a different execution model. This module defines what makes something agentic, maps the spectrum from single call to autonomous agent, and gives you the decision matrix to know which approach fits your problem.
What Makes LLM Evaluation Hard
Learn why LLM eval is structurally different from traditional ML testing, what the three axes of eval design are, and how to build a mental model for the rest of the track.
What is an LLM?
Large Language Models are stateless text-transformation functions: they take text in and return text out, with no memory between calls. Understanding this one fact shapes every architectural decision you'll make with AI.
Hosting Options
Choosing where to run your model determines your cost structure, latency floor, and operational burden: understanding the tradeoffs between API inference, self-hosted, and cloud-managed endpoints lets you pick the right option for each workload rather than defaulting to whatever is easiest to start with.
How Vision-Language Models Work
A vision-language model (VLM) combines a visual encoder with a language model: images are converted to token-like embeddings and fed directly into the same context window as text. Understanding this architecture explains why images cost more tokens than they appear to, and why resolution and tiling choices matter in production.
Protocol Landscape and MCP
Before a model can call a tool, both sides have to agree on the contract: what the tool is called, what arguments it accepts, and what it returns. This module maps the protocol landscape and shows where MCP fits.
What is RAG and Why
LLMs know a lot, but they don't know your data. Retrieval-Augmented Generation is the pattern that fixes this: not by training the model on your data, but by finding the relevant pieces at query time and handing them directly to the model.
The AI Threat Landscape
Every LLM application has a multi-layer attack surface: model, context, tools, memory, and outputs. Understanding what attackers want and what they can do is the prerequisite to building defences that actually hold. This module maps the threat landscape and establishes why defence in depth is not optional.
Memory and State
Memory is what separates a stateless chatbot from an agent that can work across sessions and build on past experience. This module covers the four memory types, how to manage the lifecycle of each, and the anti-patterns that cause agents to accumulate stale, conflicting, or poisoned state.
Building an Eval Dataset
Learn to treat eval datasets as engineering artifacts: how to seed them, label them, version them, and keep them representative of real production traffic.
How Prompts Work
A prompt is not a question: it's a structured program. Understanding its anatomy (system instruction, conversation history, user message) lets you communicate intent reliably and debug output failures systematically.
Quantization & Compression
Quantization reduces the memory and compute cost of running a model by storing its weights in lower precision: understanding the tradeoffs between FP16, INT8, and INT4 and the methods used to get there lets you serve larger models on smaller hardware without silently breaking quality.
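The memory side of that tradeoff is plain arithmetic. A minimal sketch, assuming a hypothetical 70B-parameter model; this counts weights only, while real serving also needs VRAM for KV cache and activations:

```python
# Rough memory footprint of model weights at different precisions.
# Assumed numbers: a hypothetical 70B-parameter model.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """GB needed to store the weights alone (1 GB = 1e9 bytes)."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70e9  # assumed model size
for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gb(params, bits):.0f} GB")
# FP16: 140 GB, INT8: 70 GB, INT4: 35 GB
```

The halving from FP16 to INT8 to INT4 is why quantization is usually the first lever pulled when a model does not fit the available hardware.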
Working with Images in Production
Sending an image to a VLM is trivial; building a production image pipeline that handles validation, preprocessing, output parsing, and failure modes is not. This module covers the full ingestion pipeline from receipt to parsed output, with emphasis on the silent failure modes that catch teams by surprise.
Tool Schema Design
The schema is not documentation: it is the instruction the model reads to decide whether to call your tool and what to pass. A bad schema causes wrong tool selections, invalid arguments, and hallucinated parameter values. This module covers what separates a production schema from a prototype one.
Embeddings and Vector Search
Semantic search, finding text by meaning rather than keywords, is the engine inside most RAG systems. Understanding how embeddings work and how vector databases store and query them is the foundation you need to build reliable retrieval.
Prompt Injection
Prompt injection is the most prevalent attack class in LLM applications. It takes two forms: direct injection from user input, and indirect injection through retrieved documents or tool results. Both exploit the same root cause: the model cannot distinguish instructions from data when they share the same channel.
Planning and Decomposition
Complex tasks fail when handed to an agent as a single goal. Planning is the process of decomposing a goal into executable steps: deciding what to do, in what order, and when to revise the plan based on what actually happens.
Automated Evaluation Methods
Master the spectrum of automated eval techniques, from exact match and string overlap through semantic similarity and LLM-as-judge, and learn which method to apply for which task.
Models and Model Selection
Not every task needs the most capable model. Understanding the capability-cost-latency tradeoff lets you pick the right model for each job, and avoid paying frontier prices for work a smaller model handles just as well.
Inference Serving
Inference servers are not just web servers that happen to call a model: they implement specific memory management and scheduling algorithms that determine whether your GPU serves 5 requests per second or 50; understanding KV cache, PagedAttention, and continuous batching separates the teams who can scale from the teams who can't.
Audio and Speech AI
The audio AI stack spans automatic speech recognition (ASR), text-to-speech (TTS), and the orchestration layer that connects them to language models. This module covers the key components, their production metrics, and the voice AI pipeline pattern that powers real-time conversational applications.
Tool Execution Patterns
A single tool call is easy. Production tool use involves chains of calls, parallel execution, shared state, and the ever-present risk of runaway loops. This module covers the patterns that make multi-step tool execution reliable.
Chunking and Indexing
You can't embed a whole document: you split it into pieces first. How you split determines what you can retrieve. The wrong chunking strategy is one of the most common reasons RAG systems fail to find the right answer even when the information clearly exists.
Jailbreaking and Policy Bypass
Jailbreaking is the attempt to get a model to produce output that its alignment training or system prompt prohibits. No defence is permanent: the arms race between jailbreak techniques and countermeasures is ongoing. This module covers the attack taxonomy and the multi-layer defences that reduce, but never eliminate, the risk.
Multi-Agent Patterns
A single agent hits limits: context windows fill, specialisation is hard, and long tasks become fragile. Multi-agent architectures solve this by distributing work, but they introduce coordination costs, trust boundaries, and new failure modes. This module covers the patterns that work in production.
Tracing & Structured Logging
Learn to instrument LLM systems with structured traces that make debugging and performance analysis practical: what to log, how to structure it, and how to avoid PII liability.
Hallucinations and Model Reliability
LLMs generate plausible text, not verified truth. Understanding why models hallucinate, and how to architect around it, is the single most important reliability concern in production AI systems.
Batching & Throughput
Throughput and latency are in direct tension in LLM serving: understanding how batching works, why continuous batching is the production default, and how to separate throughput benchmarks from latency benchmarks prevents the common mistake of optimizing one while silently destroying the other.
Multimodal Agents
Multimodal agents extend the standard agent loop with perception across images and audio, and with actions that produce visual or spoken output. This module covers GUI agents, vision as a tool call, multimodal memory, and the specific failure modes that multimodal perception introduces into agent systems.
Real API Integration
Wrapping a real API as a tool means handling all the things the happy path ignores: auth token expiry, rate limits, flaky networks, non-idempotent operations, and paginated results. This module covers the mechanics of building tool integrations that survive production.
Retrieval Quality: Dense, Sparse, and Hybrid
Semantic search is powerful but not always the best retrieval method. Keyword search finds exact matches that embeddings miss. Re-ranking re-scores candidates with a slower but more accurate model. Understanding when to use each, and how to combine them, is what separates reliable RAG from fragile RAG.
Data Privacy and PII
LLM systems create new PII leakage vectors that traditional data protection controls do not cover: model memorisation, cross-user context leakage, and RAG pipelines that pull in customer records without scrubbing. This module covers detection, scrubbing, retention, and the vendor agreements that govern what happens to your data.
Agent Failure Modes
Agents fail in ways that are qualitatively different from single API calls: errors compound, loops consume unbounded resources, and failures can be invisible until they cause damage. This module catalogues the failure modes and the structural mitigations for each.
Cost Attribution & Token Budgets
Learn to track, attribute, and control LLM API costs before the invoice surprises you: per-request tagging, per-feature aggregation, token budget enforcement, and anomaly alerting.
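The core of such a ledger can be sketched in a few lines. The per-1K-token prices and the feature name below are invented placeholders, not real vendor rates:

```python
# A minimal per-request cost attribution sketch: tag each LLM call with a
# feature name, accumulate token counts, and price them at assumed rates.

from collections import defaultdict

PRICE_PER_1K = {"input": 0.003, "output": 0.015}  # assumed USD rates

class CostLedger:
    def __init__(self):
        self.usage = defaultdict(lambda: {"input": 0, "output": 0})

    def record(self, feature: str, input_tokens: int, output_tokens: int):
        self.usage[feature]["input"] += input_tokens
        self.usage[feature]["output"] += output_tokens

    def cost(self, feature: str) -> float:
        u = self.usage[feature]
        return (u["input"] * PRICE_PER_1K["input"]
                + u["output"] * PRICE_PER_1K["output"]) / 1000

ledger = CostLedger()
ledger.record("summarize", input_tokens=1200, output_tokens=300)
ledger.record("summarize", input_tokens=800, output_tokens=200)
print(f"summarize: ${ledger.cost('summarize'):.4f}")
```

In production the same idea runs through your tracing pipeline rather than an in-memory dict, but the tagging discipline is identical: no call without a feature label.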
Structured Output and Tool Use
Getting reliable, machine-readable output from an LLM requires more than asking nicely. Structured output and tool use turn a text generator into a component your application can depend on.
Latency Optimization
LLM latency has three distinct components (TTFT, TBT, and E2E), and different use cases require optimizing different ones; knowing which techniques reduce which component, and when prompt caching defeats itself, prevents wasted effort and avoids the most common serving regressions.
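All three components fall out of the raw timestamps of a streamed response. A sketch on synthetic timings; the 400 ms first-token delay and 30 ms inter-token gap are invented example numbers:

```python
# Decompose streaming latency into TTFT (time to first token),
# TBT (mean time between subsequent tokens), and E2E (end-to-end).

def latency_components(request_start, token_times):
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    tbt = sum(gaps) / len(gaps) if gaps else 0.0
    e2e = token_times[-1] - request_start
    return ttft, tbt, e2e

# synthetic trace: first token after 400 ms, then one token every 30 ms
times = [0.4 + 0.03 * i for i in range(50)]
ttft, tbt, e2e = latency_components(0.0, times)
print(f"TTFT={ttft*1000:.0f}ms TBT={tbt*1000:.1f}ms E2E={e2e*1000:.0f}ms")
```

A chat UI lives or dies on TTFT and TBT; a batch summarization job only cares about E2E, which is why the two workloads justify different serving configurations.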
Multimodal Safety
Images and audio introduce attack surfaces that text-only safety systems do not cover: injected instructions inside images, adversarial visual inputs, deepfakes, and PII embedded in non-text modalities. This module covers the threat model for multimodal inputs and the defensive patterns that close the gaps.
Streaming and Async Tool Workflows
Streaming gives users tokens as they arrive instead of waiting for the full response. Async tools let long-running operations run in the background. Both change how you wire together models and tools, and both have sharp edges that aren't obvious until you're in production.
Prompting for RAG
Retrieved chunks are only as useful as the instructions you give the model for using them. The grounding instruction, context format, citation pattern, and no-answer path are what turn a retrieval result into a reliable, trustworthy answer.
Guardrails Architecture
Guardrails are controls on inputs, outputs, or both: classifiers, validators, and policy checks that run independently of the model. Designing a guardrails architecture means choosing which controls to apply, how to layer them for coverage and performance, and how to calibrate them so false positives do not kill legitimate use.
Human-in-the-Loop
Human oversight is not a bolt-on safety feature: it is an architectural primitive that determines what an agent is permitted to do autonomously and what requires a human decision. This module covers the design of approval gates, interrupt points, confidence escalation, and audit trails that make human oversight practical at scale.
CI/CD Eval Gates
Learn to build automated eval gates that block deployments when prompt changes, model upgrades, or RAG index updates regress quality, before the regression reaches users.
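A gate of this kind reduces to a comparison against a stored baseline. A minimal sketch with invented judge scores, threshold, and baseline; a real gate would load these from CI artifacts:

```python
# A minimal eval gate: compare the pass rate of a scored eval run against
# a stored baseline and block the deploy on regression beyond a tolerance.

def eval_gate(scores, baseline_pass_rate, threshold=0.7, tolerance=0.02):
    """Return a shell-style exit code: 0 = pass, 1 = block the deploy."""
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    if pass_rate < baseline_pass_rate - tolerance:
        print(f"BLOCK: pass rate {pass_rate:.0%} vs baseline {baseline_pass_rate:.0%}")
        return 1
    print(f"OK: pass rate {pass_rate:.0%}")
    return 0

scores = [0.9, 0.8, 0.6, 0.95, 0.85]  # synthetic per-example judge scores
exit_code = eval_gate(scores, baseline_pass_rate=0.80)
```

Wiring `exit_code` into `sys.exit()` is all a CI runner needs; the hard parts this module covers are keeping the eval set representative and the baseline honest.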
Context and Memory Management
LLMs are stateless: they have no memory between calls. Every form of 'memory' in an AI application is something your code explicitly puts into the context window. Understanding how to manage that window is the core engineering skill behind every reliable AI system.
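The consequence is easy to see in code: every call must carry whatever history your application chose to keep. A sketch where `call_model` is a stand-in for any chat-completion API, not a real client:

```python
# The model keeps no state between calls, so conversational "memory" is
# your code re-sending prior turns in every request.

def call_model(messages):
    # placeholder: a real implementation would send `messages` to an LLM API
    return f"(reply to: {messages[-1]['content']})"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the full history goes over the wire every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
print(chat("What is my name?"))  # answerable only because the first turn was re-sent
```

Everything this module covers, from summarization to eviction policies, is about deciding what goes into that `history` list as it grows toward the context limit.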
Hardware Selection
Choosing the wrong GPU tier, or sizing VRAM based on model weights alone, is the most common hardware mistake in LLM deployment; knowing the VRAM math, the GPU tiers, and when to use multi-GPU parallelism lets you right-size hardware before you need it rather than after an OOM in production.
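The VRAM math itself fits in a few lines: weights, plus KV cache, plus a flat overhead buffer for activations and framework allocations. The architecture numbers below are assumptions loosely in the range of a Llama-style 8B model, not measured figures:

```python
# Back-of-envelope serving VRAM estimate. Sizing on weights alone is the
# mistake this math exists to prevent.

def serving_vram_gb(params_billion, weight_bits, layers, kv_heads, head_dim,
                    tokens_in_flight, kv_bytes=2, overhead_gb=4.0):
    weights_gb = params_billion * weight_bits / 8  # 1B params = 1 GB per 8 bits
    # 2x for keys and values, per layer, per concurrent token, FP16 by default
    kv_gb = 2 * layers * kv_heads * head_dim * tokens_in_flight * kv_bytes / 1e9
    return weights_gb + kv_gb + overhead_gb

# e.g. an assumed 8B model in FP16 with 64K tokens of concurrent context
est = serving_vram_gb(8, 16, layers=32, kv_heads=8, head_dim=128,
                      tokens_in_flight=65536)
print(f"~{est:.1f} GB")  # weights alone would have suggested only 16 GB
```

Under these assumptions the estimate lands near 29 GB: the KV cache and overhead push an "it fits on a 24 GB card" model well past 24 GB, which is exactly the OOM-in-production scenario above.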
Multimodal Evaluation
Evaluating multimodal AI is harder than evaluating text: there is no ground truth for 'describe this image', visual hallucinations are invisible without the source image, and labelling image datasets is expensive. This module covers evaluation approaches by task type, reference datasets, hallucination detection, and how to build a practical multimodal eval pipeline.
Security Boundaries
Tools give models real capabilities, which means tool-using systems inherit the security risks of real software plus some new ones specific to AI. Prompt injection, over-privileged tools, and undelimited external content are the three failure modes that show up first. This module covers the boundaries that need to exist.
Evaluating RAG Systems
A fluent, well-formatted answer based on the wrong chunk is a failure, but it reads like a success. RAG evaluation requires two independent measurement tracks: retrieval quality and generation quality. Conflating them hides the real failure mode.
Supply Chain Security
The AI supply chain (base model, fine-tuning data, adapters, Python packages, and API keys) has more attack surfaces than teams typically consider. A .pkl file is executable code. An unverified model weight can contain backdoors. This module covers the controls that keep your AI system trustworthy from training data to production inference.
Production Agent Systems
An agent that works in a demo fails in production the first time it crashes mid-task, gets retried with a duplicate side effect, or loses its state to a process restart. This module covers the durability semantics that separate toy agents from production systems.
Production Monitoring & Drift Detection
Learn to detect quality regressions, distribution shifts, and cost anomalies in live LLM systems before users report them, using metrics, statistical process control, and a sample-and-judge pipeline.
Evaluating LLM Systems
LLM outputs are probabilistic and hard to unit-test. Building a systematic evaluation practice, both before you ship and continuously in production, is what separates AI features that stay reliable from ones that silently degrade.
Containerization & Deployment
Containerizing an LLM inference server is fundamentally different from containerizing a web service; GPU passthrough, multi-stage weight management, and slow pod startup require different patterns for health checks, rolling deployments, and Kubernetes configuration that most teams learn by breaking production first.
Serving Multimodal Models
Serving a vision-language model is not the same as serving a text-only LLM: the vision encoder adds VRAM, image preprocessing adds latency, and variable image sizes complicate batching. This module covers the serving stack for VLMs and audio models, including the VRAM estimation mistakes that cause production OOMs.
Production Operations
A tool-using system has more moving parts than a simple prompt-response loop, and more things that can go wrong. This module covers the observability, cost management, and resilience patterns that keep tool integrations reliable after launch.
Advanced RAG Patterns
Basic RAG fails when queries are vague, answers span multiple documents, or context evolves across a conversation. Four patterns (multi-query retrieval, HyDE, contextual retrieval, and small-to-big) each fix a specific retrieval failure mode. Know which failure you have before reaching for a pattern.
Regulatory Landscape
The regulatory environment for AI is moving quickly. The EU AI Act introduced risk tiers and mandatory requirements. GDPR has always applied to automated decision-making. The US has the NIST AI RMF. This module maps the landscape for a B2B SaaS product using LLMs: what you likely need to document, what you need to avoid, and where you need legal counsel.
Agent Evaluation
Evaluating an agent is fundamentally different from evaluating a model. The question is not just 'was the answer correct?' but 'did the agent take the right path to get there, and would it hold up under different conditions?' This module covers offline trajectory evaluation and online production monitoring: the two distinct disciplines that together keep agent quality measurable.
Red-teaming & Adversarial Evaluation
Learn to systematically discover failure modes in LLM systems before attackers do: how to run a red-team session, categorize findings, and convert every confirmed vulnerability into a permanent regression test.
Safety and Guardrails
Safety in AI systems is not a single feature: it is a layered architecture. Understanding what the model handles automatically, what you must build, and where the gaps are is essential before shipping anything user-facing.
Scaling & Cost Management
LLM serving costs accumulate differently from typical web services: GPU-hours are expensive, autoscaling on CPU metrics is wrong, and scale-to-zero creates cold-start latency that makes it unsuitable for interactive workloads. Knowing the right signals to scale on and how to build the cost math keeps infrastructure expenses from becoming a surprise.
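The starting point of that cost math is converting a GPU's hourly price into a per-token rate. The hourly rate, throughput, and utilization below are invented example numbers, not quotes:

```python
# Translate GPU pricing into cost per million output tokens: the basic
# arithmetic behind an LLM serving cost model.

def cost_per_million_tokens(gpu_hourly_usd, tokens_per_second, utilization=0.6):
    # effective tokens produced per hour at the given average utilization
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hourly_usd / tokens_per_hour * 1e6

# e.g. an assumed $4/hr GPU sustaining 1000 tok/s at 60% utilization
print(f"${cost_per_million_tokens(4.0, 1000):.2f} per 1M tokens")
```

The utilization term is the one teams forget: a GPU billed 24/7 but busy 10% of the time costs six times more per token than the same GPU at 60%, which is why batching and scaling signals dominate the bill.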
The Multimodal Frontier
Multimodal AI is advancing faster than any other part of the field: native multimodality, video understanding, and real-time audio-visual interaction are moving from research to production on a timescale of months. This module covers where the field is heading and, more importantly, what durable knowledge to invest in when specific capabilities become outdated within a year.
Testing and Reliability
Tool-using systems are hard to test because the interesting behavior emerges from the interaction between the model and the tools, not from either alone. This module covers the testing strategy that catches real failures: schema drift, unexpected model behavior, and integration regressions.
Production RAG Checklist
A RAG prototype that works on your test documents is not a production system. This capstone synthesises the full RAG track into a checklist: the gaps that consistently cause RAG failures after launch, and the order to address them.
Incident Response for AI Systems
An AI incident is not a software incident: it involves model misbehaviour, safety violations, or data leakage, each with distinct root causes and remediation paths. This module covers detection, containment, investigation, and post-mortem structure for AI-specific incidents, and the one logging investment that makes all of it possible.
Cost Management
LLM costs are non-linear and easy to underestimate, especially in multi-agent systems where one orchestration call spawns dozens of sub-calls. This module covers token economics, prompt caching, cost ceilings with graceful degradation, and the attribution infrastructure needed to run LLM workloads sustainably.
Prototype to Production Checklist
A prototype that works in a demo is not a production system. This capstone synthesises every Foundations concept into a practical checklist: the gaps teams consistently miss when shipping their first AI feature.
Fine-Tuning: When & Why
Fine-tuning is one of several ways to adapt a model to a task, and often the most expensive, slowest, and most fragile of them. This module is a decision framework: when to fine-tune, when not to, and what you give up either way.
Multimodal AI
Modern AI models don't just read text: they see images, hear audio, and process video. This module explains how multimodal inputs change the context engineering problem and when vision is the right tool versus cheaper alternatives like OCR.
LoRA, QLoRA & PEFT
LoRA lets you adapt a large model by training only a tiny fraction of its parameters, keeping the base weights frozen and adding small trainable matrices on top. This module covers the mechanics, the quantised variant QLoRA, and what production adapter serving actually looks like.
Multimodal Evaluation & Observability
Text evals don't transfer. When your pipeline processes images, audio, or video, each modality introduces failure modes that a text judge cannot see. This module covers ground-truth dataset design, judge strategies, and observability instrumentation for non-text pipelines.
Synthetic Data for Training & Distillation
You can use a large model to generate training data for a smaller one, but the pipeline has failure modes that are hard to detect and expensive to fix once they're baked into weights. This module covers how to build a synthetic data pipeline that doesn't train failure modes into your model.
Reliability Patterns for Agent Systems
Agent failures are often silent, partial, and hard to replay. This module applies distributed-systems reliability patterns (idempotency, compensation transactions, circuit breakers, and graceful degradation) to the specific failure modes agents introduce.
Sovereign & Air-Gapped AI Architecture
Some data cannot leave your environment. Air-gapped AI deployments run the full stack (embeddings, vector database, and inference) entirely on-premise with no internet access. The architecture is straightforward; the hard parts are model provenance, patch strategy, and keeping the system from going stale.
Caching & Latency Engineering
LLM inference is slow and expensive. Four independent caching layers can cut both, but each operates at a different point in the stack with different invalidation needs. Applying the wrong cache to the wrong layer is worse than no cache at all.
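The simplest of those layers is an exact-match response cache with a TTL; a semantic cache replaces the hash key with an embedding-similarity lookup. A minimal sketch, with illustrative names and TTL:

```python
# Exact-match response cache: hash the full (model, prompt) pair and serve
# a stored response while it is fresh. Any change to the prompt is a miss.

import hashlib
import time

class ResponseCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (timestamp, response)

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model, prompt):
        entry = self.store.get(self._key(model, prompt))
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # miss or expired

    def put(self, model, prompt, response):
        self.store[self._key(model, prompt)] = (time.time(), response)

cache = ResponseCache()
cache.put("m1", "What is RAG?", "Retrieval-Augmented Generation ...")
print(cache.get("m1", "What is RAG?"))   # hit
print(cache.get("m1", "what is rag?"))   # miss: exact match only
```

The second lookup missing on a one-character case difference is the point: exact-match caches only pay off for repeated identical traffic, which is why the other layers in this module exist.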
Data Engineering for AI Systems
Most AI failures blamed on the model are actually data quality failures upstream. This module covers corpus lifecycle management, data contracts for AI pipelines, and the ingestion patterns that determine whether your RAG system retrieves signal or noise.