SRE / DevOps Path
You own the infrastructure. This path focuses on what you care about: deployment topology, transport security, observability hooks, trust boundaries, and running MCP servers reliably in production.
What you'll come away with
- ✓ How MCP server processes fit into your existing infra (systemd, containers, K3s)
- ✓ stdio vs HTTP+SSE: when to use each and the security implications
- ✓ Trust boundaries: what an AI agent can actually do vs what the host allows
- ✓ OAuth for remote MCP servers: the full auth flow
- ✓ Running open-weight models locally with Ollama + LiteLLM as a drop-in API proxy
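To make the last bullet concrete: LiteLLM exposes an OpenAI-compatible endpoint (`POST /v1/chat/completions`) in front of an Ollama-served model, so existing clients need only a base-URL change. A minimal sketch, assuming a proxy on `localhost:4000` and a model routed as `ollama/llama3` (both are illustrative values, not defaults you should rely on):

```python
import json
import urllib.request

# Assumed local LiteLLM proxy address; adjust to your deployment.
LITELLM_URL = "http://localhost:4000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the proxy."""
    payload = {
        "model": model,  # e.g. "ollama/llama3", as routed by your LiteLLM config
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LITELLM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("ollama/llama3", "ping")
# urllib.request.urlopen(req)  # uncomment once the proxy is running
```

Because the surface is the standard OpenAI wire format, the same request works against a hosted API by swapping the URL and adding an auth header.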
Your curriculum
Architecture Overview - For SRE / DevOps
The runtime architecture of an AI agent: process model, trust boundaries, transport layers, and where each component fits in your infra.
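The stdio process model described here can be sketched in a few lines: the host spawns the MCP server as a child process and speaks newline-delimited JSON-RPC over its stdin/stdout, so the pipe itself is the trust boundary. The server command below is hypothetical:

```python
import json
import subprocess

def spawn_stdio_server(cmd: list[str]) -> subprocess.Popen:
    """Launch an MCP server as a child process with piped stdio."""
    return subprocess.Popen(
        cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )

def send(proc: subprocess.Popen, msg: dict) -> None:
    """Write one newline-delimited JSON-RPC message to the server."""
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()

# proc = spawn_stdio_server(["my-mcp-server"])  # hypothetical command
# send(proc, {"jsonrpc": "2.0", "id": 1, "method": "initialize"})
```

From an ops standpoint this is just process supervision: the same unit that would run any other child process (systemd service, container entrypoint) runs the server, and the host reads responses line by line from its stdout.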
What is an LLM - For SRE / DevOps
LLMs from an infrastructure perspective: resource requirements, inference costs, latency characteristics, and what you need to know to run them reliably.
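For capacity planning, the dominant memory term is parameter count times bytes per parameter; KV cache and activations add on top. A back-of-envelope sketch (ballpark heuristic, not vendor sizing guidance):

```python
# Rough VRAM needed just for model weights, ignoring KV cache/activations.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    # 1B params at 1 byte/param is ~1 GB
    return params_billion * bytes_per_param

# A 7B model at fp16 (2 B/param), 8-bit (1 B/param), 4-bit (0.5 B/param):
for bpp in (2.0, 1.0, 0.5):
    print(f"{bpp} B/param -> {weights_gb(7, bpp):.1f} GB")
# 14.0, 7.0, and 3.5 GB respectively
```

This is why quantization is usually the first lever when fitting an open-weight model onto a single GPU.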
Tools: A Deep Dive
What tools actually are, how the request/execute/return loop works, parallel calls, error handling, and how to write tool definitions that the model uses correctly.
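The request/execute/return loop mentioned above can be sketched as a dispatch table plus an error path: the model emits a tool call, the host executes it, and the result (including failures) goes back into the conversation rather than crashing the agent. Tool names and payloads here are hypothetical:

```python
import json

# Hypothetical tool registry; real implementations would live elsewhere.
TOOLS = {
    "get_uptime": lambda args: {"uptime_s": 123456},
}

def execute_tool_call(name: str, arguments: str) -> dict:
    """Run one tool call and package the outcome as a tool message."""
    try:
        args = json.loads(arguments)
        result = TOOLS[name](args)
        return {"role": "tool", "name": name, "content": json.dumps(result)}
    except KeyError:
        return {"role": "tool", "name": name,
                "content": f"error: unknown tool {name}"}
    except Exception as exc:
        # Errors are returned to the model as text, not raised to the host.
        return {"role": "tool", "name": name, "content": f"error: {exc}"}

print(execute_tool_call("get_uptime", "{}"))
```

Returning errors as content is the key design choice: it lets the model read the failure and retry or apologize, instead of the whole loop aborting.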
MCP: Model Context Protocol
The open protocol that standardizes how AI agents connect to external systems. JSON-RPC internals, transports, the three primitives, and how to build a custom server.
Skills - For SRE / DevOps
Skills from an infrastructure perspective: file layout, context budget, performance implications, and managing skill files across teams.
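A crude context-budget check can be a one-liner over the skills directory. This sketch assumes a layout of one folder per skill with a `SKILL.md` entry point, and uses a rough ~4-characters-per-token heuristic rather than a real tokenizer:

```python
import tempfile
from pathlib import Path

def skill_token_estimate(root: Path) -> dict[str, int]:
    """Rough token count per skill, assuming one SKILL.md per skill folder."""
    return {
        p.parent.name: len(p.read_text()) // 4  # ~4 chars/token heuristic
        for p in root.rglob("SKILL.md")
    }

# Demo with a throwaway skill file:
root = Path(tempfile.mkdtemp())
(root / "deploy").mkdir()
(root / "deploy" / "SKILL.md").write_text("x" * 400)
print(skill_token_estimate(root))  # {'deploy': 100}
```

Running something like this in CI is an easy way to catch a skill file that has quietly grown large enough to crowd out the rest of the context window.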