🤖 AI Explained

SRE / DevOps Path

You own the infrastructure. This path focuses on what you care about: deployment topology, transport security, observability hooks, trust boundaries, and running MCP servers reliably in production.

Production-first · Security model · Infra topology


Your curriculum

1. Architecture Overview — For SRE / DevOps

The runtime architecture of an AI agent — process model, trust boundaries, transport layers, and where each component fits in your infra.
2. What is an LLM — For SRE / DevOps

LLMs from an infrastructure perspective — resource requirements, inference costs, latency characteristics, and what you need to know to run them reliably.
3. Tools: A Deep Dive

What tools actually are, how the request/execute/return loop works, parallel calls, error handling, and how to write tool definitions that the model uses correctly.
4. MCP: Model Context Protocol

The open protocol that standardizes how AI agents connect to external systems. JSON-RPC internals, transports, the three primitives, and how to build a custom server.
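To give a flavor of the JSON-RPC internals that lesson digs into, here is what a tool invocation looks like on the wire. The `tools/call` method name comes from the MCP spec; the tool `restart_service` and its arguments are made-up examples.

```python
import json

# Sketch of an MCP tool invocation: a JSON-RPC 2.0 request object.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "restart_service",              # hypothetical tool
        "arguments": {"service": "nginx"},
    },
}
# On the stdio transport this is sent as one newline-delimited JSON line.
wire = json.dumps(request)
```

The server replies with a result (or error) object carrying the same `id`, which is how responses are matched to requests over an async transport.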
5. Skills — For SRE / DevOps

Skills from an infrastructure perspective — file layout, context budget, performance implications, and managing skill files across teams.
Start here →