🤖 AI Explained
🌱

Curious Beginner Path

Zero coding required. This path gives you an accurate mental model of AI: what it actually is, what it can and can't do, and how to think about it clearly in a world full of hype. By the end you'll be able to read AI news intelligently and hold informed conversations with technical teams.

No code · Plain English · Big picture

What you'll come away with

Your curriculum

1.1

What is an LLM?

Large Language Models are stateless text-transformation functions: they take text in and return text out, with no memory between calls. Understanding this one fact shapes every architectural decision you'll make with AI.
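No code is needed to follow this path, but the stateless idea is easy to see in a sketch. Here `chat()` is a hypothetical stand-in for a real LLM API call, not any real SDK: it answers only from the messages it is handed, so "memory" across calls exists only if your code re-sends the history.

```python
def chat(messages: list[dict]) -> str:
    """Pretend LLM: a pure function of its input, keeping nothing between calls."""
    user_turns = [m["content"] for m in messages if m["role"] == "user"]
    if any("My name is Ada" in text for text in user_turns):
        return "Your name is Ada."
    return "I don't know your name."

# Call 1: the model is told a fact.
first = chat([{"role": "user", "content": "My name is Ada."}])

# Call 2: a fresh call with no history -- the "memory" is gone.
second = chat([{"role": "user", "content": "What is my name?"}])

# Call 3: memory is just your code re-sending the earlier turns.
third = chat([
    {"role": "user", "content": "My name is Ada."},
    {"role": "user", "content": "What is my name?"},
])
```

The second call fails and the third succeeds for one reason only: the context your code chose to send.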

8.1

AI ROI: What Actually Gets Measured

Most AI pilots show impressive returns that evaporate at scale. Understanding why, and how to measure value correctly, is the difference between AI investments that compound and ones that quietly fail.

1.2

How Prompts Work

A prompt is not a question: it's a structured program. Understanding its anatomy (system instruction, conversation history, user message) lets you communicate intent reliably and debug output failures systematically.
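That three-part anatomy can be sketched as plain data. The `build_prompt` helper below is invented for illustration; real chat APIs use a similar but provider-specific message format.

```python
def build_prompt(system: str, history: list[dict], user_message: str) -> list[dict]:
    """Assemble the three layers of a prompt into one message list."""
    return (
        [{"role": "system", "content": system}]        # standing instructions
        + history                                      # prior turns, replayed verbatim
        + [{"role": "user", "content": user_message}]  # the new request
    )

prompt = build_prompt(
    system="You are a concise support assistant.",
    history=[
        {"role": "user", "content": "My order is late."},
        {"role": "assistant", "content": "Sorry to hear that. What's the order number?"},
    ],
    user_message="It's order 4521.",
)
```

Seen this way, debugging a bad output starts with asking which of the three layers sent the wrong signal.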

8.2

Buy vs Build vs Fine-tune

Every AI capability involves a make-or-buy decision, but the options are more nuanced than they look. This module gives you a decision framework and total cost of ownership model for each path.

1.3

Models and Model Selection

Not every task needs the most capable model. Understanding the capability-cost-latency tradeoff lets you pick the right model for each job, and avoid paying frontier prices for work a smaller model handles just as well.

8.3

Where AI Creates Durable Advantage

Most AI features can be replicated by any competitor with API access. Durable advantage comes from the layer underneath: proprietary data, deep workflow integration, and feedback loops that compound over time.

1.4

Hallucinations and Model Reliability

LLMs generate plausible text, not verified truth. Understanding why models hallucinate, and how to architect around it, is the single most important reliability concern in production AI systems.

8.4

Team Structure and AI Capability

How you organise your AI function determines what it can ship. This module maps the tradeoffs between centralised and federated models, defines the roles that actually matter, and gives you a maturity test for assessing whether your AI team can reach production.

1.5

Structured Output and Tool Use

Getting reliable, machine-readable output from an LLM requires more than asking nicely. Structured output and tool use turn a text generator into a component your application can depend on.
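As a sketch of the pattern (the `parse_ticket` helper and field names are invented for illustration): ask the model for JSON, then validate before your application trusts it.

```python
import json

# Illustrative schema for a support-ticket classifier.
REQUIRED_FIELDS = {"category": str, "priority": str, "summary": str}

def parse_ticket(raw: str) -> dict:
    """Parse model output and enforce the schema, raising on any mismatch."""
    data = json.loads(raw)  # fails loudly on non-JSON chatter
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

good = parse_ticket('{"category": "billing", "priority": "high", "summary": "Double charge"}')

rejected = False
try:
    parse_ticket("Sure! Here is the JSON you asked for:")  # chatty non-JSON reply
except ValueError:  # json.JSONDecodeError is a ValueError subclass
    rejected = True
```

The validation step is what turns "usually returns JSON" into a contract your code can depend on.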

8.5

Managing AI Risk at the Org Level

AI systems introduce risk categories that traditional software governance does not cover. This module maps the five risk categories, explains how to set risk appetite, and distinguishes real risk management from risk theatre.

1.6

Context and Memory Management

LLMs are stateless: they have no memory between calls. Every form of 'memory' in an AI application is something your code explicitly puts into the context window. Understanding how to manage that window is the core engineering skill behind every reliable AI system.

8.6

Communicating AI to Stakeholders

The gap between what engineers know about AI systems and what stakeholders need to hear is where AI projects lose trust. This module gives you the frameworks to communicate outcomes, risk, cost, and failures in language that lands.

1.7

Evaluating LLM Systems

LLM outputs are probabilistic and hard to unit-test. Building a systematic evaluation practice, both before you ship and continuously in production, is what separates AI features that stay reliable from ones that silently degrade.
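A minimal version of that practice fits in a few lines. `run_eval` and the graded cases below are illustrative stand-ins: run a fixed set of cases through the system, score the outputs, and track the number over time.

```python
def run_eval(answer_fn, cases: list[tuple[str, str]]) -> float:
    """Return the fraction of cases whose output contains the expected text."""
    passed = sum(1 for question, expected in cases if expected in answer_fn(question))
    return passed / len(cases)

# Graded cases: (question, text the answer must contain).
cases = [
    ("What is 2+2?", "4"),
    ("Capital of France?", "Paris"),
]

# A toy answer function standing in for the real AI system under test.
score = run_eval(lambda q: "4" if "2+2" in q else "The capital is Paris.", cases)
```

Real evaluation suites add graded rubrics and production sampling, but the loop is the same: fixed inputs, scored outputs, a number you can watch.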

8.7

AI Procurement and Vendor Evaluation

Choosing an AI vendor on benchmark performance alone is one of the most reliable ways to end up with the wrong vendor. This module gives you a complete evaluation framework covering quality, pricing, data handling, SLAs, and exit planning.

1.8

Safety and Guardrails

Safety in AI systems is not a single feature: it is a layered architecture. Understanding what the model handles automatically, what you must build, and where the gaps are is essential before shipping anything user-facing.

8.8

Building an AI-Ready Data Foundation

Most AI ambitions stall not on model capability but on data readiness. This module gives you a practical checklist to assess whether your data is ready for AI, and explains why data infrastructure investment returns more than model investment for most organisations.

1.9

Prototype to Production Checklist

A prototype that works in a demo is not a production system. This capstone synthesises every Foundations concept into a practical checklist: the gaps teams consistently miss when shipping their first AI feature.

Start here →