0.4 — The AI Stack
┌─────────────────────────────────┐
│ YOU (the orchestrator)          │ ← Your intent, your judgment
├─────────────────────────────────┤
│ HARNESS (Claude Code, Cursor)   │ ← The tool you talk through
├─────────────────────────────────┤
│ CONTEXT (files, memory, docs)   │ ← What the AI can see right now
├─────────────────────────────────┤
│ MODEL (Opus, Sonnet, GPT)       │ ← The AI brain doing the work
├─────────────────────────────────┤
│ PROVIDER (Anthropic, OpenAI)    │ ← The company running the model
└─────────────────────────────────┘

Layer by Layer
Harness — the tool between you and the AI model. Claude Code, Cursor, ChatGPT’s web interface — these are all harnesses. The harness shapes how you interact with the model.
Why use a terminal harness like Claude Code instead of the Claude app? The app is a conversation window. Claude Code lives inside your project — it can read your files, edit your code, run commands, and build things directly. The app is talking about work. Claude Code is doing work.
Context — everything the AI can “see” when answering you. Files you’ve opened, conversation history, instructions you’ve given. The AI only knows what’s in its context.
Context Window — the size limit on how much the AI can see at once. Like a desk — bigger desk, more documents open simultaneously.
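To make the desk analogy concrete, here is a minimal sketch of how a harness might keep a conversation within a fixed token budget by dropping the oldest messages first. All names here (`trim_to_window`, the word-count tokenizer) are illustrative, not any real harness API:

```python
# Sketch: fit messages into a token budget, newest first.
# `count_tokens` is a stand-in for a real tokenizer.
def trim_to_window(messages, max_tokens, count_tokens):
    kept = []
    total = 0
    # Walk newest -> oldest, keeping messages until the budget is spent.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    # Restore chronological order for what survived.
    return list(reversed(kept))

msgs = ["hello there", "how are you", "fine thanks"]
# With a 4-"token" budget (counting words), only the newest message fits.
print(trim_to_window(msgs, 4, lambda m: len(m.split())))  # ['fine thanks']
```

This is why long conversations eventually "forget" their beginnings: once the window is full, something old has to go.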
Token — the unit AI models use to measure text: roughly 3/4 of an English word. Context windows are measured in tokens.
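That 3/4 ratio gives a quick back-of-the-envelope conversion. The sketch below is a rough heuristic only — real tokenizers vary by model, and the function name is made up:

```python
# Rule of thumb: 1 token ~ 3/4 of a word, so tokens ~ words / 0.75.
def estimate_tokens(text: str) -> int:
    """Rough token estimate; real tokenizers differ per model."""
    return round(len(text.split()) / 0.75)

# 9 words comes out to roughly 12 tokens.
print(estimate_tokens("The AI only knows what is in its context"))
```

By the same arithmetic, a 150,000-word codebase would need on the order of 200,000 tokens of context — which is why context windows fill up fast.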
Provider — the company that built and runs the model. Anthropic makes Claude, OpenAI makes GPT, Google makes Gemini.
This stack expands in Phase 3, when you add tools, memory, and instructions as additional layers.
Next: 0.5 — Vocabulary | Phase overview: Phase 0