3.2 — The Agent Stack
In Phase 0 you learned the five-layer AI stack: You, Harness, Context, Model, Provider. That’s the foundation.
When you move from talking to AI to building with agents, three new layers appear between the harness and the context:
```
┌──────────────────────────────────┐
│ YOU (the orchestrator)           │ ← Intent, judgment, guardrails
├──────────────────────────────────┤
│ HARNESS (Claude Code, etc.)      │ ← How you interact with the agent
├──────────────────────────────────┤
│ INSTRUCTIONS (CLAUDE.md, etc.)   │ ← Persistent rules the agent follows
├──────────────────────────────────┤
│ TOOLS (MCP, bash, file access)   │ ← What the agent can DO
├──────────────────────────────────┤
│ MEMORY (files, RAG, Brain)       │ ← What the agent REMEMBERS
├──────────────────────────────────┤
│ CONTEXT (conversation + files)   │ ← What the agent SEES right now
├──────────────────────────────────┤
│ MODEL (Opus, Sonnet, etc.)       │ ← The reasoning engine
├──────────────────────────────────┤
│ PROVIDER (Anthropic, etc.)       │ ← The company running it
└──────────────────────────────────┘
```
What Changed and Why
Instructions layer — previously, all guidance to the AI came from your prompts in the moment. Agents get persistent instructions via files like CLAUDE.md that shape every session automatically. The agent follows these rules without you repeating them. Covered in depth in 3.3 — Harness Engineering.
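As a concrete illustration, a minimal CLAUDE.md might contain persistent rules like these (the contents are hypothetical; the agent reads the file at the start of every session):

```markdown
# CLAUDE.md (hypothetical example)

## Conventions
- Use TypeScript strict mode for all new files.
- Run the test suite before declaring a task complete.

## Guardrails
- Never commit directly to main; open a branch instead.
- Ask before deleting any file.
```

Because these rules live in a file rather than in your prompts, they apply automatically, session after session.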
Tools layer — what the agent is allowed to do. Without tools, the AI can only generate text. With tools it can read files, run commands, call APIs, query databases. Every tool expands what the agent can accomplish — and expands what it can get wrong. MCP (covered in 3.4) is how tools get connected.
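The mechanics can be sketched as a tool registry plus a dispatcher: the model proposes a tool call, the harness executes it, and the result flows back into context. This is an illustrative sketch, not the actual MCP protocol; the function names and registry shape are assumptions.

```python
import subprocess

def run_bash(command: str) -> str:
    """A 'bash' tool: execute a shell command and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout or result.stderr

def read_file(path: str) -> str:
    """A 'file access' tool: return a file's contents."""
    with open(path) as f:
        return f.read()

# The tool registry. Every entry expands what the agent can accomplish,
# and also what it can get wrong, so each tool is a place for guardrails.
TOOLS = {"bash": run_bash, "read_file": read_file}

def dispatch(tool_name: str, argument: str) -> str:
    """Execute a tool the model asked for; its return value re-enters context."""
    if tool_name not in TOOLS:
        return f"Unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)
```

MCP standardizes the part this sketch hand-waves: how tools are described to the model and how calls and results are exchanged.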
Memory layer — AI has no persistent memory by default. Every new session starts blank. The memory layer is everything you’ve built to bridge that gap: files the AI reads at startup, RAG systems that retrieve relevant knowledge on demand, memory directories that carry facts across conversations. Covered in 3.5 — Memory & RAG.
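The simplest form of this bridge can be sketched as loading memory files at startup and prepending them to the context. The directory layout and function names here are assumptions for illustration, not a real harness's API:

```python
from pathlib import Path

def load_memory(memory_dir: str) -> str:
    """Concatenate every markdown file in a memory directory."""
    sections = []
    for path in sorted(Path(memory_dir).glob("*.md")):
        sections.append(f"## {path.stem}\n{path.read_text()}")
    return "\n\n".join(sections)

def build_context(memory_dir: str, user_prompt: str) -> str:
    """A new session starts blank; 'memory' is whatever we inject here."""
    memory = load_memory(memory_dir)
    return f"{memory}\n\n{user_prompt}" if memory else user_prompt
```

RAG systems refine the same idea: instead of injecting everything, they retrieve only the pieces relevant to the current prompt.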
The Key Insight
Each layer you understand gives you a lever. If the agent is doing the wrong thing:
- Wrong behavior → fix the Instructions layer (CLAUDE.md)
- Can’t do something → add a Tool (MCP server)
- Forgets context → fix the Memory layer
- Still wrong → look at Context (what it can actually see)
- Fundamentally limited → consider a different Model
Next: 3.3 — Harness Engineering | Phase overview: Phase 3