1.2 — Your First Conversation With AI
The Big Mental Shift
You are not “learning to code.”
You are learning to describe what you want clearly enough that an AI builds it correctly.
This distinction matters. Traditional coding means memorizing syntax, rules, and commands. What you’re doing is different — you’re communicating intent with precision. Your domain knowledge, your judgment, and your ability to articulate what “done” looks like are the skills. The AI handles the translation into code.
Core Prompting Principles (Phase 1 Depth)
These four principles will carry you through your first build. They’re not hacks — they’re fundamentals you’ll use at every phase.
1. Be specific about the outcome, not the method
The AI doesn’t need you to know how to build something. It needs to know what the result should look like.
- Weak: “Make me a website”
- Strong: “Build a single-page personal website with my name, a short bio, and links to my LinkedIn and email. Use simple, clean design. No frameworks — just HTML and CSS.”
The strong version tells the AI what success looks like. The weak version leaves every decision to the AI, and you won’t like all the decisions it makes.
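To make the contrast concrete, here is a minimal sketch of the kind of page the strong prompt describes. The name, bio text, and link targets are placeholders, and an AI’s actual output will differ:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Jane Doe</title>
  <style>
    /* "Simple, clean design" from the prompt: one readable column, no frameworks */
    body { max-width: 40rem; margin: 3rem auto; font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>Jane Doe</h1>
  <p>Short bio goes here.</p>
  <!-- The prompt asked for exactly two links -->
  <a href="https://www.linkedin.com/in/janedoe">LinkedIn</a>
  <a href="mailto:jane@example.com">Email</a>
</body>
</html>
```

Notice that every decision the strong prompt pinned down — single page, name, bio, two specific links, plain HTML and CSS — appears directly in the result. Everything the weak prompt left unsaid would have been the AI’s guess.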
2. Give context about who you are
The AI defaults to writing for experienced developers. Tell it otherwise.
- “I’m brand new to this. Explain each file you create and why.”
This one sentence changes the entire character of the output. The AI will label things, explain choices, and avoid jargon — because you told it to.
3. Describe what done looks like
Give the AI a definition of completion before it starts.
- “When this is complete, I should be able to open index.html in my browser and see the page.”
This prevents the AI from stopping halfway, or delivering something that is technically correct but not actually usable.
4. Ask it to explain, not just do
You learn nothing from watching output appear. Ask for the reasoning.
- “Build this AND explain what each file does in plain language.”
Understanding the why behind what the AI created is how you build judgment for future projects. It also lets you catch errors — if the explanation doesn’t make sense, the code might not either.
Terms Introduced
| Term | Definition |
|---|---|
| Prompt | The instruction you give to an AI |
| System prompt | Background instructions that shape how the AI behaves across an entire conversation |
| Hallucination | When AI confidently generates something that’s wrong — states false facts, invents files that don’t exist, or fabricates code that won’t work |
| Iteration | Refining results through multiple rounds of feedback. Rarely perfect on the first try — that’s normal, not failure. |
What Hallucination Means in Practice
Hallucination isn’t a bug that gets fixed eventually — it’s a structural property of how AI models work. They predict what should come next based on patterns, and sometimes that prediction is confidently wrong.
This is why the principle “describe what done looks like” matters so much. When you can test the output yourself (“can I open this in a browser and see the page?”), you catch hallucinations before they cause problems.
The antidote: verify everything. Run the code. Open the file. Check that it actually works. Don’t assume the AI was right because it answered confidently.
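In practice, “verify everything” can be a couple of commands. This is a hypothetical check, assuming the AI claims it created index.html — the printf line stands in for the AI’s output so the example runs on its own:

```shell
# Stand-in for the file the AI claims it wrote (replace with the AI's real output)
printf '<html><body><h1>Jane Doe</h1></body></html>\n' > index.html

# 1. Does the file actually exist, or was it hallucinated?
test -f index.html && echo "index.html exists"

# 2. Does it contain real HTML, or just an apology saved with a .html name?
grep -qi '<html' index.html && echo "looks like an HTML page"

# 3. Then open it in your browser and look at it yourself
#    (macOS: open index.html; Linux: xdg-open index.html; Windows: start index.html)
```

Each command tests one specific claim the AI made, which is exactly the habit the “describe what done looks like” principle builds.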
Goes deeper in Phase 2: Prompt Engineering Deep Dive