Build with AI / Data & Prompts
Part 6 · 10 min read

Prompting is programming: master the skill everyone needs

There's a persistent myth that prompting is just "talking to AI."

That if you type something naturally, in plain English, and the AI understands you, then you're already doing it right. That prompting is the easy part — the thing anyone can do without thinking about it.

This myth is costing people enormous amounts of time and producing mediocre results.

Prompting is not talking. Prompting is instructing. And like all forms of precise instruction — writing a legal contract, briefing a designer, writing a test case for a developer — the quality of the instruction determines the quality of the outcome.

The difference between a casual user of AI and a power user isn't the tool they use or the model they pick. It's the quality of their prompts. And prompt quality is a learnable skill — with a small set of patterns that, once you know them, change everything.

This post gives you those patterns.


Why prompting is programming

When a developer writes code, they're giving a machine precise, unambiguous instructions. Leave something vague and the code breaks, or worse, does something you didn't intend.

Prompting works the same way — with one important difference. Code is executed literally. A prompt is interpreted. The AI fills in gaps based on patterns — and those patterns may or may not match your intent.

This means:

  • Precision matters — the more specific you are, the less the AI has to guess
  • Structure matters — how you organize a prompt affects how the AI processes it
  • Context matters — what you tell the AI before the task shapes how it approaches the task
  • Examples matter — showing is more reliable than telling

Think of prompting as writing instructions for an extraordinarily capable intern who has read everything ever written, but knows nothing specific about you, your context, or what "good" looks like for your situation. Every gap you leave is a gap they'll fill with their best guess.


The 5 core prompt patterns

These are not tricks or hacks. They're structural patterns that work across almost every use case — writing, analysis, coding, research, decision support. Learn these five and you've covered 80% of what makes a prompt effective.


Pattern 1: Role + task + context

The single most impactful pattern. Tell the AI who to be, what to do, and what it needs to know.

Structure:

You are [role with specific expertise].
Your task is [specific action verb + deliverable].
Context: [relevant background the AI needs to do this well]

Without the pattern:

"Write a LinkedIn post about our new product launch."

With the pattern:

"You are a B2B SaaS copywriter who specializes in writing for technical founders. Your task is to write a LinkedIn post announcing our product launch that drives sign-ups for a free trial. Context: Our product is an AI-powered requirements tool called Clearly (clearlyreqs.com). Our audience is product managers and startup founders who are frustrated with how long it takes to go from idea to spec. The tone should be direct and confident — no hype, no buzzwords. Our typical customer is skeptical of marketing claims and responds to specificity."

The second prompt will produce something radically more useful. Not because it's longer — but because it eliminates three layers of guessing.
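If you find yourself writing role + task + context prompts often, the pattern is mechanical enough to capture in a tiny template function. A minimal Python sketch; `build_prompt` and its three fields are my own naming for illustration, not part of any AI library:

```python
def build_prompt(role: str, task: str, context: str) -> str:
    """Assemble a Role + Task + Context prompt from its three parts."""
    return (
        f"You are {role}.\n"
        f"Your task is {task}.\n"
        f"Context: {context}"
    )

prompt = build_prompt(
    role="a B2B SaaS copywriter who specializes in writing for technical founders",
    task="to write a LinkedIn post announcing our product launch that drives free-trial sign-ups",
    context="Our audience is product managers frustrated with how long it takes to go from idea to spec.",
)
print(prompt)
```

The point of the template is not the code itself but the discipline: it makes it impossible to send a prompt that is missing one of the three parts.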


Pattern 2: Format specification

AI will default to a format that seems reasonable based on your prompt. That default is often wrong for your purpose. Specify the format explicitly.

Weak:

"Summarize this meeting transcript."

Strong:

"Summarize this meeting transcript. Format your response as:

  • Decisions made (bullet list, 1 sentence each)
  • Action items (bullet list, owner and deadline if mentioned)
  • Open questions (bullet list, unresolved items that need follow-up)

Keep the entire summary under 200 words."

Format specification is especially important when:

  • The output needs to go somewhere specific (an email, a slide, a form)
  • You need to scan the output quickly
  • The AI tends to be verbose when you need brevity (or vice versa)
  • You're chaining AI outputs — the output of one prompt feeds the input of another

If you find yourself editing AI outputs to change their structure every time, that's a signal to add format specification to your prompt.


Pattern 3: Constraints and anti-patterns

Tell the AI what not to do. This is one of the most underused patterns — and one of the highest-leverage.

Every output you've received from AI that felt slightly off, slightly generic, or slightly wrong usually violated a constraint you had but didn't state.

Examples of constraints:

  • "Do not use bullet points — write in flowing prose."
  • "Do not mention competitors by name."
  • "Do not use the words 'innovative' or 'cutting-edge'."
  • "Do not assume the reader has technical background."
  • "Do not start any sentence with 'I'."
  • "Do not exceed 150 words."

Examples of anti-patterns (what to avoid doing):

  • "Avoid generic advice that could apply to any company — everything should be specific to our situation."
  • "Avoid restating the question before answering it."
  • "Avoid hedging language like 'it might be worth considering' — be direct and specific."

Constraints feel like restrictions, but they're actually a form of precision. They close the gap between "technically correct" and "actually what I wanted."


Pattern 4: Chain of thought (ask it to reason first)

For complex tasks — analysis, strategy, debugging, decisions — asking the AI to reason through a problem before giving you an answer dramatically improves output quality.

This works because it forces the model to build up to a conclusion rather than pattern-matching to a plausible-sounding answer. Think of it as asking someone to "show their work."

Without CoT:

"Should we expand to the Japanese market this year?"

With CoT:

"I'm considering expanding our B2B SaaS product to the Japanese market this year. Before giving me a recommendation, reason through the following:

  1. What factors typically determine success or failure for US SaaS companies entering Japan?
  2. What information would you need to make a confident recommendation?
  3. What are the strongest arguments for expanding now?
  4. What are the strongest arguments for waiting?

Then give me your recommendation based on that reasoning."

The second prompt will consistently produce more nuanced, more reliable output — because the model has to engage with the problem's complexity before it's allowed to give you an answer.

Use CoT for any decision with meaningful stakes, any analysis where the reasoning matters as much as the conclusion, debugging, and any time you've gotten a confident-sounding answer that turned out to be wrong.
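The CoT scaffold is regular enough to template as well: a question, a numbered list of reasoning steps, and a closing instruction to answer only after the reasoning. A hedged sketch, with `chain_of_thought` as a hypothetical helper name of my own:

```python
def chain_of_thought(question: str, steps: list[str]) -> str:
    """Wrap a question so the model must reason through numbered
    steps before it is allowed to give a recommendation."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"{question}\n\n"
        "Before giving me a recommendation, reason through the following:\n\n"
        f"{numbered}\n\n"
        "Then give me your recommendation based on that reasoning."
    )

cot = chain_of_thought(
    "I'm considering expanding our B2B SaaS product to the Japanese market this year.",
    [
        "What factors typically determine success or failure for US SaaS companies entering Japan?",
        "What information would you need to make a confident recommendation?",
        "What are the strongest arguments for expanding now?",
        "What are the strongest arguments for waiting?",
    ],
)
```

Because the steps are an ordinary list, you can reuse the same scaffold across decisions by swapping only the question and the steps.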


Pattern 5: Examples (few-shot prompting)

The fastest way to close the gap between what you imagine and what AI produces: show it an example of what you want.

This is called "few-shot prompting" in technical literature, but the concept is simple: instead of describing what good looks like in words, demonstrate it.

Without examples:

"Write a product update email in our company's voice."

With examples:

"Write a product update email in our company's voice.

Here are two examples of emails that match our tone:

[Example 1 — paste a real email you've sent]

[Example 2 — paste another real email]

Notice: we keep sentences short, we use 'you' not 'users', we lead with the benefit not the feature, and we never use exclamation points."

The example does more work than a paragraph of description. It shows rhythm, vocabulary, structure, and tone — things that are very hard to describe but immediately clear when demonstrated.

Keep a small library of your best AI outputs. The next time you need something similar, use a previous output as a few-shot example rather than describing the style from scratch.
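A library of past outputs is easy to wire into a helper that attaches examples to any instruction. A sketch under my own assumptions; `few_shot_prompt` and its delimiter style are illustrative, not a standard API:

```python
def few_shot_prompt(instruction: str, examples: list[str], style_notes: str = "") -> str:
    """Attach worked examples (and optional style notes) to an instruction
    so the model can imitate the demonstrated style."""
    parts = [instruction, "", "Here are examples of the style I want:"]
    for i, example in enumerate(examples, start=1):
        parts += ["", f"--- Example {i} ---", example]
    if style_notes:
        parts += ["", f"Notice: {style_notes}"]
    return "\n".join(parts)

email_prompt = few_shot_prompt(
    "Write a product update email in our company's voice.",
    ["[paste a real email you've sent]", "[paste another real email]"],
    style_notes="we keep sentences short, we use 'you' not 'users', and we never use exclamation points.",
)
```

Clear delimiters between examples matter: they tell the model where one demonstration ends and the next begins.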


Putting it together: the full prompt

Here's what a well-constructed prompt looks like when all five patterns are applied:

Task: Write a cold outreach email to a potential partner.

You are a business development writer specializing in Korean-US technology partnerships.

Your task is to write a cold outreach email to a director at a US smart city technology company, 
introducing Seattle Partners and proposing a partnership to bring Korean smart city startups 
to the US market.

Context:
- Seattle Partners is a US market entry firm focused on Korean technology companies
- We've worked with 20+ Korean startups across smart city, smart building, and IoT sectors
- The recipient is a Director of Business Development at a mid-size US smart city tech company
- They likely receive many generic partnership emails; ours needs to earn their attention immediately

Format:
- Subject line (under 8 words)
- Email body (under 150 words)
- Single clear call to action at the end

Constraints:
- Do not use the word "synergy" or "innovative"
- Do not be vague — mention at least one specific technology category we cover
- Do not start with "I hope this email finds you well" or any variation
- The tone should be direct and confident, not deferential

Here is an example of a previous outreach email that got a positive response:
[paste your example here]

This prompt will consistently produce something you can actually use — because it eliminates almost all of the AI's guesswork about who you are, what you want, and what good looks like.
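If you assemble prompts like this repeatedly, all five patterns can live in one reusable structure that renders to the layout shown above. A Python sketch; the `PromptSpec` class and its field names are my invention for illustration, not an established library:

```python
from dataclasses import dataclass, field


@dataclass
class PromptSpec:
    """One structure holding all five patterns: role + task + context,
    format rules, constraints, and few-shot examples."""
    role: str
    task: str
    context: list[str]
    format_rules: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    examples: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"You are {self.role}.", "", f"Your task is {self.task}."]
        lines += ["", "Context:"] + [f"- {item}" for item in self.context]
        if self.format_rules:
            lines += ["", "Format:"] + [f"- {item}" for item in self.format_rules]
        if self.constraints:
            lines += ["", "Constraints:"] + [f"- {item}" for item in self.constraints]
        for i, example in enumerate(self.examples, start=1):
            lines += ["", f"Example {i} (matches the style I want):", example]
        return "\n".join(lines)


spec = PromptSpec(
    role="a business development writer specializing in Korean-US technology partnerships",
    task="to write a cold outreach email proposing a partnership",
    context=["Seattle Partners is a US market entry firm focused on Korean technology companies"],
    format_rules=["Subject line (under 8 words)", "Email body (under 150 words)"],
    constraints=['Do not use the word "synergy" or "innovative"'],
)
full_prompt = spec.render()
```

The optional sections render only when filled in, so the same structure works for a quick one-liner and for a fully specified prompt like the outreach example.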


The iteration mindset

One more thing that separates casual users from power users: how they respond to imperfect outputs.

Casual users try one prompt, get something that's not quite right, and either accept it or start over from scratch.

Power users treat prompting as an iterative process. The first output reveals gaps. You tighten the prompt, add a constraint, give an example of what went wrong. The second output is better. You refine again. By the third or fourth iteration, you have something genuinely excellent — and more importantly, you have a prompt you can reuse.

The goal isn't to write a perfect prompt on the first try. The goal is to learn what your prompt was missing from each output, and add that precision to the next version.

Every prompt you refine is an asset. Save them. Build a library. That library is one of the most valuable things you can build for your workflow.
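One lightweight way to keep that library is a plain JSON file keyed by prompt name. A minimal sketch, assuming a local file and hypothetical helper names (`save_prompt` / `load_prompt` are my own, not a real tool):

```python
import json
import tempfile
from pathlib import Path


def save_prompt(library: Path, name: str, prompt: str) -> None:
    """Add or update a named prompt in a JSON library file."""
    entries = json.loads(library.read_text()) if library.exists() else {}
    entries[name] = prompt
    library.write_text(json.dumps(entries, indent=2))


def load_prompt(library: Path, name: str) -> str:
    """Retrieve a saved prompt by name."""
    return json.loads(library.read_text())[name]


# Demo: save a refined prompt, then load it back by name.
library = Path(tempfile.mkdtemp()) / "prompts.json"
save_prompt(library, "partner-outreach", "You are a business development writer...")
restored = load_prompt(library, "partner-outreach")
```

Anything that survives round-trips like this works: a notes app, a text file, a spreadsheet. The format matters far less than the habit of saving what worked.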


Key takeaways

  • The quality of your prompt determines the quality of your output — every time. It's a skill, not luck.
  • Five patterns cover 80% of what makes a prompt effective: role + task + context, format specification, constraints and anti-patterns, chain of thought, and examples.
  • Constraints are especially underused — telling AI what not to do closes the gap between technically correct and actually what you wanted.
  • Showing the AI what you want works better than telling it, so keep a library of your best outputs and use them as few-shot examples.
  • Every refined prompt you save is reusable. Over time, that library is one of the most productive things you can build.


Want the full framework?

This post covers the core patterns. The AI Development Guide by Jaehee Song goes deeper into advanced prompting for specific use cases — including how to prompt for agents, how to chain prompts in multi-step workflows, and how to adapt these patterns for different models and contexts.

📱 Apple Books ▶️ Google Play Books 🌐 All Platforms (Books2Read)


Next in the series: "AI Agents — Build a Mini Workforce Without Writing Code" — how to go beyond single prompts into multi-step autonomous workflows that work while you sleep.