Last updated: April 2026
This is the advanced companion to our beginner's guide to prompt engineering. If you haven't read that yet, start there: it covers the five building blocks (role, task, context, format, constraints) that everything in this guide builds on.
This guide covers the techniques that separate someone who uses AI occasionally from someone who gets consistently excellent results. These aren't tricks or hacks. They're structured approaches that work because they align with how large language models actually process information.
Chain-of-Thought Prompting
Chain-of-thought (CoT) is the single most impactful advanced technique. Instead of asking for an answer directly, you ask the AI to show its reasoning step by step.
Why does this work? AI models produce better answers when they "think out loud." The intermediate reasoning steps help the model avoid logical errors and catch nuances it would miss if jumping straight to a conclusion.
Without chain-of-thought:
Should I lease or buy a car?
You'll get a surface-level "it depends" answer.
With chain-of-thought:
I'm deciding whether to lease or buy a car. Think through this step by step:
- Calculate the total cost of leasing for 5 years (assume $350/month, $2,000 due at signing, mileage overage penalties at 15,000 miles/year)
- Calculate the total cost of buying (assume $30,000 car, $5,000 down, 6% interest rate for 60 months, estimate depreciation and resale value at 5 years)
- Compare the two totals
- Factor in: I like having a new car every 3 years, I don't want to deal with major repairs, and I might relocate to a city with good public transit in 3 years
- Now give me your recommendation with the math shown
The chain-of-thought version produces a detailed financial comparison with actual numbers, factors in your personal preferences, and gives a recommendation grounded in math rather than generalities.
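If you reuse the same chain-of-thought structure often, it's worth templating it. Here's a minimal Python sketch of that idea; the function name and wording are illustrative, not from any particular library:

```python
def with_chain_of_thought(question: str, steps: list[str]) -> str:
    """Build a chain-of-thought prompt: the question, explicit numbered
    reasoning steps, and a request to show the work before concluding."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"{question}\n"
        "Think through this step by step:\n"
        f"{numbered}\n"
        "Show your reasoning and the math before giving a recommendation."
    )

prompt = with_chain_of_thought(
    "Should I lease or buy a car?",
    [
        "Calculate the 5-year cost of leasing",
        "Calculate the 5-year cost of buying",
        "Compare the totals and factor in my preferences",
    ],
)
```

The payoff is consistency: every analytical question you ask gets the same step-by-step scaffolding without retyping it.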
Few-Shot Prompting
Few-shot prompting means giving the AI examples of what you want before asking it to produce something. Instead of describing the output format in words, you show it.
This is incredibly powerful for tasks where the style or format is hard to describe but easy to demonstrate.
Example 1: Product: Wool hiking socks Description: "Built for the trail, comfortable enough for the couch. Merino wool regulates temperature whether you're climbing switchbacks or binge-watching. Reinforced heel and toe because we know where socks die first. $18."
Example 2: Product: Canvas tote bag Description: "The bag that replaced three bags. Laptop sleeve inside, water bottle pocket outside, and enough room for the farmer's market haul you didn't plan on. Waxed canvas that ages like leather. $45."
Now write descriptions in the same style for:
- A stainless steel water bottle (32oz, insulated)
- A leather journal (A5, 200 pages)
- A wireless phone charger
The AI mirrors the tone (conversational, benefit-focused, specific), the structure (one-liner hook, key features woven into lifestyle context, price at the end), and the length. This is far more effective than saying "write it in a conversational, benefit-focused style": showing beats telling.
How many examples do you need?
- 1 example (one-shot): good for simple style matching
- 2-3 examples (few-shot): ideal for most tasks, shows the pattern clearly
- 4+ examples: rarely necessary and wastes context window
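When you run the same few-shot task repeatedly (say, a new batch of products every week), assembling the prompt programmatically keeps the examples and the new items in a consistent layout. A minimal sketch, with illustrative names:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_items: list[str]) -> str:
    """Assemble a few-shot prompt: numbered examples first, then the new
    items to write in the same style."""
    parts = []
    for i, (product, description) in enumerate(examples, 1):
        parts.append(f'Example {i}: Product: {product} Description: "{description}"')
    parts.append("Now write descriptions in the same style for:")
    parts.extend(f"- {item}" for item in new_items)
    return "\n".join(parts)

prompt = few_shot_prompt(
    [
        ("Wool hiking socks", "Built for the trail, comfortable enough for the couch."),
        ("Canvas tote bag", "The bag that replaced three bags."),
    ],
    ["A stainless steel water bottle (32oz, insulated)"],
)
```

Swapping in fresh examples is then a one-line change, which makes it easy to test whether two examples or three give better results for your task.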
Structured Output
When you need AI output that feeds into another system (a spreadsheet, a database, a script, or another AI prompt) you need structured output.
Analyze the customer review below and return only valid JSON matching this schema:

{
  "sentiment": "positive" | "negative" | "mixed",
  "rating_implied": 1-5,
  "product_mentioned": "string",
  "issues": ["string array of specific complaints"],
  "praise": ["string array of specific compliments"],
  "would_recommend": true | false | "unclear",
  "key_quote": "most representative sentence from the review"
}
Review: [paste review]
This output can be pasted directly into a spreadsheet, parsed by a script, or fed into another AI prompt for aggregation. The schema definition with types and allowed values keeps the output consistent across multiple reviews.
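If the output feeds a script, it's worth validating what comes back before using it, because models occasionally return values outside the schema. A minimal sketch using only the standard library (field names match the schema above; the function name is illustrative):

```python
import json

ALLOWED_SENTIMENT = {"positive", "negative", "mixed"}

def parse_review_analysis(raw: str) -> dict:
    """Parse the model's JSON output and sanity-check it against the schema."""
    data = json.loads(raw)
    assert data["sentiment"] in ALLOWED_SENTIMENT, "unexpected sentiment value"
    assert 1 <= data["rating_implied"] <= 5, "rating out of range"
    assert isinstance(data["issues"], list) and isinstance(data["praise"], list)
    assert data["would_recommend"] in (True, False, "unclear")
    return data

sample = """{
  "sentiment": "positive",
  "rating_implied": 4,
  "product_mentioned": "wool hiking socks",
  "issues": [],
  "praise": ["warm", "durable"],
  "would_recommend": true,
  "key_quote": "Best socks I've owned."
}"""
result = parse_review_analysis(sample)
```

A failed assertion tells you immediately which review needs a re-run, instead of letting one malformed row corrupt your spreadsheet.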
System Prompts and Persistent Context
System prompts are instructions that frame the entire conversation. They're different from regular prompts because they set behavioral rules that apply to every subsequent message.
In ChatGPT: Go to Settings → Personalization → Custom Instructions. What you put here applies to every conversation.
In Claude: Create a Project and add instructions. Every conversation within that project inherits those instructions.
Here's a well-designed system prompt for a writing-editor project:
Rules:
- Never add corporate jargon (leverage, synergy, utilize, facilitate)
- Prefer short sentences. If a sentence has more than 25 words, split it.
- Use active voice. Flag any passive voice and rewrite it.
- Don't soften my opinions. If I wrote something direct, keep it direct.
- When I say "make it shorter," cut 30% of the word count without removing key points.
- Format: return the edited version followed by a brief list of changes you made and why.
My writing style: direct, conversational, slightly irreverent. Think blog post, not academic paper.
Every conversation within this project starts with the AI knowing your preferences, your style, and your rules. No re-explaining.
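If you call a model through an API rather than the chat interface, the system prompt travels as a separate field alongside the conversation messages. The sketch below mirrors the common chat-API request shape; exact parameter names vary by provider, and the editor persona is adapted from the rules above:

```python
EDITOR_SYSTEM_PROMPT = """You edit my writing. Rules:
- Never add corporate jargon (leverage, synergy, utilize, facilitate)
- Prefer short sentences. If a sentence has more than 25 words, split it.
- Use active voice. Flag any passive voice and rewrite it.
My writing style: direct, conversational, slightly irreverent."""

def build_request(user_message: str) -> dict:
    """Attach the persistent system prompt to a single user message.
    The system prompt rides along with every request, so the rules
    apply to the whole conversation without re-explaining."""
    return {
        "system": EDITOR_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Tighten this paragraph: ...")
```

This is the API-level equivalent of a Project or Custom Instructions: the behavioral rules live in one place and every message inherits them.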
Meta-Prompting: Using AI to Write Prompts
One of the most underused advanced techniques: ask the AI to help you write a better prompt.
Here's what I've been asking: [paste your current prompt]
Here's what I got back: [paste or describe the unsatisfying output]
What's wrong with my prompt? Rewrite it to get better results. Explain what you changed and why.
This is remarkably effective. The AI identifies vague language, missing context, and structural issues in your prompt and fixes them. It's like having a prompt engineering tutor.
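The meta-prompt template above is stable enough to wrap in a helper, so fixing a failing prompt is always one function call away. A minimal sketch (the function name is illustrative):

```python
def meta_prompt(current_prompt: str, bad_output: str) -> str:
    """Wrap a failing prompt in a meta-prompt asking the model to
    diagnose and rewrite it, mirroring the template above."""
    return (
        f"Here's what I've been asking: {current_prompt}\n"
        f"Here's what I got back: {bad_output}\n"
        "What's wrong with my prompt? Rewrite it to get better results. "
        "Explain what you changed and why."
    )

fix_request = meta_prompt(
    "Write a product description for socks.",
    "A generic three-paragraph description with no price or hook.",
)
```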
Prompt Decomposition
Complex tasks produce worse results when crammed into a single prompt. Prompt decomposition breaks a complex task into sequential steps.
Instead of one massive prompt:
"Analyze our top competitors, compare their pricing and features to ours, and draft a product roadmap."
Decompose into a chain:
"List our five most direct competitors and summarize each one's positioning and pricing."
Review the competitors, correct any errors, then:
"Using the corrected list above, compare each competitor's pricing and key features to our product."
Verify the comparison, then:
Finally:
"Based on the verified comparison, draft a product roadmap that addresses our biggest gaps."
Each step produces a focused, verifiable output. You can catch errors early instead of getting a 2,000-word document where the wrong competitor analysis cascades into a flawed roadmap.
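The chain pattern is mechanical enough to automate: run each prompt, inspect the result, and feed the verified output into the next step. A minimal sketch; `ask` stands in for whatever function sends a prompt to your model, stubbed here since the real client depends on your provider:

```python
def run_chain(steps: list[str], ask) -> list[str]:
    """Run prompts sequentially, feeding each step's output into the next.
    `ask` is any callable that takes a prompt string and returns text."""
    context = ""
    outputs = []
    for step in steps:
        prompt = f"{context}\n\n{step}".strip()
        result = ask(prompt)
        outputs.append(result)   # inspect and correct each step here
        context = result         # verified output becomes the next step's context
    return outputs
```

Because each step returns before the next begins, you can catch a wrong competitor list at step 1 instead of discovering it buried in the final roadmap.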
Handling Hallucinations
AI confidently generates wrong information. Advanced users build verification into their prompts:
Question: [your question]
Answer the question, then:
- Distinguish between things you know and things you're inferring
- If you're not confident about a fact, say so. Don't guess.
- List any claims I should verify independently
Building a Prompt Library
The highest-leverage skill in prompt engineering isn't writing great prompts: it's saving and reusing them. Build a personal prompt library organized by use case.
Storage options:
- Simple: A note in Apple Notes, Notion, or Google Keep with folders by category
- Structured: A spreadsheet with columns for Name, Category, Prompt Text, Notes/Tips, and Last Used
- Advanced: A Notion database or Obsidian vault with tags, templates, and version history
Categories to start with:
- Writing (emails, posts, documents)
- Analysis (data, competitors, decisions)
- Learning (explain topics, create study materials)
- Work (meeting prep, project planning, communication)
- Personal (meal planning, travel, finances)
Every time you write a prompt that produces great results, save it. Every time you refine a prompt through iteration, save the final version. After a month, you'll have a personal toolkit that makes you dramatically faster.
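The "structured" spreadsheet option above maps naturally onto a small JSON file if you'd rather keep the library in version control. A minimal sketch using only the standard library; the filename and function name are hypothetical:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical filename

def save_prompt(name: str, category: str, text: str, notes: str = "") -> None:
    """Append a prompt entry to a JSON library, keyed by category,
    mirroring the Name / Category / Prompt Text / Notes columns above."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data.setdefault(category, {})[name] = {"prompt": text, "notes": notes}
    LIBRARY.write_text(json.dumps(data, indent=2))

save_prompt(
    "cold-email", "Writing",
    "Write a cold email to [role] at [company]...",
    "Works best with two few-shot examples",
)
```

A flat file like this is trivially searchable and easy to sync, and you can graduate to Notion or Obsidian later without losing anything.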
When to Use Which Model
Different AI models have different strengths. Matching the task to the model matters at the advanced level:
Claude: Best for: long document analysis, following complex instructions precisely, structured output, nuanced writing. Weakest at: real-time web search, image generation.
ChatGPT: Best for: conversational follow-ups, code generation, image generation (DALL-E), web browsing. Weakest at: following very complex multi-constraint instructions without drifting.
Gemini: Best for: Google ecosystem integration, multimodal tasks (image + text), research with web sources. Weakest at: creative writing, maintaining consistent persona.
For most advanced workflows, Claude or ChatGPT are the primary tools. Pick one as your default and learn its quirks deeply rather than switching between tools constantly.
Frequently Asked Questions
What's the most important advanced prompting technique?
Chain-of-thought prompting. Asking AI to reason step by step before answering produces dramatically better results for any task involving analysis, comparison, or problem-solving. It's simple to use (just add "think through this step by step") and works on every AI model.
How many examples do I need for few-shot prompting?
Two to three examples is the sweet spot. One example shows the format but might not convey the pattern clearly. Four or more is rarely necessary and wastes context window space that could be used for your actual task.
Should I use Claude or ChatGPT for advanced prompting?
Claude is better at following complex multi-step instructions and producing structured output consistently. ChatGPT is better at conversational workflows and code generation. For most advanced prompt engineering, both are capable: pick the one you know better and learn its specific behaviors.
How do I stop AI from making things up?
Add explicit instructions: "If you're not confident about a fact, say so. Don't guess. Distinguish between things you know and things you're inferring." Also decompose complex prompts into verifiable steps so you can catch errors early. AI hallucination decreases significantly with specific, narrow prompts versus broad, open-ended ones.
Is prompt engineering still relevant as AI models improve?
Yes, but the baseline is rising. Models are getting better at handling vague prompts, so basic prompting becomes less important. But advanced techniques (structured output, prompt chains, few-shot examples, and system prompts) remain valuable because they solve problems that better models don't eliminate: ambiguity in what you want, consistency across outputs, and integration with other tools and workflows.