Reasoning Techniques: CoT, ToT, and Self-Consistency
Master chain-of-thought, tree-of-thought, and self-consistency prompting to dramatically improve AI performance on complex reasoning, math, and logic tasks.
Basic prompting tells the AI what to do. Reasoning techniques tell it how to think. This distinction dramatically improves performance on complex problems — math, logic, analysis, planning, and multi-step tasks.
🔄 Quick Recall: In the previous lesson, you learned to structure prompts with XML tags, JSON schemas, and the COSTAR framework. Now you’ll add reasoning techniques that improve the AI’s thinking process — the quality of the computation, not just the format of the result.
Chain-of-Thought (CoT) Prompting
The simplest and most widely used reasoning technique. Instead of asking for a direct answer, you ask the AI to show its reasoning.
Zero-Shot CoT
Add one phrase to any prompt:
Without CoT: “What is 17 × 24?” → AI might answer “398” (wrong)
With CoT: “What is 17 × 24? Let’s think step by step.” → AI reasons:
- 17 × 24 = 17 × (20 + 4)
- 17 × 20 = 340
- 17 × 4 = 68
- 340 + 68 = 408 ✓
That’s zero-shot CoT — no examples needed, just the instruction to reason explicitly.
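In code, zero-shot CoT is just a prompt transformation. A minimal sketch (the function name `zero_shot_cot` is illustrative, not from any library):

```python
def zero_shot_cot(question: str) -> str:
    """Turn any question into a zero-shot CoT prompt by
    appending the standard trigger phrase."""
    return f"{question}\n\nLet's think step by step."

prompt = zero_shot_cot("What is 17 × 24?")
# Send `prompt` to your model of choice; the trigger phrase
# nudges it to emit intermediate reasoning before the answer.
```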
Few-Shot CoT
Provide examples that include reasoning chains:
<examples>
Question: A store has 45 apples. They sell 3/5 of them. How many are left?
Reasoning: 3/5 of 45 = 45 × 3/5 = 27 apples sold. 45 - 27 = 18 remain.
Answer: 18 apples
Question: A car travels 60 mph for 2.5 hours, then 40 mph for 1.5 hours.
Total distance?
Reasoning: Distance = speed × time. Leg 1: 60 × 2.5 = 150 miles.
Leg 2: 40 × 1.5 = 60 miles. Total: 150 + 60 = 210 miles.
Answer: 210 miles
</examples>
<question>
A bakery makes 120 cookies per batch. They need 500 cookies for an event.
They've already made 2 batches. How many more batches do they need?
</question>
The AI follows the reasoning pattern from the examples.
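Assembling a few-shot CoT prompt is mechanical enough to automate. A sketch of a builder that takes (question, reasoning, answer) triples and wraps them in the tags used above (the helper name is an assumption, not a standard API):

```python
def few_shot_cot_prompt(examples: list[tuple[str, str, str]], question: str) -> str:
    """Build a few-shot CoT prompt: worked examples (question,
    reasoning, answer) inside <examples>, then the new question
    inside <question>."""
    blocks = [
        f"Question: {q}\nReasoning: {r}\nAnswer: {a}"
        for q, r, a in examples
    ]
    joined = "\n\n".join(blocks)
    return (
        f"<examples>\n{joined}\n</examples>\n\n"
        f"<question>\n{question}\n</question>"
    )
```

Keeping examples as data (rather than hard-coded strings) makes it easy to swap in domain-specific reasoning chains per task.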
✅ Quick Check: You ask an AI to “Evaluate whether our Q3 marketing campaign was successful.” The AI immediately says “Yes, it was successful.” What’s missing? (Answer: Reasoning. The AI jumped to a conclusion without defining what “successful” means, examining the data, or weighing evidence. A CoT version: “Evaluate whether our Q3 marketing campaign was successful. Think step by step: First, define the success criteria we set before the campaign. Then, compare actual results against each criterion. Finally, weigh the overall evidence and make a judgment.” This forces a reasoned evaluation, not a gut reaction.)
When CoT Helps Most
| Task Type | Without CoT | With CoT | Improvement |
|---|---|---|---|
| Arithmetic | 60-70% | 85-95% | Large |
| Logic puzzles | 40-60% | 70-85% | Large |
| Reading comprehension | 80-85% | 85-90% | Moderate |
| Simple factual Q&A | 90%+ | 90%+ | Minimal |
| Creative writing | Varies | Varies | Minimal |
Rule of thumb: If the task requires more than one mental step, CoT helps.
Tree-of-Thought (ToT) Prompting
CoT follows a single reasoning path. What if the first path is wrong? Tree-of-Thought explores multiple paths simultaneously.
How ToT Works
Problem: Design a marketing strategy for a B2B SaaS launch.
Path A: Content marketing focus
→ Pro: Low cost, builds authority
→ Pro: Compounds over time
→ Con: Slow initial results (6-12 months)
→ Verdict: Good for long-term, poor for launch urgency
Path B: Paid advertising focus
→ Pro: Immediate results
→ Pro: Measurable ROI
→ Con: Expensive at scale, stops when budget stops
→ Verdict: Good for launch, unsustainable long-term
Path C: Partnership/channel focus
→ Pro: Leverages existing audiences
→ Pro: Higher trust (warm introductions)
→ Con: Harder to control timing and volume
→ Verdict: Unpredictable but high-value when it works
Best path: Combine B (launch) + A (long-term) + C (opportunistic)
ToT Prompt Pattern
<task>
Evaluate three different approaches to solving this problem.
For each approach:
1. Describe the approach
2. List 2-3 advantages
3. List 2-3 disadvantages
4. Rate feasibility (1-10)
After evaluating all three, recommend the best approach
or a combination. Explain why.
</task>
<problem>{{PROBLEM_DESCRIPTION}}</problem>
This forces the AI to explore before committing — catching dead ends that CoT would miss.
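The pattern above is a reusable template: only the `{{PROBLEM_DESCRIPTION}}` slot changes per run. A minimal sketch of filling it in Python (the constant and function names are illustrative):

```python
# The ToT prompt pattern from the lesson, with a single fill-in slot.
TOT_TEMPLATE = """<task>
Evaluate three different approaches to solving this problem.
For each approach:
1. Describe the approach
2. List 2-3 advantages
3. List 2-3 disadvantages
4. Rate feasibility (1-10)
After evaluating all three, recommend the best approach
or a combination. Explain why.
</task>

<problem>{problem}</problem>"""

def tot_prompt(problem: str) -> str:
    """Fill the ToT template with a concrete problem description."""
    return TOT_TEMPLATE.format(problem=problem)
```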
Self-Consistency
Self-consistency runs the same prompt multiple times and takes the majority answer. It’s like asking five experts the same question and going with the consensus.
When to Use It
- High-stakes decisions where you need confidence in the answer
- Math or logic problems where a single error is catastrophic
- Classification tasks where borderline cases need extra validation
Implementation
- Run the same prompt 3-5 times (with temperature > 0)
- Collect all answers
- The most frequent answer is the “self-consistent” answer
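Those three steps translate directly into a short voting loop. A sketch, where `ask` stands in for whatever model client you use (it should sample with temperature > 0 so runs can differ); the function name and return shape are assumptions:

```python
from collections import Counter
from typing import Callable

def self_consistency(ask: Callable[[str], str], prompt: str, n: int = 5):
    """Run the same prompt n times and majority-vote the answers.
    Returns (majority_answer, agreement_rate)."""
    answers = [ask(prompt) for _ in range(n)]
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / n
```

The agreement rate doubles as a rough confidence signal: 5/5 is strong, 3/5 means the case is borderline and may warrant human review.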
✅ Quick Check: You run a sentiment analysis prompt 5 times on the same customer review. Results: Positive (3), Neutral (1), Negative (1). What’s the self-consistent answer, and how confident should you be? (Answer: The self-consistent answer is “Positive” (3/5 = 60% agreement). With only 60% agreement, confidence is moderate — the review likely has mixed signals. If agreement were 5/5 (100%) or 4/5 (80%), you’d be more confident. For borderline cases like this, a human reviewer should make the final call.)
Combining Techniques
The techniques stack. Use them together for maximum reliability:
<role>You are a financial risk analyst.</role>
<task>
Analyze the following investment opportunity using tree-of-thought.
Consider three perspectives:
1. Bull case (optimistic)
2. Bear case (pessimistic)
3. Base case (most likely)
For each case, think step by step through:
- Revenue projections
- Risk factors
- Market conditions
- Competitive landscape
Then synthesize all three cases into a final recommendation.
</task>
<data>{{INVESTMENT_DATA}}</data>
<output_format>
## Bull Case
[Step-by-step reasoning]
## Bear Case
[Step-by-step reasoning]
## Base Case
[Step-by-step reasoning]
## Recommendation
[Synthesized judgment with confidence level]
</output_format>
This combines: XML structure + role definition + ToT (three perspectives) + CoT (step-by-step in each) + structured output.
Practice Exercise
- Take a math or logic problem and compare: direct answer vs. “Let’s think step by step”
- Use the ToT pattern to evaluate three approaches to a decision you’re facing
- Run a classification prompt 5 times and check for consistency
- Combine CoT + ToT: ask the AI to explore multiple paths, reasoning through each step by step
Key Takeaways
- Chain-of-thought: “Think step by step” can improve accuracy by 20-40 percentage points on multi-step reasoning tasks
- Few-shot CoT (with reasoning examples) outperforms zero-shot CoT
- Tree-of-thought explores multiple paths — catches errors that single-path CoT misses
- Self-consistency uses majority voting across multiple runs for higher confidence
- CoT helps most on multi-step tasks; minimal improvement on simple factual questions
- Combine techniques: XML structure + CoT reasoning + ToT exploration for maximum reliability
Up Next
In the next lesson, you’ll master few-shot prompting — the art of teaching AI through strategically chosen examples.