Assessing Evidence and Source Credibility
Evaluate source credibility, research methodology, and information quality to determine what evidence deserves your trust and action.
The Article That Fooled Everyone
A news article claimed a new study “proved” that a common food additive caused cancer. The article was shared millions of times. People changed their diets. Companies reformulated products. But the study was conducted on rats at doses 1,000 times what humans would consume, published in a low-impact journal, and funded by a competitor of the food additive manufacturer. The evidence didn’t support the panic—but nobody checked.
By the end of this lesson, you’ll evaluate any source of information systematically, determining whether it deserves your trust, skepticism, or outright dismissal.
🔄 Quick Recall: In the previous lesson, we identified ten common logical fallacies. Remember the appeal to authority fallacy? Today we go deeper: when is authority credible, and when is it misleading? Source evaluation answers that question systematically.
The CRAAP Test for Source Evaluation
Librarians at California State University, Chico developed the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) as a practical framework for evaluating any information source:
Currency: How Recent Is It?
Information ages at different rates depending on the field:
- Technology: Outdated within months
- Medical research: Major updates every few years
- Historical facts: Rarely change
- Statistics: Check the data year, not the publication year
Ask: “When was this published? Has newer evidence superseded it?”
Relevance: Does It Address Your Actual Question?
A high-quality source on the wrong topic is useless. A study about exercise benefits in elderly adults may not apply to young athletes.
Ask: “Does this source address my specific question, population, or context?”
Authority: Is the Source Qualified?
- Author credentials: Do they have relevant expertise?
- Publication: Is it a peer-reviewed journal, a reputable news outlet, or a random blog?
- Institutional affiliation: University researcher vs. industry spokesperson?
Ask: “Why should I trust this specific source on this specific topic?”
Accuracy: Is the Information Verifiable?
- Citations: Does the source cite its evidence?
- Methodology: Is the research design described?
- Consistency: Does it match other credible sources?
- Errors: Are there factual mistakes or typos that suggest carelessness?
Ask: “Can I verify these claims independently?”
Purpose: Why Was This Created?
- Inform: Objective reporting of facts
- Persuade: Advocating a position
- Sell: Marketing a product or service
- Entertain: Engagement over accuracy
- Deceive: Deliberate misinformation
Ask: “What is the creator’s motivation?”
✅ Quick Check: Apply the CRAAP test to this source: a blog post from 2019 by an anonymous author claiming a specific vitamin cures depression, with no citations. What scores would you give?
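If you want your CRAAP scores to be explicit rather than impressionistic, it can help to record a number for each criterion. Below is a minimal Python sketch of such a scorecard; the 1-5 scale, the equal weighting, and the example scores for the Quick Check blog post are illustrative assumptions, not part of the official CRAAP test.

```python
from dataclasses import dataclass, fields

@dataclass
class CraapScore:
    """One hypothetical 1-5 score per CRAAP criterion (5 = strongest)."""
    currency: int
    relevance: int
    authority: int
    accuracy: int
    purpose: int

    def overall(self) -> float:
        # Unweighted average; you might weight criteria differently
        # depending on your question (that choice is an assumption here).
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# One possible scoring of the Quick Check source: a 2019 blog post,
# anonymous author, medical cure claim, no citations.
blog_post = CraapScore(currency=2, relevance=3, authority=1, accuracy=1, purpose=1)
print(f"Overall credibility: {blog_post.overall():.1f} / 5")  # 1.6 / 5
```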
Evaluating Research Quality
When a claim references a “study,” ask these questions:
Sample Size and Selection
- How many participants? Larger samples produce more precise, reliable estimates (see the sketch after this list).
- How were they selected? Random selection is stronger than convenience sampling.
- Who was studied? College students may not represent the general population.
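A quick way to see why sample size matters: the margin of error of a survey estimate shrinks with the square root of the sample size, so each tenfold increase in participants cuts the error by only about a factor of three. Here is a minimal Python sketch using the standard 95% margin-of-error formula for a proportion (the sample sizes are arbitrary):

```python
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n (worst case at p = 0.5)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (50, 500, 5000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =   50: +/- 13.9%
# n =  500: +/- 4.4%
# n = 5000: +/- 1.4%
```

Note that this formula assumes random selection; no amount of extra data rescues a biased convenience sample.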
Study Design
| Design | Strength | Example |
|---|---|---|
| Meta-analysis | Very strong | Analysis of 50 previous studies |
| Randomized controlled trial | Strong | Randomly assigned drug vs. placebo |
| Cohort study | Moderate | Following a group over 10 years |
| Cross-sectional | Weak-moderate | Survey at one point in time |
| Case report | Weak | Single patient observation |
Statistical Significance vs. Practical Significance
A result can be statistically significant (unlikely to have arisen by chance alone) but practically insignificant (too small to matter).
Example: A study finds that a new teaching method improves test scores by 0.3% with p < 0.05. Statistically significant? Yes. Practically meaningful? Probably not.
Ask: “Even if this result is real, is the effect size large enough to matter?”
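A short simulation makes the distinction concrete: with a large enough sample, even a trivially small difference clears p < 0.05. The sketch below uses NumPy and SciPy; the 0.5-point effect, the 10-point standard deviation, and the sample size are made-up numbers for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20_000  # per group; huge samples make tiny effects "significant"

# Control scores ~ Normal(70, 10); treatment adds a trivial 0.5 points
control = rng.normal(70, 10, n)
treatment = rng.normal(70.5, 10, n)

t_stat, p_value = stats.ttest_ind(treatment, control)
effect_size = (treatment.mean() - control.mean()) / 10  # in SD units (Cohen's d)

print(f"p-value:   {p_value:.2e}")      # far below 0.05 -> "significant"
print(f"Cohen's d: {effect_size:.2f}")  # ~0.05 -> negligible in practice
```

By common benchmarks, a Cohen's d of 0.05 is far below even a "small" effect (0.2), which is exactly the gap between statistical and practical significance.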
Replication
Has the finding been replicated by independent researchers? A single study, no matter how well-designed, could be a fluke. Replicated findings are far more trustworthy.
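The arithmetic behind "a single study could be a fluke" is worth spelling out. At the conventional significance threshold of α = 0.05, roughly 1 in 20 studies of a nonexistent effect will come back "significant" by chance, so a field that runs many such studies will almost certainly generate false positives; requiring independent replication multiplies those small probabilities together. A back-of-envelope calculation in Python (the study counts are illustrative):

```python
alpha = 0.05  # conventional false-positive rate per study

# Chance that at least one of 20 studies of a nonexistent effect
# comes back "significant" purely by chance:
print(f"{1 - (1 - alpha) ** 20:.0%}")  # ~64%

# Chance the same nonexistent effect passes two independent studies:
print(f"{alpha ** 2:.2%}")  # 0.25%
```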
✅ Quick Check: A headline says “Scientists discover that chocolate improves memory.” The study had 12 participants, no control group, and hasn’t been replicated. How confident should you be?
The Source Evaluation Prompt
```
Evaluate the credibility of this source:

[paste the source information]

Run a CRAAP analysis:
1. CURRENCY: How recent? Is the information still valid?
2. RELEVANCE: Does it address the specific question?
3. AUTHORITY: Who created it? What are their credentials and potential conflicts?
4. ACCURACY: Is it cited, verifiable, consistent with other sources?
5. PURPOSE: Why was it created? Inform, persuade, sell, or deceive?

If it references research, also evaluate:
6. Sample size and selection method
7. Study design strength
8. Whether the effect size is practically meaningful
9. Whether the finding has been replicated

Overall credibility rating: 1-10 with explanation.
```
Red Flags That Signal Low Credibility
Watch for these warning signs:
- No author identified. Credible sources put names on their work.
- No citations or references. Claims without sources are just opinions.
- Extreme language. “BREAKTHROUGH,” “MIRACLE,” “THEY don’t want you to know.”
- Undisclosed conflicts of interest. Study funded by the company selling the product.
- Single source. Only one study, one expert, one data point.
- Emotional manipulation. Fear, outrage, or urgency used to bypass rational evaluation.
- Inconsistency with established knowledge. Extraordinary claims need extraordinary evidence.
Try It Yourself
Find a news article or research claim that interests you. Run a full credibility evaluation:
- Apply the CRAAP test to the source
- If research is cited, evaluate the study design
- Check for red flags
- Search for independent verification
Then compare your evaluation with an AI’s assessment (use the Source Evaluation Prompt above). Where did you agree? Where did you miss something?
Key Takeaways
- The CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) provides a systematic source evaluation framework
- Research quality depends on sample size, study design, effect size, and replication
- Statistical significance doesn’t equal practical significance—always ask about effect size
- Red flags include anonymous authors, no citations, extreme language, and undisclosed conflicts
- A single study is never proof; look for replication by independent researchers
- AI can rapidly evaluate sources, but you should develop the habit of independent verification
Up Next
In Lesson 6: Decision Frameworks for Complex Problems, we’ll apply our critical thinking tools to making better decisions when multiple factors compete for your attention.