Privacy and Data
What happens to your data and how to protect what matters.
What Happens to Your Data?
In the previous lesson, we explored understanding AI bias. Now let’s build on that foundation. When you type something into an AI tool, where does it go?
The answer varies by tool, but here’s what often happens:
- Your input is sent to servers (often in the cloud)
- It may be logged and stored
- It may be reviewed by humans for quality or safety
- It might be used to train or improve the AI
- It could potentially be exposed in security breaches
“But my conversation is private, right?”
Not necessarily. Not automatically.
The Default Assumptions
Assume your data might be:
- Stored: For some period, possibly indefinitely
- Reviewed: By humans for quality control or safety
- Used: To improve AI systems (training data)
- Vulnerable: No system is perfectly secure
These vary by provider. Some offer stronger privacy guarantees. But verify—don’t assume.
What You’re Sharing
Think about what you input into AI tools:
Direct content:
- Documents you upload
- Questions you ask
- Code you share
- Data you analyze
Indirect content:
- Information about other people
- Client confidential data
- Proprietary business information
- Context that reveals sensitive things
Metadata:
- When you use the service
- Patterns of use
- Topics you’re interested in
Data Categories to Protect
Personally Identifiable Information (PII):
- Names, addresses, phone numbers
- Email addresses
- Social Security numbers
- Medical information
- Financial details
Confidential Business Data:
- Trade secrets
- Client information
- Unreleased product plans
- Financial projections
- Internal communications
Others’ Data Without Consent:
- Information about colleagues
- Customer details
- Anyone who hasn’t agreed to AI processing
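Before pasting a prompt, a quick mechanical scan can flag the most obvious of these categories. Here’s a minimal Python sketch; the regex patterns and the `flag_pii` helper are illustrative assumptions, and pattern matching alone will miss names, addresses, medical details, and anything context-dependent, so treat it as a first pass, not a guarantee.

```python
import re

# Illustrative patterns for a few easily-matched PII types.
# Regexes alone cannot catch names, addresses, or contextual PII.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone (US-style)": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN (US)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return the PII categories this regex pass detects in `text`."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

prompt = "Contact John at john.smith@abccorp.com or 555-867-5309 about his claim."
found = flag_pii(prompt)
if found:
    print(f"Review before sending; possible PII: {', '.join(found)}")
```

A scan like this is best used as a prompt-before-send reminder, not a filter you rely on.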
Practical Privacy Habits
Before sharing data with AI:
- Ask: Does this need to include identifying information?
- Anonymize: Can I remove names, dates, specific details?
- Check: What’s this provider’s data policy?
- Consider: Who else might this affect?
Example transformations:
Instead of: “My client John Smith at ABC Corp is having trouble with…”
Use: “A client at a financial services company is having trouble with…”

Instead of: “Here are our Q3 revenue numbers: $2.3M with…”
Use: “Here’s sample revenue data for analysis: [modified figures]”
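A few lines of code can make these transformations repeatable. The sketch below assumes a hypothetical redaction list mirroring the examples above; real prompts will need rules tuned to your own clients, names, and figures.

```python
import re

# Assumed redaction rules for illustration only; adapt the patterns
# and placeholders to the names and figures in your own text.
REDACTIONS = [
    (re.compile(r"\bJohn Smith\b"), "[client name]"),
    (re.compile(r"\bABC Corp\b"), "[company]"),
    (re.compile(r"\$[\d,.]+[MKB]?\b"), "[figure]"),
]

def anonymize(text: str) -> str:
    """Apply each redaction rule in order and return the scrubbed text."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymize("My client John Smith at ABC Corp missed the Q3 target of $2.3M."))
# -> "My client [client name] at [company] missed the Q3 target of [figure]."
```

Keeping a reusable redaction list for your recurring clients and projects makes the habit cheap enough to apply every time.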
Understanding Privacy Policies
What to look for:
| Question | Find in Policy |
|---|---|
| Is my data used for training? | “Data use” or “training” sections |
| Can humans review my inputs? | “Human review” or “quality assurance” |
| How long is data stored? | “Data retention” |
| Can I delete my data? | “Your rights” or “data deletion” |
| Where is data stored? | “Data location” or “jurisdiction” |
Reality check: Most people don’t read these policies. But for sensitive use cases, you should.
Enterprise vs. Consumer Tiers
Many AI providers offer different privacy levels:
Consumer/Free tiers:
- Fewer privacy guarantees
- Data may be used for training
- Less control over retention
Enterprise/Business tiers:
- Often exclude data from training
- Better security commitments
- More control over your data
If privacy matters for your use case, consider whether the free tier is appropriate.
When Others’ Privacy Is at Stake
You don’t just protect your own data. You protect others':
Clients and customers: Don’t share client details without consent or appropriate anonymization.
Colleagues: Don’t share personal information about coworkers.
Third parties: Anyone mentioned in your prompts is having their data processed without their knowledge.
The ethical question: Would this person be comfortable with their information being processed by AI?
Legal and Regulatory Considerations
Some regulations restrict AI data use:
- GDPR (EU): Strict rules on personal data processing
- HIPAA (US healthcare): Protected health information rules
- Industry regulations: Finance, law, and other sectors have specific requirements
If you work in regulated industries: Check your compliance obligations before using AI with sensitive data.
This isn’t legal advice. Consult appropriate experts for your situation.
Breaches Happen
Past incidents:
- Conversation histories accidentally exposed to other users
- Prompts appearing in training data, then in outputs
- Security vulnerabilities exposing stored data
Protect accordingly:
- Don’t share anything you couldn’t tolerate being exposed
- Consider the “newspaper test”—would this be a problem if it became public?
- Have a plan if something goes wrong
Exercise: Data Audit
Review your recent AI use:
- What kinds of data have you shared with AI tools?
- Did any of it include PII (your own or others’)?
- Did any of it include confidential business information?
- Could you have anonymized some of it?
- Do you know the privacy policies of the tools you use?
Identify one change you could make to better protect data.
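If your tools let you export conversation history, you can partially automate this audit. The sketch below assumes a hypothetical exports/ folder of plain-text transcripts and reuses the kind of patterns shown earlier; the folder name and patterns are assumptions for illustration.

```python
import pathlib
import re

# Illustrative patterns; extend with terms specific to your work
# (client names, project code names, internal product names).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN (US)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dollar amount": re.compile(r"\$[\d,.]+"),
}

# Scan each exported transcript and report how often each category appears.
for path in pathlib.Path("exports").glob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    hits = {label: len(p.findall(text)) for label, p in PATTERNS.items()}
    flagged = {label: count for label, count in hits.items() if count}
    if flagged:
        print(f"{path.name}: {flagged}")
```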
Key Takeaways
- Don’t assume AI conversations are private by default
- Data may be stored, reviewed by humans, and used for training
- Protect: PII, confidential business data, information about others
- Anonymize where possible; remove identifying details
- Check privacy policies for sensitive use cases
- Enterprise tiers often offer stronger privacy guarantees
- Consider others’ privacy, not just your own
- Breaches happen; don’t share anything you couldn’t tolerate being exposed
Up next: Transparency and disclosure—when and how to be upfront about AI use.