AI in Healthcare: Promise, Practice, and Caution
Understand what AI can and cannot do in healthcare settings. Build the ethical framework you need before using any AI tool with patients.
A Nurse’s Monday Morning
It’s 7:15 AM. Sarah, a medical-surgical nurse, has just received handoff for six patients. Over the next 12 hours, she’ll document assessments, create care plans, write discharge instructions, update families, coordinate with specialists, and – oh right – actually take care of her patients.
By noon, she’s spent more time typing than talking to patients. Sound familiar?
This is the reality for most healthcare workers. And it’s exactly where AI can help – not by replacing your clinical expertise, but by handling the writing, organizing, and formatting that consumes your day.
What to Expect
This course is broken into focused, practical lessons. Each one builds on the last, with hands-on exercises and quizzes to lock in what you learn. You can work through the whole course in one sitting or tackle a lesson a day.
What You’ll Learn in This Lesson
By the end of this lesson, you’ll be able to define what AI can and can’t do in healthcare, identify which of your daily tasks are good candidates for AI assistance, and apply an ethical framework for every AI interaction in a clinical context.
Building on What You Know
You already use technology in healthcare every day – EHRs, lab systems, imaging software, communication platforms. AI is another tool in that toolkit. The difference is that AI works with language, which means it can help with the enormous portion of your job that involves writing, reading, and communicating.
Think of it this way: your EHR stores information. AI helps you write, organize, and communicate that information faster.
What AI Actually Does (and Doesn’t Do)
Let’s be clear about AI’s capabilities in healthcare settings.
AI is excellent at:
- Drafting text (notes, letters, handouts, summaries)
- Simplifying complex language to lower reading levels (see the example prompt after this list)
- Organizing information into structured formats
- Summarizing long documents or research articles
- Generating multiple versions of the same content
- Translating clinical concepts into plain language
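For instance, the simplification item above is often a single prompt. Here is a sketch of what that prompt might look like (the topic and target reading level are illustrative, not a required format):
AI: "Rewrite the following explanation of post-surgical wound care
at about a 6th-grade reading level. Keep the meaning, avoid
medical jargon, and end with clear signs that mean the patient
should call the clinic:
[paste the de-identified text you want simplified]"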
AI cannot and should not:
- Make clinical diagnoses
- Recommend treatments or medications
- Replace clinical judgment in any scenario
- Access or interpret real-time patient data (unless in a purpose-built system)
- Guarantee factual accuracy about medical information
- Serve as a substitute for professional consultation
The critical distinction: AI is a writing assistant, not a clinical assistant. It helps you communicate faster, not decide better.
Quick Check
Before moving on, ask yourself: can you think of three tasks you did yesterday that involved writing or formatting rather than clinical decision-making? Those are your AI opportunities.
The Ethical Framework: REVIEW
Every time you use AI in a healthcare context, apply this framework:
R - Responsibility stays with you. You’re the licensed professional. AI output is a draft that you own the moment you review and use it.
E - Evidence check. Does the AI’s output align with current evidence-based practice? AI can generate plausible-sounding medical information that’s outdated or incorrect. Verify claims.
V - Vulnerability awareness. Your patients are vulnerable. Are you using AI in a way that respects their dignity, privacy, and autonomy?
I - Information security. Never put protected health information (PHI) into a general-purpose AI tool. Use de-identified examples or hypothetical scenarios instead.
E - Equity lens. Does the AI output work for diverse patient populations? Check for bias in language, assumptions, and cultural sensitivity.
W - Workflow integration. Does this AI use fit naturally into your workflow without creating new risks or workarounds?
Write “REVIEW” on a sticky note and put it near your workstation. Until this becomes second nature, check each letter before using any AI output.
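One way to make the Evidence and Equity checks concrete is to ask the AI itself to flag what needs verification before you do your own review. A sketch of such a follow-up prompt (the wording is illustrative, and it supplements your review rather than replacing it):
AI: "Review the draft you just wrote. List every factual or
clinical claim I should verify against current guidelines, and
point out any wording that assumes a particular reading level,
family structure, or cultural background."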
The PHI Red Line
This deserves its own section because it’s that important.
Never enter real patient data into a general-purpose AI tool like ChatGPT, Claude, or Gemini.
These tools may store, process, or train on the data you enter, and without a business associate agreement in place they are not HIPAA-compliant channels. Treat them as unsecured, no matter how reassuring a vendor’s privacy policy sounds.
What to do instead:
- Use hypothetical patient scenarios: “A 65-year-old patient with Type 2 diabetes and mild cognitive impairment needs discharge instructions for…”
- Use de-identified information: Remove names, dates, MRNs, and any combination of data that could identify a patient.
- Use your organization’s approved AI tools if available – these are designed with HIPAA compliance in mind.
- Create templates with placeholders: “Dear [Patient Name], your recent [test/procedure] results show…” (a template prompt sketch follows the examples below)
SAFE: "Write discharge instructions for a patient recovering
from a total knee replacement who lives alone and has
limited mobility support."
NOT SAFE: "Write discharge instructions for John Smith, MRN 4523891,
DOB 03/15/1958, who had a left TKR on 2/1/2026 at
Memorial Hospital."
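The template approach works the same way. A sketch of a reusable template prompt (the placeholder names are just examples):
AI: "Create a discharge instruction template for a total knee
replacement with placeholders for [Patient Name], [Follow-up Date],
and [Surgeon Phone Number], written at about a 6th-grade
reading level."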
Quick Check
If a colleague asked you to input a patient’s medication list into ChatGPT to generate a drug interaction summary, what would you say? You’d explain the PHI concern and suggest using a hypothetical scenario or an organization-approved tool instead.
Mapping Your AI Opportunities
Let’s identify where AI fits in your specific role. Here’s a framework for categorizing your daily tasks:
Category 1: Pure writing tasks (high AI value)
- Documentation and charting narrative sections
- Patient education materials
- Discharge instructions
- Referral letters
- Email communications
- Meeting minutes and summaries
Category 2: Writing + clinical knowledge (moderate AI value, needs careful review)
- Care plan documentation
- Clinical summaries
- Research literature reviews
- Protocol development
- Policy writing
Category 3: Clinical judgment tasks (AI should not assist)
- Diagnosis and assessment
- Treatment planning
- Medication decisions
- Triage decisions
- Any decision that directly affects patient safety
Try this prompt to start mapping your own tasks:
AI: "I'm a [your role] working in [your setting]. Help me
categorize my daily tasks into three groups:
1. Tasks where AI can draft content for me to review
2. Tasks where AI can assist but needs careful clinical review
3. Tasks where AI should not be involved
My typical day includes:
- [Task 1]
- [Task 2]
- [Task 3]
- [Task 4]
- [Task 5]
For each task, explain why it fits that category and
suggest how AI could help (or why it shouldn't)."
Real Expectations for Healthcare AI
Let’s set honest expectations about what you’ll experience:
Week 1-2: Learning curve
- You’ll spend time figuring out prompts
- Some outputs will miss the mark
- You might feel it’s slower than doing it yourself
- That’s normal – you’re building a new skill
Week 3-4: Finding your groove
- Prompts start to feel natural
- You develop templates that work for your role
- Time savings become noticeable
- You spot new opportunities
Month 2+: Real productivity gains
- Documentation time drops noticeably
- Patient education materials improve
- You have more bandwidth for patient care
- AI becomes a natural part of your workflow
The key is persistence through the learning curve. The healthcare workers who benefit most from AI are the ones who stick with it past the first awkward week.
What Your Colleagues Are Doing
AI adoption in healthcare is accelerating. Here’s where it’s making the biggest impact right now:
- Documentation: Clinicians report saving 30-60 minutes per shift on narrative documentation
- Patient education: Materials created at appropriate reading levels in minutes instead of hours
- Research: Literature reviews that took days now take hours
- Communication: Referral letters, handoff summaries, and team updates drafted in seconds
- Administrative: SOPs, compliance documentation, and training materials generated efficiently
You don’t need to use AI for everything. Start with the task that eats the most of your time for the least clinical value.
Exercise: Your AI Starting Point
Take five minutes right now to answer these questions:
1. What’s your biggest time drain that doesn’t require clinical judgment? (Example: writing discharge instructions, documenting patient education, drafting referral letters)
2. How much time does it take per occurrence? Per day? Per week?
3. What would you do with that reclaimed time? (More patient contact? Research? Education? Self-care?)
4. What’s your PHI safety plan? How will you ensure no protected information enters a general-purpose AI tool?
Write down your answers. Your answer to question 1 is where you’ll start applying what you learn in the next seven lessons.
Key Takeaways
- AI is a writing and productivity tool for healthcare, not a clinical decision tool
- Apply the REVIEW framework every time you use AI in a clinical context
- Never enter protected health information into general-purpose AI tools
- Focus AI on high-volume writing tasks that don’t require clinical judgment
- Expect a learning curve, then significant time savings
- Human oversight is non-negotiable – you’re the licensed professional
Next lesson: We’ll dive into patient communication and health literacy – learning to use AI to explain complex medical information in ways patients actually understand.