AI for Nurses: 5 ANA Guardrails, 5 Bedside Prompts (May 2026)

The ANA released its first AI consensus report May 5. Here's the 5-minute-per-shift routine for floor RNs — and the 3 prompts that violate it.

If you’re reading this in scrubs after a long shift, here’s the short version: on May 5 the American Nurses Association released its first-ever consensus report on AI in nursing practice. The Think Tank that wrote it met April 22. The release dropped one day before National Nurses Week — the ANA’s 130th — and inside the same news cycle as the Black Book Nursing AI Readiness Gap report (May 7), the Massachusetts Nursing Survey in the Boston Globe (May 7), and Abridge’s expansion to 250+ health systems.

The ANA report has a fundamental principle and five actions. The principle is the part you should tape inside your locker: AI must support, not replace, professional nursing judgment. Nurses remain the final accountable decision-makers. That’s the line that decides which prompts honor the report and which ones break it.

This post translates each of the five actions into a single concrete prompt you can paste into ChatGPT or Claude before your next shift. Then it shows you the three routines that violate the ANA’s principle — and why several of them are routines a lot of nurses are quietly already running.

The fundamental principle, in one paragraph

You’re the licensed clinician. The chatbot isn’t. If something goes wrong, the board of nursing investigates you, not OpenAI. So every prompt you run on a shift has to keep you as the decider — the AI helps you draft, summarize, simplify, brainstorm, or rephrase, but the clinical call is yours.

That’s the line. Keep it in mind for everything below.

The 4 risks the ANA named (in plain English)

  1. Judgment erosion from overreliance — when you start trusting the AI’s output more than your own assessment of the patient in front of you.
  2. Unclear accountability — when an AI tool influenced a care decision and nobody’s sure whether the chart shows it, the vendor owns it, or you do.
  3. Algorithmic bias — when the AI’s training data underrepresents your patient population (race, language, age, payer mix) and the output is subtly or sharply wrong as a result.
  4. Cognitive burden from bad implementation — when the EHR-bolted AI tool actually slows you down, fires false alerts, or buries the alert that mattered.

If you read those four and went “yep, I’ve seen all of them,” you’re not alone — the National Nurses United survey released this week found a majority of bedside nurses saying they don’t trust their employer to implement AI with patient safety as the first priority. That’s part of why the ANA’s report frames the work as nurse-led guardrails, not vendor-led innovation.

The 5-prompt routine — one prompt per ANA action

Each prompt is in plain English. You can copy them straight into your phone notes app tonight. Every single one starts with the same two lines, which we’ll call the no-PHI prompt prefix:

I’m a [your role]. I will not paste any patient identifiers.
I’d like you to help me [task].

That two-line opener is the single most important habit in the routine. It does three things at once: it reminds you not to paste PHI, it tells the chatbot what you’re trying to do, and it leaves a literal trail in your phone history that you can show your unit manager if anyone ever asks how you used AI on shift.

Prompt 1 — Honors action #1 (clear, nurse-led guardrails): the safe-use prefix

This is the prefix above. Save it as a phone shortcut. Use it as the first two lines of every single prompt you run for clinical work. If you’re not sure whether what you’re about to paste contains a patient identifier — name, MRN, room number, age plus diagnosis combo, anything — assume it does and rewrite. The ANA’s guardrail is “nurse-led” precisely because no vendor is going to do this work for you.

Prompt 2 — Honors action #2 (curate a nursing AI playbook): shift-handoff polish

I’m a med-surg RN. I will not paste any patient identifiers. Polish this 4-line SBAR handoff into clearer plain English for my oncoming nurse, keep it under 100 words, and do not add any clinical detail I didn’t write:

Situation: [short non-PHI line — “post-op day 1, pain controlled, ambulating with one assist”]
Background: [short non-PHI line]
Assessment: [short non-PHI line]
Recommendation: [short non-PHI line]

The “do not add any clinical detail I didn’t write” line is the part that protects you. Models hallucinate. If your handoff goes from “ambulating with one assist” to “ambulating with one assist, occasional dizziness,” that addition is the AI inventing data that ends up in your colleague’s brain. The instruction tells it to stay inside your bullets.

Prompt 3 — Honors action #3 (advance AI literacy and competence): the verify-before-it-leaves check

This isn’t a prompt to send. It’s a four-question check you run on every AI output before it touches a patient or a chart:

  1. Did the AI add anything I didn’t write? If yes, delete it.
  2. Is anything in here a clinical decision the AI made? If yes, that’s mine to make, not the chatbot’s. Rewrite.
  3. Does any number, dose, time, or rate need a primary-source check? If yes, check it. Don’t trust the model on numbers.
  4. Would I be comfortable if this output were screenshotted and sent to my unit manager? If no, fix whatever makes you uncomfortable before it goes anywhere.

Tape this list to the back of your phone case if you have to. The ANA’s literacy action is exactly this — the discipline of treating AI output as a draft, not an answer.

Prompt 4 — Honors action #4 (policy and regulatory advocacy): the patient-education-letter rewrite

I’m an outpatient family-practice NP. I will not paste any patient identifiers. Rewrite this discharge instruction into plain language at a 6th-grade reading level, keep all the clinical specifics, do not change any medication name or dose, and add a one-sentence reminder to call the office if [symptom] gets worse:

[paste the chart-language version]

This is the workflow that saves real time. A 12-minute discharge teaching session can compress to a 3-minute review of an AI-rewritten letter you confirm and hand to the patient. The ANA wants nurses advocating for AI tools that demonstrably help patients — this is one of the most defensible examples, because it improves health literacy at the bedside.

Prompt 5 — Honors action #5 (cross-sector collaboration): the EHR-vendor-question template

When you spot an AI feature in your EHR (Epic, Oracle Health, Meditech, Athena, etc.) that’s making your shift harder, you can use this:

I’m a charge RN at a 200-bed community hospital. I will not paste any patient identifiers. I want to write a clear 1-paragraph note to our EHR vendor explaining a problem with the AI [feature name] tool. The problem is [describe in plain English — e.g., “it’s flagging routine post-op patients as high fall-risk every shift, and we’re getting alert fatigue”]. Help me write the note in language a vendor product manager will take seriously.

This is the quietest action in the ANA report, and arguably the most consequential. The fourth risk — cognitive burden from bad implementation — only gets fixed when nurses report it back upstream in a format vendors can act on. The chatbot is a perfectly reasonable assistant for that job.

The 3 routines that violate the ANA report

These are the prompts a lot of nurses are running. They each break the fundamental principle in a different way.

Violation 1 — Pasting any patient identifier into a chatbot

Names, MRNs, dates of birth, room numbers, age + specific-diagnosis combos, photos of wristbands, screenshots of the EHR. All of them. This breaks HIPAA and breaks the ANA’s first action (clear nurse-led guardrails on safe use). The “no-PHI prompt prefix” exists precisely so this never happens by accident. If you can’t strip the identifiers, you can’t run the prompt.

There’s a Reddit thread on r/nursing called “Using ChatGPT to chart” that has been quietly racking up posts for over a year. A meaningful fraction of the comments are nurses describing PHI workflows that violate this guardrail. If you’re reading this and recognize yourself: stop. The risk to your license is much higher than the time savings.

Violation 2 — Asking the chatbot for the final clinical decision

“Should I send this kid home from school?” “Is this dose right for a 70-year-old with stage 3 CKD?” “Is this rash worth a same-day callback?”

These prompts are the textbook violation of the ANA’s fundamental principle. The clinical decision is yours. The chatbot can help you brainstorm differential considerations, can help you draft a question for the physician, can help you rephrase a parent-callback message — but the actual “what do I do for this patient” call sits with you, your license, and whatever evidence you trust.

When the chatbot speaks confidently in clinical mode, that confidence is a property of the model, not evidence that it’s right. It will sound certain when it’s wrong. Treat every confident-sounding clinical recommendation as a draft to verify, never as an answer.

Violation 3 — Treating the chatbot as an evidence source

“What’s the latest evidence on [drug X] in [population Y]?”

Chatbots can summarize evidence well when they have access to retrieval. They cannot reliably know what the latest evidence is — they hallucinate citations, they confidently quote retracted papers, and they don’t know what your hospital’s formulary actually allows. The ANA’s literacy action is partly about keeping nurses fluent in the difference between a chatbot summary and a primary source. UpToDate, Lexicomp, your hospital’s clinical informaticist, OpenEvidence, the actual journal article — those are evidence. The chatbot summary is a starting point.

What this means for you

If you’re a brand-new floor RN

The single most valuable habit from this whole post is the “no-PHI prompt prefix.” Save it in your phone notes today. Run it as the first two lines of every clinical prompt. If you only adopt one thing from the ANA report, this is it.

If you’ve been on the unit for years and just started using ChatGPT

The verify-before-it-leaves four-question check is the one that matters most for you. You already have the clinical judgment. The AI’s job is to draft for you, not decide for you. The check formalizes the line between the two.

If you’re a school nurse, hospice nurse, or travel nurse

Each of your settings has a sub-version of these five prompts. School nurses: parent-notification email drafting after a playground injury, with no name. Hospice: family-update plain-language translation, keeping clinical specifics for the chart. Travel: 30-minute new-unit-orientation summary for your phone. The principle is the same. The tasks adjust to your workflow.

If you’re an NP or charge nurse

You have an additional responsibility under the ANA’s cross-sector action. When a unit-wide AI tool is creating cognitive burden, the structured EHR-vendor-question prompt is yours to use. The report is asking the profession to push upstream, not just to manage downstream.

If you’re a nurse manager or administrator

The five actions are operational, not aspirational. Issue local guardrails this quarter. Curate a unit playbook with the prompts your team is actually using. Make AI literacy part of your annual competencies. Bring the EHR vendor in for a Q&A on cognitive burden. Talk to your state nurses association about advocacy. Cross-sector collaboration starts at your charge desk.

What this can’t fix

It doesn’t fix bad AI in your EHR. If your unit got Epic’s AI features turned on without your input, the ANA report doesn’t change what’s already there. It changes how you push for what comes next. The cross-sector-collaboration action is the lever.

It doesn’t make ChatGPT or Claude HIPAA-compliant. No general consumer chatbot has a BAA with you. The “no-PHI prompt prefix” exists because the chatbot is, by default, on the wrong side of HIPAA the moment you paste identifiers. Hospital-deployed AI tools (Epic’s, Abridge’s, Suki’s, Nabla’s) operate under separate compliance regimes — those rules don’t transfer to your phone.

It doesn’t replace your clinical judgment, your hospital policy, or your state’s board of nursing. If your facility has banned AI on the floor, this post doesn’t change that. If your state board has issued specific guidance, that overrides anything here.

It doesn’t catch every hallucination. The verify-before-it-leaves check is a backstop, not a guarantee. The single best protection against AI hallucination is your years of clinical experience. The check is there to keep that experience engaged.

It doesn’t tell you whether your hospital’s vendor-supplied AI tool is good. The Black Book report this week found a wide gap between what AI vendors promise and what nursing leadership actually sees in production. That’s a separate evaluation. The ANA report’s framework helps you ask the right questions; it doesn’t grade your specific vendor.

What the data actually says about nurses and AI right now

The ANA report didn’t drop into a vacuum. The same week it landed, three other surveys quantified what nurses on the floor are seeing.

The National Nurses United survey of 2,300+ RNs found that 60% disagreed with the statement “I trust my employer will implement AI with patient safety as the first priority.” Among nurses whose facilities use algorithmic acuity scoring, 69% said the AI-generated acuity score doesn’t match their own clinical assessment — typically because the algorithm misses psychosocial, educational, or immunocompromise factors. In facilities using AI-linked imaging or sound capture for pain or wound assessments, 29% of nurses said they couldn’t change the software-generated categorization. In facilities using predictive scores for outcomes, complications, or discharge readiness, 40% said they couldn’t modify scores to reflect their clinical judgment. NNU’s framing was direct: management is “pushing for speed and efficiency” while “tech is used to justify understaffing” and essential safeguards are removed.

The Black Book “Nursing AI Readiness Gap” report (May 7, 2026) surveyed 118 nurse managers and put numbers behind the rest of the picture. 68% worry AI-generated or prefilled documentation could shift legal, licensure, audit, or patient-safety risk to nurses without meaningfully reducing workload. 74% say physician-style ambient documentation tools won’t solve nursing documentation burden unless redesigned for nursing workflows specifically. 77% prefer AI tools start with low-risk, high-volume tasks before moving into autonomous assessment or clinical-judgment support. The Black Book takeaway: “the winning tools will not be those that generate the most documentation, but those that remove the most redundant documentation while preserving RN control, transparency, source visibility, editability, and auditability.”

On vendor deployment scope: Abridge confirmed general availability of “Abridge for Nurses” across more than 250 health systems including Mayo Clinic, Corewell Health, Johns Hopkins Medicine, Emory Healthcare, Bon Secours Mercy Health, Reid Health, Duke Health, UPMC, and a 40+ hospital, 24,600-physician rollout at Kaiser Permanente. Nabla is deployed system-wide at M Health Fairview (running inside Epic) and at UToledo Health for hundreds of physicians and APPs. Suki and Ambience are also commonly deployed via health-system workflows, where EHR integration and governance matter more than self-serve setup. If your facility is on Epic, you’re likely to encounter at least one of these ambient documentation tools in the next 12 months.

The community sentiment on r/nursing tracks the surveys: cautious-but-pragmatic. Top threads on “Using ChatGPT to chart” describe both legitimate workflows (template polish, patient-education rewrites, study aids) and the warnings that make the ANA framework worth paying attention to — hallucinations, deskilling concerns, “shame on you” replies to nurses who use AI without review. The published academic literature on ChatGPT in nursing remains thin: a recent rapid review found 88.2% of included studies focused on nursing education, and only one each focused on clinical practice and research. The gap between informal bedside experimentation and formal evaluation is exactly the gap the ANA’s “AI literacy and competence” action is asking the profession to close.

The bottom line

You don’t have to read the full ANA consensus report tonight. You don’t have to memorize the five actions. You do have to walk onto your next shift with the “no-PHI prompt prefix” in your phone, the four-question verify check in your head, and the fundamental principle on your locker: the clinical decision is yours.

The ANA gave you a framework that locks the bedside RN at the center of nursing AI through the next 18 months. The 5 prompts above are the single thinnest practical layer on top of that framework — small enough to actually use on a shift, sharp enough to honor what the report asks for.

Want a deeper, structured walkthrough of these prompts plus 30 more bedside-RN scenarios? Our AI for Bedside Nurses: Charting course covers the full routine — handoff polish, patient education, care plan rewording, and the verify-before-it-leaves discipline — in under an hour, ANA-aligned. If you’re in nursing clinical leadership and looking at AI literacy across your unit, Nurses and Clinical AI has the team-level playbook. And if you want the evidence-search-done-right alternative to chatbot-as-evidence, OpenEvidence for Bedside Nurses is the workflow.

Stay safe out there. National Nurses Week — happy 130th to the ANA, and thank you for the work.
