Here’s a quiet shift worth paying attention to. The “ChatGPT for clinicians” that you’ve been hearing about — the one your hospital’s residents have been using since late 2024 — just crossed 15 million consultations a month, got embedded in Mount Sinai’s Epic system on March 31, and raised $200 million at a $6 billion valuation in October. It’s called OpenEvidence. It’s free if you have an NPI. And unlike ChatGPT, it’s actually HIPAA-compliant.
If you’re a registered nurse who’s been quietly typing clinical questions into ChatGPT during breaks — and let’s be honest, most of us are — this is the conversation nobody’s had with you yet.
What OpenEvidence Actually Is
Think of OpenEvidence as a search engine that’s been trained only on peer-reviewed medical literature — not the whole internet. You type a clinical question in plain English, and it gives you a cited answer pulled from NEJM, JAMA, and the guidelines you’d actually be asked to justify in a root-cause analysis.
A few things that make it different from the generic chatbots:
- It cites every claim. Every sentence has a source you can click and verify. You’re not guessing whether it made something up.
- It refuses to answer when evidence is thin. This is the big one. Where ChatGPT confidently hallucinates, OpenEvidence says “the evidence here is limited to case reports” and stops. Fewer dangerous confabulations.
- It’s trained on clinical sources only. No Reddit, no blogs, no AI-generated SEO sludge. NEJM is actually their official partner — you get real NEJM content inside answers.
- It’s HIPAA-compliant. Since April 2025. SOC 2 Type II certified. Covered entities can input PHI under their Business Associate Agreement. This is the single biggest reason clinicians are migrating from ChatGPT for any question that touches a real patient.
It was founded in 2022 by Daniel Nadler — the same person who built Kensho, a financial-AI firm S&P Global bought for $550 million in 2018. So this isn’t a weekend hackathon project. Boston-based. Backed by Google Ventures, Sequoia, and Blackstone.
Why Nurses Should Care Right Now
Three things landed in the last six weeks that changed the calculus:
1. Mount Sinai embedded it inside Epic on March 31, 2026. This is the first time OpenEvidence got deployed enterprise-wide across the whole care team — not just the MDs. RNs and pharmacists now have licenses. Nicholas Gavin, Mount Sinai’s Chief Clinical Innovation Officer, put it directly: “Implementing OpenEvidence provides our pharmacists, nurses, and physicians with a unified, trusted platform for evidence-based decision-making.” When a system the size of Mount Sinai puts it in Epic, other systems follow. Fast.
2. A federal judge just made ChatGPT legally risky for clinical questions. In February, the U.S. v. Heppner ruling held that consumer AI chats are not confidential and can be used in litigation. That ruling has been rippling through legal, HR, and now healthcare. If you’ve been typing patient-identifying details into ChatGPT — even stripped of names — you’re in the waiver zone. OpenEvidence is the HIPAA-safe alternative: queries stay inside a covered entity’s Business Associate Agreement instead of sitting in a discoverable consumer chat log.
3. The tool is free for you. That part deserves its own headline. If you have an NPI — and most US RNs do — you can sign in, get unlimited searches, and start asking questions in about five minutes. The company makes money from pharma and medical-device ads shown to clinicians, not from your subscription.

The Five-Minute NPI Sign-In
Here’s the actual flow, walked through:
- Go to openevidence.com
- Click “Sign up” — select “Registered Nurse” as your role
- Enter your NPI number (10 digits; most licensed US nurses already have one)
- Add your full name and license state — it cross-checks against the NPPES registry
- Confirm your email
- You’re in. Unlimited searches. No credit card.
If you don’t know your NPI, look it up free at npiregistry.cms.hhs.gov. It takes 10 seconds — type your name, pick your state, and it appears. Most US nurses already have one; if yours doesn’t turn up, you can apply for one free through NPPES.
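For the curious (or for anyone building a verification step into an internal tool): the NPPES registry exposes a public JSON API, and the NPI number itself carries a built-in checksum — the Luhn algorithm computed over the number with CMS’s 80840 card-issuer prefix. A minimal Python sketch; the function names are mine, not anything official, and the API parameters are from the public NPPES API:

```python
from urllib.parse import urlencode

def is_valid_npi(npi: str) -> bool:
    """Checksum-validate a 10-digit NPI: Luhn over '80840' + NPI."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    total = 0
    # Walk digits right to left, doubling every second one (standard Luhn)
    for i, ch in enumerate(reversed("80840" + npi)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def nppes_lookup_url(first_name: str, last_name: str, state: str) -> str:
    """Build a name-search query URL against the public NPPES registry API (v2.1)."""
    params = {"version": "2.1", "first_name": first_name,
              "last_name": last_name, "state": state}
    return "https://npiregistry.cms.hhs.gov/api/?" + urlencode(params)

# CMS's sample NPI 1234567893 passes the checksum; a typo in the last digit fails.
print(is_valid_npi("1234567893"))  # True
print(is_valid_npi("1234567892"))  # False
```

A failed checksum means the number was mistyped; the registry lookup is still the source of truth for whose NPI it actually is.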
There is one friction point I should flag: LPNs and LVNs frequently don’t have NPIs. NPIs are linked to providers who bill for services, and LPN/LVN roles historically haven’t required billing identifiers. If that’s you, OpenEvidence’s student/trainee path doesn’t apply either. You can use the 2-search-per-day public tier, or you can lean on UpToDate if your hospital subscribes. We’ll watch whether OpenEvidence expands access this year.
Nursing students: you can verify with your .edu email and your program enrollment documentation. Most BSN/ADN programs have official letterhead that works — your nursing school’s registrar can supply it in a day.
Five Questions to Ask on Your Next Shift
The fastest way to understand what this tool is like is to see what it returns. Here are five prompts taken from actual bedside scenarios. Type them into OpenEvidence as-is:
1. Sepsis interpretation
“What is the clinical significance of a serum lactate above 4.0 mmol/L in a patient with suspected sepsis, and what are the current bundle recommendations?”
You’ll get a cited answer drawing from the Surviving Sepsis Campaign guidelines, typically referencing Evans 2021 CCM or the most recent NEJM review. Takes about 8 seconds.
2. Wound care protocol
“For a Stage 3 pressure injury with tunneling, what dressing is recommended per current evidence, and what’s the comparative evidence for calcium alginate vs. hydrofiber?”
Expect a Braden-aware answer with specific dressing comparisons and a citation trail back to Cochrane or JWOCN.
3. Postpartum medication safety
“Which antihypertensive agents are preferred for severe postpartum preeclampsia in a breastfeeding patient, and what’s the evidence for labetalol vs. nifedipine?”
This is the kind of question that used to require paging a pharmacist or pulling Hale’s. OpenEvidence pulls from ACOG committee opinions and recent JAMA/NEJM literature with specifics on timing and monitoring.
4. Pediatric dosing check
“What’s the evidence-based protocol for IV fluid resuscitation in a pediatric DKA patient, including rate limits and rebound hypoglycemia risk?”
You’re going to get a careful answer. It will reference the ISPAD 2022 guidelines and likely caution you on the cerebral edema question — because the literature does.
5. A question you’d be embarrassed to ask a preceptor
“In a patient with end-stage renal disease on hemodialysis, how do I adjust the timing of a dose of vancomycin if the scheduled dose falls on a dialysis day?”
This is the classic “I should know this but I’m three hours into a 12” question. OpenEvidence hands you a clear answer with the actual pharmacokinetic reasoning, without making you feel dumb.
The point isn’t that any of these replaces clinical judgment — they don’t. The point is that five years ago, each of these answers cost you 20 minutes of PubMed digging. Now it’s 20 seconds. You still need to read the citation and apply your scope of practice. But you no longer lose the 20 minutes.
OpenEvidence vs. ChatGPT: The Comparison That Matters
This is probably the first thing everyone asks, so let’s put it in one table.
| | OpenEvidence | ChatGPT (Plus/Free) |
|---|---|---|
| Cost for nurses | Free with NPI | $0 (Free) or $20/mo (Plus) |
| HIPAA compliant | Yes (April 2025, BAA available) | No |
| Sources | NEJM, JAMA, peer-reviewed literature only | The internet + Reddit + AI-generated content |
| Citations | Every claim, clickable | Frequently hallucinated |
| Refuses when unsure | Yes (“evidence is inconclusive”) | Rare — will confidently confabulate |
| Covered by NEJM partnership | Yes, official AI partner | No |
| Usable for patient-identifying questions | Yes (per your BAA) | Absolutely not |
| Legal discovery risk | Lower (covered by enterprise contracts) | High — Heppner ruling means chats are discoverable |
| Clinical specificity | Designed for it | General-purpose |
The short version: for any clinical question, OpenEvidence is almost always the right tool. ChatGPT still has a place for non-clinical writing — the letter to a family, the shift-change summary template, the draft of a PTO request. But the moment the question has a patient in it, you’re using OpenEvidence.
One honest caveat: a 2025 comparison study in structural heart disease found that subject-matter experts rated ChatGPT-4o’s answers slightly more “reliable” than OpenEvidence’s in certain narrow subspecialty scenarios. OpenEvidence is, in some cases, almost too evidence-based — it treats absence of evidence as evidence of absence. If you’re asking something on the bleeding edge of a subspecialty, don’t assume OpenEvidence’s “we can’t find strong evidence” means “there isn’t any.” Cross-reference with your specialty’s society guidelines.
What OpenEvidence Can’t Do (Yet)
Let’s be specific, because this matters for trust.
It won’t replace your preceptor on procedural questions. “How do I actually insert this Foley in a confused, combative 90-year-old” is a clinical skill question, not a literature question. OpenEvidence is literature.
Nursing-specific topics still have gaps. The Nurse.org piece from October 2025 flagged that nursing-original research — things like thermal wound assessment protocols, specialty bedside techniques — is underrepresented in the sources OpenEvidence trains on. The medical literature skews toward MD-led research. If you ask a pure-nursing question, you might get a physician-framed answer.
Complex undifferentiated presentations get thinner answers. A November 2025 medRxiv pilot study found that accuracy drops in highly complex subspecialty scenarios and undifferentiated diagnoses — which is most of primary care, honestly. Use it as a starting point, not an ending one.
It’s not a replacement for clinical decision support built into your EMR. If your hospital uses Epic’s native CDS or Cerner’s alerts, those still fire during your workflow. OpenEvidence complements, not replaces.
The ad-supported model is something to know about. Pharma and device companies advertise inside the clinician interface. The company says ads don’t affect answers, and the UI clearly labels them. But if you see a medication suggested and there’s a discreet “sponsored” banner on the same page, that’s a useful reminder to look at your own institutional formulary before recommending it.
What This Means for You
If you’re a bedside RN who’s been using ChatGPT for clinical questions: Stop, make the switch, and thank me later. ChatGPT’s confidentiality risk post-Heppner and its hallucination rate are not worth saving the five minutes of switching tools. OpenEvidence is free, faster for citations, and — this matters — won’t be a discoverable record in a patient-safety incident.
If you’re a nurse practitioner: This becomes part of your morning workflow inside of two weeks. Nurse.org’s piece referenced NPs using it for antihypertensive safety in pregnancy and complex chronic disease management. Those are the exact use cases where the UpToDate search is slow and the pharmacist is unreachable.
If you’re a nursing student: Verify with your .edu email now, while you’re in school. You get the same access level as a licensed RN. This is going to be on your NCLEX study tools’ radar inside the next year, and being early to it is how you answer case-study questions faster on rotations.
If you’re a nurse manager or CNIO: Mount Sinai’s Epic integration is the template. If your system is on Epic, ask your CMIO whether OpenEvidence is on the evaluation roadmap. The enterprise rollout gives RNs native access inside the EMR workflow instead of a separate tab — that’s the difference between “I’ll try this later” and “I’m using it on every admission.”
If you’re an LPN, LVN, or retired nurse without an active NPI: You’re still excluded from the free tier today. The 2-search/day public tier is something. Watch whether OpenEvidence expands verification paths in 2026 — the pressure is building from the nursing community, and the company has explicitly said they want full care-team coverage.
The bottom line: A nurse friend asked me last week whether OpenEvidence was “just another Epic add-on nobody actually uses.” The honest answer is: not this one. The usage curve is real (15 million consultations a month), the Mount Sinai integration makes it the default for a full care team for the first time, and the search volume on “openevidence” grew 3.3× in twelve months. If you wait six months, you’ll be the nurse catching up.
Sign Up Tonight if You Can
Ten-minute action plan if you want to actually try it:
- Look up your NPI at npiregistry.cms.hhs.gov (30 seconds)
- Go to openevidence.com, click Sign Up, select RN, enter your NPI (4 minutes)
- Confirm your email (30 seconds)
- Pick one of the five questions above and try it (2 minutes)
- Save the answer somewhere you’ll find it again — your phone’s Notes app works — and bring it up in tomorrow’s huddle (1 minute)
You now have a free, HIPAA-safe, NEJM-grounded clinical reference in your pocket. That’s a different shift than the one you walked into today.
Sources:
- OpenEvidence — official site
- Mount Sinai × OpenEvidence Epic Integration — Mount Sinai Newsroom (Mar 31, 2026)
- Mount Sinai Integrates OpenEvidence AI Into Epic EHR — HIT Consultant (Apr 1, 2026)
- Nurses, Meet the “ChatGPT for Clinicians” That Just Raised $200 Million — Nurse.org (Oct 2025)
- OpenEvidence & Registered Nurses — Healthcare Simulationists (Substack)
- AI in Healthcare: Tips for Using Elicit, OpenEvidence, Speechify — Stanford Lane Blog
- OpenEvidence Offers ‘ChatGPT for Doctors’ — Healthcare Brew (Nov 2025)
- Comparing Large Language Models in Healthcare — Mayo Clinic Platform
- OpenEvidence in Primary Care — PMC / SAGE Journals 2025
- Accuracy and Repeatability of OpenEvidence on Complex Subspecialty Scenarios — medRxiv (Nov 2025)
- Daniel Nadler on OpenEvidence — Sequoia Capital podcast
- NPI Registry — CMS NPPES