Workday AI Lawsuit: The 5-Step HR Audit, 90 Minutes Flat

Workday AI lawsuit explained for HR: a 90-minute, 5-step audit with copy-paste ChatGPT prompts. No legal jargon. Ready before your CEO asks.

A federal judge let Mobley v. Workday move forward as a collective action covering every applicant 40 and over who applied through Workday’s screening since September 2020. In the company’s own filings, that software has rejected over 1.1 billion applications.

If you run hiring at a mid-market company — between roughly 50 and 2,000 employees — that ruling didn’t just put Workday on the hook. It put you on the hook too. Workday’s own defense argued, in plain language, that it has “no ability to force a customer to make decisions in the hiring process or otherwise.” Translation: the vendor says the customer made the call. The customer is your HR team.

That’s why a Reddit post about getting auto-rejected at 1:50 a.m. now has 287 upvotes, why a thread about being “rejected in 4 minutes after an hour of tailoring” has 316, and why a March 17 X post explaining the ruling pulled 14,700 likes. The story has crossed the line from legal-trade chatter into the kind of thing your CEO sees on LinkedIn before you do.

This guide is what to actually do about it — a five-step audit you can run this week with the tools your team already has open. No legal jargon. No expensive consultants. Just the work that gives you a defensible record before the next ruling lands.

What actually happened (the 90-second version)

Three things, in this order:

Mobley v. Workday (Northern District of California). Filed February 2023 by Derek Mobley, a Black applicant over 40 who’d applied to more than 100 jobs through Workday’s platform. On May 16, 2025, Judge Rita Lin granted preliminary certification on the age-discrimination claim. The class includes anyone 40+ who was denied a job recommendation through Workday’s platform from September 24, 2020 through today. Discovery is now active, and the motion on the remaining claims is calendared for 2026.

Eightfold AI (California Superior Court, Contra Costa). Filed January 20, 2026 by former EEOC chair Jenny Yang and the nonprofit Towards Justice. This one is a different theory entirely. Plaintiffs don’t claim the algorithm was biased. They claim it was secret — that Eightfold scraped data on more than a billion workers, gave each candidate a 0-to-5 “likelihood of success” score, and discarded low-scored candidates before any human looked at them. The legal framework: the Fair Credit Reporting Act and California’s ICRAA, which both require disclosure when consumer reports drive decisions about you.

iTutorGroup (the precedent everyone forgets). Settled with the EEOC in August 2023. Their AI auto-rejected female applicants 55+ and male applicants 60+. First federal AI hiring settlement. The pattern was clear two years before Mobley filed.

What “Workday lost the algorithm-did-it defense” actually means

There were two arguments Workday made in court that mattered for HR teams. Both lost.

Argument one: Disparate-impact protection only applies to employees, not applicants. The court rejected this, citing the EEOC’s longstanding position that anti-discrimination law protects job seekers at the application stage too. So if your AI tool screens applicants by a process that disproportionately rejects a protected class, you’re exposed — same as if a human did it.

Argument two: Workday is a vendor; the customer makes hiring decisions, not us. The court let this argument live in part — Workday isn’t the employer — but the practical effect was the opposite of what the vendor wanted. By emphasizing it has “no ability to force a customer to make decisions,” Workday handed every HR team a clear message: the configuration is yours, the data is yours, the outcome is yours.

Tripp Scott’s labor-law team summarized it bluntly in their April 17 analysis: when vendors stand on neutrality, employers are left holding the bag. That’s the legal reality your audit needs to be ready for.

Who this audit is for (and who it isn’t)

This works if:

  • You run hiring or HR at a company with 50–2,000 employees
  • Your stack includes any of: Workday, Greenhouse, Lever, iCIMS, SmartRecruiters, ADP Workforce Now, BambooHR, Eightfold, HiredScore, Paradox, Phenom, or HireVue
  • You don’t have a dedicated AI governance lead, and the legal team is “we’ll get to it”
  • You’ve never opened ChatGPT for a work task and would prefer not to start with anything fancy

This is not a substitute for an independent bias audit (NYC Local Law 144 requires one of those, conducted by a third party, every 12 months for any covered tool). It’s not a legal opinion. It is the work that gets you to “we did the diligence” the next time a candidate complaint, a regulator question, or a CEO calendar invite shows up.

The 5-step audit

The whole thing takes about 90 minutes if you do it in one sitting. Block the calendar. Close Slack. The point is to walk away with a written record, not a vibes-based memory of “we looked into it.”

Step 1 — Map every AI touchpoint in your hiring funnel (15 min)

Open a blank doc. Across the top, list your hiring stages: Sourcing → Posting → Application → Screen → Assessment → Interview → Reference → Offer.

Down the left, list every tool in your stack that touches a candidate. Don’t skip anything. LinkedIn Recruiter, your ATS, the spam filter on your hiring inbox, your scheduling assistant, the HireVue interview platform, the background-check vendor — all of it.

Now mark every cell where AI is doing something to a candidate. Use three labels:

  • AUTO: the tool can reject, rank, or filter without a human seeing the result first
  • ASSIST: the tool surfaces or prioritizes, but a human must act before anything happens to the candidate
  • NONE: no AI feature in this stage, full stop

This single grid is the most valuable artifact you’ll produce all quarter. It is the evidence you keep handy if a regulator asks “what AI tools influenced your hiring decisions?” — which under Colorado SB 24-205 (effective June 30, 2026) becomes a documentation requirement for “high-risk AI systems,” and under NYC Local Law 144 has been enforceable since 2023 with penalties starting at $500 per violation and rising to $1,500 per day.

A clean grid usually surfaces three or four AUTO cells nobody on the leadership team realized were there. That’s the work.
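The grid usually lives in a spreadsheet, but if you want it versionable alongside the rest of the audit record, a minimal Python sketch works too. The tools, stages, and labels below are illustrative, not your actual stack:

```python
import csv

# Hiring stages across the top of the grid, in the Step 1 order.
STAGES = ["Sourcing", "Posting", "Application", "Screen",
          "Assessment", "Interview", "Reference", "Offer"]

# Example tools and their AI involvement per stage -- replace with your stack.
# Labels: AUTO (acts without human review), ASSIST (human must act), NONE.
grid = {
    "ATS resume screen":       {"Screen": "AUTO"},
    "Scheduling assistant":    {"Interview": "ASSIST"},
    "Background-check vendor": {"Reference": "ASSIST"},
}

def auto_cells(grid):
    """Return (tool, stage) pairs where AI can reject without human review."""
    return [(tool, stage)
            for tool, stages in grid.items()
            for stage, label in stages.items()
            if label == "AUTO"]

def write_grid_csv(grid, path):
    """Export the full grid so it can be attached to the audit memo."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Tool"] + STAGES)
        for tool, stages in grid.items():
            writer.writerow([tool] + [stages.get(s, "NONE") for s in STAGES])

escalate = auto_cells(grid)  # the cells to carry into Step 3
```

The `auto_cells` list is exactly the set of paths your Step 3 policy has to name a human reviewer for.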

Step 2 — Audit your live job descriptions for ADEA / FCRA flags (20 min)

Pull your 10 most recently posted job descriptions. Drop them one at a time into ChatGPT (free tier is fine for this). Use this prompt:

You are an employment-law-aware editor reviewing a job description for hiring-bias exposure. I will paste a job description below. For that posting, do the following:

1. Flag any phrase, requirement, or qualification that could disproportionately exclude older workers, workers with disabilities, women, racial minorities, or non-native English speakers under U.S. law (specifically ADEA, ADA, Title VII, and FCRA principles).

2. For each flag, explain in one sentence WHY it's a flag (what proxy or disparate-impact concern it triggers).

3. Suggest a defensible, equally-effective rewrite that preserves the actual job requirement.

4. Note any "preferred qualifications" that read as ageism dog-whistles ("digital native", "high energy", "recent graduate", "5–7 years experience" caps, "culture fit", university-name preferences, GPA cutoffs).

5. End with a one-sentence overall risk rating: LOW / MODERATE / HIGH.

Do not hedge. Be specific. The goal is a defensible posting, not a sanitized one.

Job description:
[PASTE JOB DESCRIPTION HERE]

Save the output for each posting. Anywhere ChatGPT flags MODERATE or HIGH, fix the language and repost. The fixes are usually small. The documentation matters more than the fixes.

This prompt isn’t legal advice. But the reason it works as a first pass is that it forces you to see every line a regulator might. The University of Miami Law Review’s analysis of the Mobley case made the same point: most of the exposure isn’t in the algorithm — it’s in the inputs the algorithm was given.
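A cheap, deterministic pre-pass before ChatGPT can catch the dog-whistle phrases from item 4 of the prompt. A sketch — the phrase list mirrors the prompt above and is a starter set, not an exhaustive one:

```python
import re

# Phrases from item 4 of the Step 2 prompt that tend to read as age
# proxies under ADEA scrutiny. Illustrative starter list, not exhaustive.
DOG_WHISTLES = [
    r"digital native",
    r"high energy",
    r"recent graduate",
    r"\b\d+\s*[-–]\s*\d+\s+years\b",  # experience caps like "5-7 years"
    r"culture fit",
]

def pre_screen(job_description: str) -> list[str]:
    """Return the patterns that matched, so a human can review each flag."""
    text = job_description.lower()
    return [pattern for pattern in DOG_WHISTLES if re.search(pattern, text)]

posting = "We want a high energy digital native with 5-7 years experience."
flags = pre_screen(posting)  # three flags on this illustrative posting
```

Anything this catches goes straight to rewrite; the ChatGPT pass then handles the subtler proxies a regex can’t.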

Step 3 — Document your human-in-the-loop policy (15 min)

If you can’t show a regulator a written policy that says “no candidate is rejected by AI without a human reviewing the rejection,” your odds in a complaint go down. Right now, the published research suggests roughly half of companies operate without that human checkpoint, and only about three in ten have full human oversight on AI-driven rejections. Pick which side of that line you’re on, then document it.

Write a one-page policy that answers four questions:

| Question | What “good” looks like in your policy |
| --- | --- |
| Which AI tools auto-reject candidates today? | Specific tool names and the exact stage where they reject. Cite your Step 1 grid. |
| Who is the human reviewer for each auto-reject path? | A named role (not a person — roles persist). E.g., “Senior Recruiter on the role” not “Sarah.” |
| What is the SLA for human review before a rejection email goes out? | Hours, not “ASAP.” E.g., “no automated rejection email is sent before 24 hours of human review.” |
| What is the candidate appeal process? | An email address, a form, a documented response window. |

If your current setup can’t answer one of these, that’s the gap to close before the policy gets signed. NYC Local Law 144 requires you to give candidates 10 business days’ notice before AEDT use; California’s FEHA Automated Decision System rules (effective October 1, 2025) require four-year retention of decision logs. Your one-pager should reference both.
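Both of those rules reduce to dates and durations, so they can be sanity-checked mechanically. A sketch using the example figures above (24-hour SLA, four-year retention); your own policy numbers go in their place:

```python
from datetime import datetime, timedelta

# Example figures from the policy table above -- substitute your own SLA.
REVIEW_SLA = timedelta(hours=24)      # human-review window before rejection emails
RETENTION = timedelta(days=4 * 365)   # CA FEHA ADS rules: four-year log retention

def can_send_rejection(screened_at: datetime, reviewed_by, now: datetime) -> bool:
    """A rejection email goes out only after a named human role has signed
    off AND the review window has elapsed."""
    return reviewed_by is not None and now - screened_at >= REVIEW_SLA

def retain_until(decided_at: datetime) -> datetime:
    """Earliest date a decision log may be purged under the retention rule."""
    return decided_at + RETENTION

t0 = datetime(2026, 3, 1, 9, 0)
print(can_send_rejection(t0, None, t0 + timedelta(hours=30)))                # False: no reviewer
print(can_send_rejection(t0, "Senior Recruiter", t0 + timedelta(hours=30)))  # True
```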

Step 4 — Brief your CEO (or hiring exec) in writing (20 min)

The thing nobody warns you about in compliance work is that the deliverable that protects you is the memo nobody read but everyone got. Email beats Slack. Signed beats unsigned. Dated beats undated.

Send a short note. Subject line: “Hiring AI compliance audit — completed [date].”

Body:

Hi [name],

Following the May 2025 Mobley v. Workday class certification and the
January 2026 Eightfold lawsuit, I ran our internal hiring-AI audit
this week. Summary:

- AI inventory: [N] tools across our hiring funnel ([N] auto-reject,
  [N] assist, [N] none). Full grid attached.
- Job-description audit: [N] postings reviewed, [N] flagged for
  rewrite, [N] rewritten and reposted.
- Human-in-the-loop policy: drafted, attached, ready for legal review
  before sign-off.
- Re-audit cadence: quarterly, next on [date].

Two open items I'd like your call on:
1. [the one place your AUTO reject path doesn't have a human reviewer]
2. [the one tool your contract doesn't include indemnification for]

Happy to walk through it for 15 minutes whenever works.

[name]

That’s it. The point is the date, the audit, and the named open items. Thirty months from now, when a question comes back, this email is the answer.
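If the memo recurs quarterly, filling it from the audit’s actual counts keeps the numbers honest. A sketch using Python’s `string.Template`; the field names are hypothetical placeholders for your Step 1 and Step 2 tallies, not any tool’s schema:

```python
from string import Template

# Field names ($date, $total, ...) are hypothetical -- they mirror the
# memo above, not any particular system's fields.
MEMO = Template("""\
Subject: Hiring AI compliance audit -- completed $date

- AI inventory: $total tools ($auto auto-reject, $assist assist, $none none).
- Job-description audit: $reviewed postings reviewed, $flagged flagged for
  rewrite, $fixed rewritten and reposted.
- Human-in-the-loop policy: drafted, attached, ready for legal review.
- Re-audit cadence: quarterly, next on $next_date.
""")

def render_memo(**fields) -> str:
    # safe_substitute leaves any missing field as a visible $placeholder
    # instead of raising, so a half-filled memo is obvious on review.
    return MEMO.safe_substitute(**fields)

memo = render_memo(date="2026-03-06", total=9, auto=3, assist=4, none=2,
                   reviewed=10, flagged=4, fixed=4, next_date="2026-06-05")
```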

Step 5 — Schedule the quarterly re-audit (10 min)

Open your calendar. Put four recurring events on it — one per quarter, 90 minutes each. Title each one “Hiring AI quarterly compliance audit.” Add a checklist in the body:

  • Re-run the funnel grid (Step 1). What’s new since last quarter?
  • Re-audit 10 most recent live job descriptions (Step 2 prompt).
  • Confirm the human-in-the-loop policy is still accurate (Step 3).
  • Send the quarterly memo to leadership (Step 4).
  • Verify each vendor’s most recent bias-audit report is on file. NYC LL144-covered tools require an annual independent audit; ask your vendor for the latest one if it’s been more than 12 months.

The cadence does the work. The first audit is exploratory. The second is when you start finding patterns. The fourth is when an internal candidate tells you the rejection email cadence on a particular role looks suspicious — and you have a documented process to actually go look.
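If you’d rather script the reminder dates than click through the calendar, a sketch — the start date is arbitrary, and 13 weeks stands in for a quarter:

```python
from datetime import date, timedelta

def quarterly_audit_dates(first: date, count: int = 4) -> list[date]:
    """Return `count` audit dates spaced 13 weeks apart -- a close-enough
    approximation of a quarter for a reminder script; the calendar app
    handles true quarterly recurrence."""
    return [first + timedelta(weeks=13 * i) for i in range(count)]

dates = quarterly_audit_dates(date(2026, 3, 6))  # four Fridays, one per quarter
```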

The state-by-state snapshot you need on the same one-pager

Compliance is no longer one law — it’s three live ones plus federal liability under unchanged statutes. Keep this matrix near your audit:

| Jurisdiction | Status | What it requires of HR | First deadline that matters |
| --- | --- | --- | --- |
| Federal (EEOC) | AI-specific guidance withdrawn January 27, 2025; ADEA, Title VII, ADA still apply | Disparate-impact analysis on any “selection procedure,” AI or not | Already in effect |
| NYC Local Law 144 | Enforced since July 2023; December 2025 Comptroller audit found enforcement under-resourced — stricter sweep coming | Annual independent bias audit, public disclosure of results, 10-day candidate notice | Annual audit cycle |
| Colorado SB 24-205 | Delayed to June 30, 2026 | Documented AI governance + risk-management program for any “high-risk” AI system in employment | Risk management policy: June 30, 2026. Impact assessments + consumer disclosure: February 1, 2027 |
| California FEHA / ADS regs | Effective October 1, 2025 | Four-year retention of decision logic and inputs/outputs for automated decision systems | Already in effect |

Massachusetts, Illinois, Texas, and New York State all have AI-hiring bills moving in 2026. If you operate in any of those, your policy needs to be portable, not jurisdiction-specific.

What this audit can’t do

Worth being honest. A 90-minute internal audit:

  • Doesn’t replace an NYC LL144 independent audit. That requires a third party (BABL AI, ORCAA, BLDS, SolasAI, or another qualified auditor). If your tools fall under LL144, the third-party audit is mandatory and unavoidable.
  • Doesn’t fix biased training data. Vendor models trained on biased historical hiring outcomes will keep reproducing those outcomes regardless of how clean your job descriptions are.
  • Doesn’t get you off the hook for Eightfold-style FCRA violations. If your vendor is producing scored “consumer reports” without disclosure, your audit doesn’t cure that. Read the Outten & Golden complaint and the Norton Rose Fulbright analysis on the FCRA theory before signing your next renewal.
  • Doesn’t substitute for legal review. The one-pager and CEO memo are operational documents. Run the final policy past employment counsel before it’s the company’s official position.

What it does is establish that you saw the risk, named it, and acted on it before a regulator or a candidate’s lawyer did. That’s the entire defensive purpose. It is also the thing most companies, per the OutSolve and Fisher Phillips analyses, still don’t have on file.

What This Means for You

If you’re an HR generalist at a mid-market company: This audit is your job until a Chief AI Officer is hired. Run it this week. The five-step version above is faster than the next “AI in HR” webinar you’ve been invited to. Block 90 minutes Friday morning.

If you’re a recruiter using Workday or HiredScore: Stop telling candidates “the system rejected you.” That phrasing is the exact quote your legal team doesn’t want surfacing in a deposition. The candidate didn’t apply to a system. They applied to your company. Use language that reflects who actually owns the decision.

If you’re a job seeker over 40 reading this: A rejection email arriving within minutes of submission is, per the cited 2026 hiring-stack research, almost certainly automated. The CBS News explainer notes that a documented timestamp on that rejection email is potentially useful evidence in any future class proceeding. Save the screenshots. Save the dates. Especially if you applied with the right qualifications and got rejected without an interview.

If you’re a CEO: Your downside on this is much bigger than your HR lead is communicating. The Mobley class is uncapped. The Eightfold theory means your vendor’s practices can land in your inbox even if you did nothing wrong yourself. Your HR head running a documented quarterly audit is the cheapest insurance you’ll buy this fiscal year. Sign the memo when it arrives.

The bottom line: The legal exposure of AI hiring isn’t a future problem. It’s a present problem with case law dating to August 2023 (iTutorGroup), May 2025 (Mobley certification), January 2026 (Eightfold filing), and an enforcement uplift coming from Colorado in June 2026. The companies that’ll spend the least on this are the ones who already have the audit, the policy, and the email thread to prove it.

The CEO question you’ll get next quarter

There’s one question that comes from the CEO after they read the next viral LinkedIn thread, and it’s the same question every quarter:

“Are we exposed on this Workday thing?”

The audit you just ran is the eight-word answer: “No, and here’s the documentation that proves it.”

If you want the deeper version — the lesson-by-lesson walkthrough of EEOC, NYC Local Law 144, Colorado AI Act, and Mobley v. Workday with a credentialed completion certificate — our AI for HR Without Creating a Legal Mess course is built around this exact playbook. The companion Smarter Hiring with AI course covers the day-to-day of writing job posts and screening with AI without creating new exposure.

Run the audit. Send the memo. Put the calendar invites in. Then go finish your week.
