If you’re an HR generalist at a 50-2,000 employee company and you’ve been seeing the Mobley v. Workday lawsuit news on LinkedIn this month, you’ve probably had the same uncomfortable internal conversation: “Wait — does our ATS do that?”
The honest answer for most HR teams at mid-market companies is “I don’t know.” You inherited the ATS from the last People Ops lead. You configure the workflows you need. The AI features were turned on by your vendor at some point: you’re not sure when, you’re not sure which ones, and you definitely don’t have a written audit. Now there’s a class-action lawsuit, and everyone is asking whether your company is exposed.
You’re not alone in this. Kristen Fife, a talent acquisition pro, wrote in an April 2026 LinkedIn post on the lawsuit: “I don’t know how anyone in TA can take it seriously.” Nikki Reyes, SHRM-CP, added in the comments that this is “why it’s crucial for us to think critically about deploying AI or algorithmic-driven practices in hiring.” The HR community is having this conversation in real time.
This post is the plain-English version: the 6 AI features to look for in your ATS, the legal exposure each one creates, the questions to ask your vendor this week, and a script for the internal memo to your leadership team. A non-lawyer can read it in about 10 minutes.
The Lawsuit, in Two Sentences
Mobley v. Workday is a federal class action, conditionally certified in 2025 and gaining traction through April 2026, on behalf of applicants 40 and over rejected by Workday-deployed AI screening tools since 2020. The legal theory: AI screening that produces a statistically disparate impact on protected classes (age, race, disability) creates employer liability under the ADEA, FCRA, and Title VII, even when the vendor wrote the algorithm. Translation for HR: “the vendor did it” is not a defense.
From u/henrytheeleventh on r/recruiting (April 16, 2026): “The Workday Lawsuit… is basically making it easier for candidates to sue if they feel a ‘black box’ algorithm unfairly rejected them. Using a sketchy AI detector to thin the herd is exactly the kind of thing that gets flagged in discovery.” From u/Ok-Row-6088 in a separate r/recruiting thread, April 2026: “Proceed with caution and look into the lawsuit against workday ai tools. Its setting a precedent that everyone should be aware of, that candidates have the same protections against discrimination as employees.”
The HR community has converged on the same takeaway. Now the question is: what’s in your ATS?
The 6 Features That Put You in the Blast Radius
This is the checklist. If your ATS does any of these things, you have AI screening — even if your vendor doesn’t market it that way. The defense isn’t “we don’t have AI”; it’s “we have AI, and here’s the audit.”
1. Resume scoring (also called “match score” or “candidate ranking”)
What it does: Assigns a numeric score (typically 0-100) to each incoming resume based on how well it matches the job description. The candidates above a threshold go into the “review” queue; the ones below get auto-rejected or buried.
Where to find it: In Workday, look under Recruiting Configuration → Job Profile Match Score. In Greenhouse, Job Posting → Scoring Rules. In Eightfold, this is the core feature; you can’t turn it off. In iCIMS and Lever, look for “AI Insights” or “Match Score.”
Why it’s risky: The scoring model is trained on your past hires. If those hires skewed younger, nondisabled, or toward one racial group, the model has learned to score similar candidates higher. Disparate impact is built in.
The audit question: “Show me the demographic breakdown of candidates who scored above and below our threshold for the last 12 months. If we exclude protected-class proxies, does the distribution change?”
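If your ATS can export scores and anonymized demographics to CSV, the threshold check itself is a few lines of Python. A minimal sketch, with hypothetical column names (`age_band`, `match_score`) and a hypothetical export file; rename everything to match what your system actually produces:

```python
# Compare pass rates above the score threshold by demographic group.
import pandas as pd

THRESHOLD = 60  # substitute your ATS's actual review cutoff

df = pd.read_csv("candidates_last_12_months.csv")  # hypothetical export
df["passed"] = df["match_score"] >= THRESHOLD

pass_rates = df.groupby("age_band")["passed"].mean()
print(pass_rates)

# EEOC "four-fifths" rule of thumb: a group whose selection rate falls
# below 80% of the top group's rate is the conventional disparate-impact flag.
impact_ratios = pass_rates / pass_rates.max()
print("Groups below the four-fifths line:", list(impact_ratios[impact_ratios < 0.8].index))
```

The four-fifths ratio isn’t a legal bright line, but it’s the screening statistic your counsel will ask about first.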
2. Knockout questions with auto-rejection
What it does: Pre-screening questions that, if answered “wrong,” automatically eliminate the candidate before any human sees the application.
Where to find it: Nearly every modern ATS supports them. “Are you currently legally authorized to work in [country]?” is fine. “Are you currently employed?” (a proxy for discrimination against the unemployed) is not. “What year did you graduate?” is the question everyone is being told to remove.
Why it’s risky: Knockout questions correlate with protected classes more often than HR teams realize. Graduation year → age. Current employment status → unemployment discrimination (protected in NYC, New Jersey, and several other states). Salary history → gender pay gap propagation (salary-history questions are banned in 22 US states).
The audit question: “Pull a list of every knockout question we have configured. For each, can we articulate why an answer should auto-reject without human review?”
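A first pass over that list can even be scripted. A rough sketch with an illustrative, not exhaustive, keyword list; every hit and every miss still needs human review:

```python
# Flag knockout questions whose wording suggests a protected-class proxy.
RISKY_PATTERNS = {
    "graduation year / age": ["graduat", "year did you", "date of birth", "how old"],
    "employment status": ["currently employed", "employment gap"],
    "salary history": ["current salary", "salary history", "previous pay"],
}

def flag_question(question: str) -> list[str]:
    q = question.lower()
    return [risk for risk, keywords in RISKY_PATTERNS.items()
            if any(k in q for k in keywords)]

# Example usage with questions exported from your ATS:
for q in ["What year did you graduate?",
          "Are you currently legally authorized to work in the US?"]:
    print(f"{q!r} -> {flag_question(q) or 'no obvious proxy'}")
```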
3. AI video interview analysis
What it does: Records the candidate’s video interview, transcribes it, and uses AI (HireVue, Modern Hire, etc.) to score the candidate on traits like “engagement,” “enthusiasm,” or the more nebulous “culture fit.”
Where to find it: If you’re using HireVue, Modern Hire, Spark Hire, or similar — you have this. Sometimes integrated into Workday or iCIMS as an embedded feature.
Why it’s risky: Speech patterns, facial expressions, eye contact, and accent all correlate with race, disability (autism spectrum, hearing impairment, speech impediments), and gender. Illinois’ AI Video Interview Act took effect in 2020; New York City Local Law 144 requires bias audits; and the Colorado AI Act adds stricter requirements taking effect in 2026.
The audit question: “Have we received the bias audit certification from our video interview vendor? When was it last updated? Has anyone on our HR team actually read it?”
4. Predictive hiring / “success” models
What it does: Builds a profile of “what successful candidates look like” based on past hires who stayed and got promoted, then ranks new candidates against that profile.
Where to find it: Eightfold’s core “talent intelligence” feature. Workday Talent and Performance. Sometimes built into iCIMS Talent Acquisition Suite.
Why it’s risky: This is where bias becomes structural. If your past “successful” employees skewed in any demographic direction, the model amplifies it. Worse, the model often considers proxies you don’t realize are proxies — name, school, ZIP code, college extracurriculars — that map directly onto protected classes.
The audit question: “What features does our predictive hiring model use as inputs? Can we exclude any name-based, school-based, or ZIP-code-based features?”
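One way to pressure-test the vendor’s answer is a proxy check: measure how strongly each model input predicts a protected attribute. A sketch under stated assumptions (a hypothetical export file with categorical columns), using Cramér’s V, a 0-to-1 association measure:

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("model_inputs_with_demographics.csv")  # hypothetical export

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Association between two categorical columns: 0 = none, 1 = perfect."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    return (chi2 / (n * (min(table.shape) - 1))) ** 0.5

for feature in ["zip_code", "school", "extracurriculars"]:  # hypothetical inputs
    print(f"{feature} vs age_band: Cramér's V = {cramers_v(df[feature], df['age_band']):.2f}")

# A high value means the feature can stand in for the protected attribute
# even when the attribute itself is excluded from the model.
```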
5. AI-generated job description optimization
What it does: Rewrites your job descriptions to “improve apply rates” or “attract better candidates” using AI suggestions.
Where to find it: Workday Recruiting, Greenhouse Inclusive, LinkedIn Recruiter, and most JD-writing tools.
Why it’s risky: AI-rewritten JDs sometimes introduce gendered language or culture-fit phrasing that filters out protected groups before they apply. The bias is invisible to the recruiter — the AI suggested it.
The audit question: “Can we run our last 20 job descriptions through a gender-coded language analyzer (Textio, Gender Decoder, or similar)? Are any of them measurably skewed?”
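Before (or alongside) a commercial tool, you can get a rough read from the published research word lists. A toy sketch in the spirit of Gender Decoder, using a small illustrative sample of the Gaucher/Friesen/Kay word lists; the real tools use the full lists:

```python
import re

# Illustrative samples only; the published research lists are much longer.
MASCULINE = {"aggressive", "ambitious", "assertive", "competitive",
             "decisive", "dominant", "independent", "ninja", "rockstar"}
FEMININE = {"collaborative", "committed", "dependable", "interpersonal",
            "loyal", "nurturing", "supportive", "understanding"}

def code_balance(jd_text: str) -> str:
    words = set(re.findall(r"[a-z]+", jd_text.lower()))
    m, f = len(words & MASCULINE), len(words & FEMININE)
    if m == f:
        return f"neutral ({m} vs {f})"
    return f"{'masculine' if m > f else 'feminine'}-coded ({m} vs {f})"

print(code_balance("We want an ambitious, competitive rockstar who is independent."))
```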
6. AI-assisted candidate sourcing
What it does: AI tools (LinkedIn Recruiter, hireEZ, SeekOut) recommend candidates from external pools based on profile-matching.
Where to find it: If your recruiters use LinkedIn Recruiter or any third-party sourcing tool, you have this.
Why it’s risky: The recommendation algorithm has been documented to produce demographically skewed candidate pools — even when the recruiter has set “diversity” filters. The AI’s recommendations come from a search index it built on its own data, not yours.
The audit question: “How does the demographic breakdown of our last 100 sourced candidates compare with our applicant-pool baseline? If they diverge significantly, why?”
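The divergence test is simple statistics. A sketch with placeholder counts; substitute your real numbers per group:

```python
from scipy.stats import chisquare

# Hypothetical numbers: age bands <40 vs 40+ among 100 sourced candidates,
# against the proportions in your overall applicant pool.
sourced = [82, 18]
baseline_proportions = [0.65, 0.35]

expected = [p * sum(sourced) for p in baseline_proportions]
stat, p_value = chisquare(sourced, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
# A small p-value (conventionally < 0.05) means the gap is unlikely to be
# chance -- that's the "why?" conversation to have with your sourcing vendor.
```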
The Liability Question (Plain English)
Under the legal theory in Mobley v. Workday, you are potentially liable as the employer for any of these features producing disparate impact on protected classes. The vendor is also potentially liable, but their liability doesn’t offset yours. Your defense is a documented audit plus human-in-the-loop oversight plus bias monitoring. Your non-defense is “we didn’t know what the tool was doing.”
From X user @YosoyYdo on April 26, 2026 (1,193 views): “The risk of relying exclusively on AI tools during the hiring process is considerable… HR must understand precisely how their tools works and how they might be influencing their decision-making.” This is the consensus.
Regulators are getting involved at every level. The EEOC has issued guidance on AI in hiring. New York City Local Law 144 requires annual bias audits for AI hiring tools. The Colorado AI Act (effective 2026) extends similar requirements at the state level. The EU AI Act categorizes AI hiring tools as “high-risk,” with full applicability in August 2026.
If your company has more than ~50 employees, you almost certainly have at least one of the 6 features above, and probably three. The audit is no longer optional.
What to Do This Week (5 Concrete Steps)
1. Open ChatGPT (or Claude) and run this audit prompt
Act as a senior HR compliance attorney. I'm an HR generalist at a
{COMPANY SIZE}-employee company in {INDUSTRY}. We use {ATS NAME}
as our applicant tracking system.
Walk me through a 10-question audit to identify which AI screening
features in {ATS NAME} are likely active in our environment, what
the typical default settings are, and which of those features
create exposure under ADEA, FCRA, Title VII, NYC Local Law 144,
and the Colorado AI Act.
For each feature, output:
- The feature name
- The likely default setting in {ATS NAME}
- The specific legal risk
- The exact configuration question I should ask my vendor
I am not a lawyer. Use plain language. I need to walk into a
meeting with our People Ops director on Friday with this list.
How to use: Open ChatGPT, paste the prompt, replace {COMPANY SIZE} / {INDUSTRY} / {ATS NAME} with your actual values. Press Enter. What you’ll see: A structured 10-question audit with feature names, default settings, and vendor questions you can use verbatim. What to do with the output: Save it. Print it. Bring it to the meeting.
2. Email your ATS vendor (today)
Subject line: “AI features audit — request for compliance documentation.” Body: ask for (a) full list of AI / algorithmic features active in your tenant, (b) current bias audit status, (c) Colorado AI Act / NYC LL144 compliance certifications, (d) date these features were turned on. Vendors are getting these emails by the dozen this week. Yours will get answered.
3. Pull a demographic breakdown of your last 12 months of applicants
If you have an HRIS that tracks demographics (most do, even if anonymized), you can produce an applicant-pool-vs-hired-pool demographic comparison in a few hours. The metric you care about: did your AI-screening pipeline produce statistically different outcomes for protected classes vs the input pool? If yes, that’s a flag. If no, that’s a defense.
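A sketch of that comparison, assuming an HRIS export with hypothetical columns `stage` (`applied`/`hired`) and `group` (an anonymized demographic band); rename to match your actual export:

```python
import pandas as pd

df = pd.read_csv("hris_export_last_12_months.csv")  # hypothetical export

applied = df[df["stage"] == "applied"]["group"].value_counts(normalize=True)
hired = df[df["stage"] == "hired"]["group"].value_counts(normalize=True)

comparison = pd.DataFrame({"applicant_share": applied,
                           "hired_share": hired}).fillna(0.0)
comparison["shift"] = comparison["hired_share"] - comparison["applicant_share"]
print(comparison.sort_values("shift"))
# A large negative shift for a protected group is the flag; stable shares
# across the funnel are the defense.
```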
4. Document a human-in-the-loop policy
The strongest legal defense in Mobley-style cases is documented human review of AI-screened decisions. “The AI scored this candidate 32. A human recruiter independently reviewed the resume and confirmed the rejection.” If you don’t have this documented today, write a one-page policy by Friday and circulate it to your recruiters. Even a draft is better than none.
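What “documented” can look like in practice is one structured record per AI-scored decision, capturing the score, the reviewer, and the rationale together. A minimal sketch; the field names are hypothetical:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    candidate_id: str
    ai_score: int
    ai_recommendation: str  # e.g. "reject"
    reviewer: str
    human_decision: str     # e.g. "reject confirmed" or "advanced anyway"
    rationale: str
    reviewed_at: str

record = ReviewRecord(
    candidate_id="C-10482",
    ai_score=32,
    ai_recommendation="reject",
    reviewer="j.martinez",
    human_decision="reject confirmed",
    rationale="Missing required certification; confirmed on manual review.",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only log: one JSON line per reviewed decision.
with open("review_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Even a shared spreadsheet with the same columns serves the purpose; the point is that the human decision is recorded at the time it happens.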
5. Send the leadership memo
Draft a one-page memo to your CEO / COO / CFO that says: “Following the Mobley v. Workday class certification, we audited our ATS and identified the following AI features and the following actions we’re taking.” The memo creates a paper trail showing leadership was informed and a remediation plan exists. This is the single highest-leverage compliance artifact you can produce.
For the structured 90-minute version of all five steps, with templates, scripts, and a complete vendor-question list, the AI for HR (Legal-Safe) course walks through it end-to-end in a single afternoon. It’s the systematic version of this post: the same audit framework, but with the worksheets, the vendor email templates, and the leadership memo draft already worked out.
What This Won’t Do
A few honest disclaimers about the audit approach above:
- It is not legal advice. Mobley v. Workday is real, and the class certification is real, but your company’s specific exposure depends on your specific ATS configuration and your jurisdiction. This post is the explainer; your employment counsel is the legal answer.
- It won’t fix the underlying training data. AI screening bias exists because the training data has bias baked in. Even after audit, even after configuration changes, statistical bias in your applicant pool is hard to eliminate. The goal of audit is defensible documentation, not bias-free output.
- It is not a one-time exercise. Vendors push updates. New AI features ship monthly. The audit you do this week needs to be repeated quarterly at minimum.
What’s at Stake
For HR generalists at small-to-mid-market companies, the Mobley lawsuit is the moment AI hiring stopped being a vendor problem and became your problem. The audit isn’t optional. The leadership memo isn’t optional. The vendor email isn’t optional. The 10 minutes you just spent reading this is the cheapest part of getting compliant; the next 10 hours of audit work is what actually closes the exposure.
The good news: the audit is doable. The 6 features are knowable. The vendor questions are answerable. You don’t need to be a lawyer; you need to be a thorough HR professional with a checklist and a Friday meeting on the calendar.
Walk into that meeting with the audit. Walk out with a remediation plan. The lawsuit will work its way through the courts; your company’s position depends on what you document this week, not next quarter.