Anthropic vs Pentagon: 5 Questions to Ask Before May 19

Anthropic vs Pentagon: Google signed Apr 28. Anthropic refused in March. Here's the 5-question audit every AI buyer should run before May 19.

On Monday, April 28 at 4:00 p.m., Google signed a classified AI deal with the U.S. Department of Defense for “any lawful government purpose.” That’s the same language Anthropic refused in March — the refusal that got Anthropic officially designated a Pentagon “supply chain risk” on March 5, sued in federal court on March 9, and watched its emergency stay denied at the D.C. Circuit on April 8.

950+ Google employees signed an open letter against their own deal that same Monday. The Pentagon’s AI chief, on CNBC, called over-reliance on a single AI model “never a good thing” — coded shade at Anthropic.

If you’re an enterprise AI buyer, a defense subcontractor, a GRC officer, or anyone whose org runs Claude, Gemini, or both, the next 21 days matter more than the news framing suggests. The D.C. Circuit hears oral argument on May 19. Until then, you’re operating in a window where the legal status of your AI vendor is ambiguous — and yet most procurement frameworks were never built for the situation we’re in.

This is the audit you should run before the court rules. Five questions, plain English, primary sources cited.

What Actually Changed on April 28 (in 90 Seconds)

Three deals are now public, all with the same root structure: classified Pentagon access to commercial AI models for “any lawful government purpose.”

| Vendor | Status | Deal cap | Date |
| --- | --- | --- | --- |
| OpenAI | Active | $200M | 2025 |
| xAI | Active | $200M | 2025 |
| Google | Just signed | Reported similar tier | Apr 28, 2026 |
| Anthropic | Refused → designated supply-chain risk | None | March 2026 |
The Apr 28 Google deal is the headline. But the structurally important fact is what’s in the contract language, not that it was signed. Per The Information’s reporting (since confirmed by Reuters and Business Standard — no primary outlet has published the full text), the contract includes:

“the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”

Read that qualifier carefully. “Without appropriate human oversight and control” is not a ban. It’s a permission slip with a footnote. The contract permits autonomous-weapons use with human oversight, whatever that ends up meaning operationally. It is not a flat no.

A separate clause states the deal “does not confer any right to control or veto” anything the government decides to do. Google has guardrails on paper. The Pentagon has the steering wheel.

Anthropic’s Acceptable Use Policy doesn’t, contrary to most press characterization, contain a standalone clause banning fully autonomous target-selection by lethal weapons. That red line was articulated by CEO Dario Amodei in his February 24 statement and in private contract negotiations. The AUP does ban “battlefield management applications,” “predictive policing,” and “law enforcement applications that violate or impair the liberty, civil liberties, or human rights of natural persons.” More tellingly, the AUP carries this carve-out:

“Anthropic may enter into contracts with certain governmental customers that tailor use restrictions to that customer’s public mission and legal authorities if, in Anthropic’s judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms”

Translation: Anthropic was always willing to negotiate a tailored contract. The DoD just wouldn’t accept the tailoring it offered. The bilateral negotiation is what broke down — not the AUP.

The Supply-Chain-Risk Designation Has Real Teeth (Even If It Sounds Like PR)

Most coverage has framed the supply-chain-risk designation as symbolic — a PR slap. It is not. It’s enforceable procurement law that propagates through every defense contract in the U.S. supply chain.

The DoD invoked two statutory authorities: DFARS Subpart 239.73 (NDAA-based exclusion authority) and FASCA (Federal Acquisition Supply Chain Security Act). Per the Mayer Brown legal analysis, the designation applies to “all DoW procurements for which 48 C.F.R. Subpart 239.73 is applicable” and extends to all Anthropic affiliates, products, and services.

The mechanism is FAR 52.204-25-style flow-down, the same model used for Huawei and ZTE telecom-equipment exclusions. Here’s how it cascades:

  1. Prime contractor to DoD signs a contract that incorporates the supply-chain-risk clause.
  2. The clause requires the prime to flow it down to all subcontracts and other contractual instruments, including subcontracts for commercial items.
  3. First-tier subs flow it to second-tier. Second-tier flows to third. Mandatory all-tiers cascade, regardless of dollar value.
  4. Every contractor at every tier must make an affirmative certification that they don’t use Anthropic’s Claude in their military-related operations.
  5. If a contractor discovers covered technology in their supply chain, they have 1 business day to notify the contracting officer and 10 business days to submit a mitigation plan.
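The cascade above can be sketched as a toy model. This is a minimal Python sketch, not an implementation of any actual procurement system: the contractor names, tree shape, and the `uses_claude` flag are all hypothetical. It shows how a mandatory all-tiers flow-down propagates a certification obligation down a contract tree regardless of tier.

```python
from dataclasses import dataclass, field

@dataclass
class Contractor:
    name: str
    uses_claude: bool  # hypothetical flag: Claude appears in military-related work
    subs: list["Contractor"] = field(default_factory=list)

def flow_down(node: Contractor, tier: int = 1) -> list[tuple[str, int, bool]]:
    """Propagate the clause to every tier; record whether each contractor
    can make the affirmative non-use certification."""
    results = [(node.name, tier, not node.uses_claude)]
    for sub in node.subs:
        results.extend(flow_down(sub, tier + 1))  # mandatory all-tiers cascade
    return results

# Entirely made-up supply chain for illustration
prime = Contractor("PrimeCo", False, [
    Contractor("FirstTierSub", True, [Contractor("SecondTierSub", False)]),
])
for name, tier, can_certify in flow_down(prime):
    print(f"tier {tier}: {name} certifies non-use = {can_certify}")
```

The point of the sketch: a `False` certification anywhere in the tree surfaces at that tier, and under step 5 above it starts the one-business-day notification clock.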

Per CNBC’s Apr 28 reporting, this designation explicitly mandates that “defense contractors, including Microsoft and Palantir, certify non-use of Claude in their military-related operations.” Microsoft is a Claude customer through Azure Foundry. Palantir runs Claude in some of its AIP deployments. Both now have a procurement question that didn’t exist 60 days ago.

One important nuance: the designation only covers DoD/DoW procurements. Companies may continue using Claude for non-Pentagon work, including civilian agency contracts. So the question for a Fortune 500 buyer isn’t “do we ban Claude?” — it’s “where in our org does Claude touch a Pentagon-flow-down contract, and have we certified accordingly?”

That question is what most procurement frameworks haven’t answered yet.

Where Your Procurement Framework Has a Hole

Here’s a thing I went looking for and didn’t find: a published vendor-risk or AI-procurement framework — from any major Fortune 500 buyer, the federal CIO Council, GSA, or even NIST AI RMF 1.0 — that contemplates the specific scenario of your AI vendor simultaneously holding a DoD classified contract and a commercial relationship with you.

NIST AI RMF addresses third-party AI risk in its GOVERN and MAP functions, but it does not contemplate the dual-use government/commercial vendor conflict. GSA and the CIO Council have AI acquisition guidance, but neither addresses what happens when “your AI vendor is also the Pentagon’s AI vendor” — and what that implies for your GRC reviews, your data-residency posture, and your contract pass-through obligations if you’re somewhere in a DoD supply chain.

The conversation on X this week is at the ethics-versus-national-security altitude. The procurement, legal, and operational layers are silent — not because they don’t matter, but because most procurement teams haven’t seen this configuration before. The question this week isn’t whether the deal is good or bad. The question is whether your buying committee can answer the audit below by Friday.

The 5-Question Vendor Audit (Run This Before May 19)

Pull this into your next AI vendor review. Five questions, in order. Each one references either the contract language above or the FAR/DFARS framework. None require legal training to ask, though some answers will require legal review.

1. Are we (or any of our top 20 customers) somewhere in a DoD/DoW supply chain?

This is the gating question. If the answer is a clean “no,” then the supply-chain-risk designation has minimal direct procurement impact on you, though the broader policy question (should we keep buying Claude?) remains.

If the answer is “yes” or “we don’t know,” then FAR 52.204-25-style flow-down may apply, and you have a real audit on your hands. Even one defense subcontract somewhere in your customer base creates pass-through obligations.

What to verify: Pull your top-20 customer list. Run it against SAM.gov and the Federal Procurement Data System (FPDS). Flag any customer with active DoD/DoW prime or sub status. For those, request flow-down clause text.
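That check can start as a simple name match against an exported awards file. A minimal sketch, assuming you have exported contract awards to a CSV with a `vendor_name` column (the filename and column name are hypothetical); real matching needs fuzzy name logic plus UEI/CAGE identifiers, so treat any hit here as a flag for manual review, not a conclusion.

```python
import csv

def flag_defense_customers(customers: list[str], awards_csv: str) -> set[str]:
    """Return customers that appear as vendors in a DoD/DoW awards export.
    Matching is naive case-insensitive equality -- a starting point only."""
    with open(awards_csv, newline="") as f:
        vendors = {row["vendor_name"].strip().lower() for row in csv.DictReader(f)}
    return {c for c in customers if c.strip().lower() in vendors}
```

Usage: `flag_defense_customers(top_20, "dod_awards.csv")` gives you the shortlist to chase flow-down clause text for.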

2. Where in our org is Claude deployed, and which deployments touch a flow-down contract?

This is the inventory question. Most enterprises don’t have a clean answer. Claude shows up via Anthropic’s direct API, via AWS Bedrock, via Microsoft Azure Foundry, via integrations in Notion / Slack / Salesforce / GitHub Copilot, and via custom builds. Each touchpoint is a potential flow-down trigger if it’s used in performance of a DoD-flowed contract.

What to verify: Three lists. (a) All Claude API keys issued. (b) All third-party SaaS where Claude is the underlying model. (c) Mapping of deployments to contracts your team is performing under. Your CTO and your contracts team need to do this together.
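The three lists can be joined with a few dictionaries. A sketch with entirely made-up unit, product, and contract identifiers, flagging business units whose Claude touchpoints perform under a DoD-flowed contract:

```python
# (a) API keys by business unit, (b) SaaS with Claude underneath,
# (c) contracts with DoD flow-down clauses -- all names hypothetical.
api_keys = {"legal-ops": "ak-1", "rfp-desk": "ak-2"}
saas_with_claude = {"rfp-desk": ["Notion AI"], "marketing": ["Slack AI"]}
flowed_contracts = {"rfp-desk": ["W911-SUB-0042"]}  # unit -> DoD-flowed contracts

# Any unit with a Claude touchpoint AND a flowed contract is exposure
units = set(api_keys) | set(saas_with_claude)
exposure = {u: flowed_contracts.get(u, []) for u in units if flowed_contracts.get(u)}
print(exposure)  # -> {'rfp-desk': ['W911-SUB-0042']}
```

In practice lists (a) and (b) come from your CTO's side and list (c) from contracts; the join is trivial once both halves exist, which is why the two teams have to build it together.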

3. Can we truthfully certify non-use of Claude in our military-related operations?

If you’re a defense prime or subcontractor, the certification requirement isn’t theoretical — it’s contractual. The Claude usage map from Question 2 feeds directly into this certification.

What to verify: Read the supply-chain-risk language in your latest DoD-flowed contract. Identify the certification clause. Cross-check it against your Claude-deployment inventory. If you can’t honestly certify non-use, you have a one-business-day clock once that’s surfaced — and a 10-business-day mitigation-plan deadline.
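Those two clocks are worth computing precisely. A sketch of the business-day arithmetic, skipping weekends only; federal holidays and the contract's actual counting convention are ignored here, so confirm the real rule with counsel before relying on a date.

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days (Mon-Fri), skipping weekends.
    Holidays are deliberately ignored in this sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 = Monday through Friday
            n -= 1
    return d

discovered = date(2026, 5, 1)  # a Friday: covered tech found in supply chain
notify_by = add_business_days(discovered, 1)   # notify the contracting officer
plan_by = add_business_days(discovered, 10)    # submit mitigation plan
```

From a Friday discovery, the one-business-day notice lands the following Monday and the mitigation plan is due two Fridays out. A Friday find does not buy you the weekend.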

4. What’s our exposure if the May 19 D.C. Circuit ruling goes against Anthropic?

Two scenarios. If Anthropic wins on the merits at the D.C. Circuit (or the N.D. Cal. injunction holds), the designation effectively pauses and you have breathing room. If Anthropic loses, the designation hardens — and any open contracts you’ve signed assuming “this will get resolved” come due.

The April 8 D.C. Circuit panel reasoning is worth reading: the court framed Anthropic’s harm as “relatively contained risk of financial harm to a single private company” — a signal about how the appellate court is weighing the equities. That doesn’t predict the May 19 outcome, but it’s not a confidence signal either.

What to verify: Build two scenario plans. Scenario A: injunction holds, Anthropic wins. Scenario B: designation enforced, May 20+. For each, list the contractual triggers, the certification deadlines, and the alternative model providers (Gemini Enterprise, Microsoft MAI, OpenAI on AWS post-restructure, open-weights via Bedrock). Yes, “switch to Gemini” is now a different conversation than it was last week — see Question 5.

5. Does the Apr 28 Google deal change our Gemini risk posture?

This is the question almost nobody is asking publicly, and it’s the one that matters most for buyers reading the contract language. Google’s “without appropriate human oversight and control” qualifier is contractual aspiration, not external safeguard. There’s no third-party audit right disclosed in public reporting. There’s no penalty clause. There’s an explicit no-veto clause. Google’s safety filters are, per anonymous Engadget sourcing, adjustable at government request — though that specific clause has not been confirmed in primary sources.

If your governance posture explicitly relies on “our AI vendor will refuse problematic government uses on principle,” that posture is now strictly weaker for Google than it was on April 27. Anthropic, even with the designation, has demonstrated a willingness to take a financial hit to maintain a stated red line. Google has demonstrated a willingness to sign with the “any lawful government purpose” language and a non-binding human-oversight footnote.

That doesn’t make Gemini wrong for your use case. It does change the disclosure language you need in your AI Vendor Risk Register.

What to verify: Pull your Gemini Enterprise contract. Identify any clause that references your vendor’s right to refuse government access requests. Update your AI Vendor Risk Register to reflect the Apr 28 deal. If you’re in healthcare, finance, or any regulated industry where downstream client trust matters, this is a board-level disclosure question.

What’s Enforceable vs What’s PR

The single most useful frame for an enterprise buyer reading the Apr 28 announcements:

Google’s Pentagon deal language is PR-grade enforcement. The “appropriate human oversight” qualifier and the “no right to veto” clause are aspirational contractual commitments with no technical enforcement mechanism, no third-party audit right, and no public penalty clause. They’re enforceable only as terms between Google and the DoD. They’re not external safeguards an enterprise buyer can independently rely on.

Anthropic’s supply-chain-risk designation is procurement-law-grade enforcement. It propagates through every tier of every DoD-flowed contract. It requires affirmative certification. It triggers reporting obligations within one business day. That’s not PR — that’s procurement law with statutory teeth.

The wild card is the White House. Per Axios reporting on April 28, the White House is developing draft guidance — possibly an executive action — that would let federal agencies sidestep the supply-chain-risk designation and onboard new Anthropic models including Mythos. Reuters could not independently verify the report and Anthropic declined comment. If this guidance lands before May 19, the procurement landscape shifts again. One source characterized the effort as “bring Anthropic back into the fold while saving face.”

That’s a lot of moving pieces. The audit above doesn’t make them stop moving. It does make sure that on May 20, whichever way the court rules, you’re not the buyer who hasn’t run the inventory yet.

What to Do Friday

Forward this to your Head of GRC and your AI/ML lead. If you’re a defense prime or sub, also send to your contracts team and your VP of Sales. Set 30 minutes on the calendar for next Tuesday to walk through the five questions. By the May 19 oral argument, you should have answers — or at least a written list of who owes you which answer, and by when.

This is one of those weeks where the news framing and the operational reality are running on different clocks. The 950+ employees signing the open letter and the Pentagon AI chief’s CNBC sound bite are the news framing. FAR 52.204-25 flow-down certification deadlines and the May 19 oral argument are the operational reality. Your buying committee lives downstream of both.

For more on the Microsoft × OpenAI restructure that closed two days before this Pentagon deal — and how it changes the model stack you can actually buy at $15/user — see our companion analysis on Microsoft Agent 365’s 4-model picture and the post-restructure read for everyone using ChatGPT or Claude at work.
