Sam Altman posted on X Friday night that OpenAI struck a deal with the Pentagon. His AI models are going on classified military networks.
He called it the “Department of War.” Not Defense. War. Like that was a flex.
Hours earlier, Anthropic — built by people who quit OpenAI over safety concerns — got blacklisted by the Trump administration. Their crime? Refusing to let their AI power autonomous weapons and mass surveillance without restrictions.
Anthropic said no. Got punished. OpenAI stepped over them to sign the contract.
The chatbot you use to write emails and brainstorm ideas? It’s being wired into systems that decide who lives and who dies. And the man who built it spent a decade swearing this would never happen.
What “Classified Networks” Actually Means
“Pentagon deal” sounds abstract. So let’s get specific about what classified military networks actually do.
They process intelligence data. Target identification. Threat assessment. Communications interception. The infrastructure that runs modern warfare and surveillance lives on these networks.
When Altman says OpenAI will deploy models on classified networks, he means ChatGPT’s core technology gets plugged into the systems the U.S. military uses to find people, track people, and decide what to do about them.
Altman insists the agreement includes prohibitions on “domestic mass surveillance” and “autonomous weapon systems.” He says the Pentagon “displayed a deep respect for safety.”
But here’s the thing. Anthropic asked for the exact same restrictions. The Pentagon told them no. Defense Secretary Pete Hegseth called those restrictions “philosophical” and “woke.” Then the government declared Anthropic a supply-chain risk to national security — a designation usually reserved for Chinese companies like Huawei.
So the Pentagon rejected safety guardrails when Anthropic proposed them. Then accepted them when OpenAI proposed them. Same guardrails. Different company.
Something doesn’t add up.
We Already Know What “AI on Military Networks” Looks Like
We don’t have to imagine what happens when AI gets wired into military targeting. We’ve already seen it.
The Lavender System
In Gaza, the Israel Defense Forces deployed an AI system called “Lavender” that processed mass surveillance data — phone records, social connections, behavioral patterns — and flagged up to 37,000 Palestinians as potential targets.
A second system, “Where’s Daddy?”, tracked those flagged individuals through mobile phone location data and alerted operators when a target arrived home. At home. With their families.
The human review for each target? About 20 seconds. Enough time to check a name. Not enough time to question an algorithm.
The system had a known error rate of roughly 10%. That means around 3,700 of those 37,000 flagged individuals were likely misidentified: police officers, aid workers, people who shared a name with someone else. UN experts reported more than 15,000 civilian deaths in the first six weeks of the war, the period when these AI systems were most heavily relied on for target selection.
Not science fiction. This already happened. And those AI systems were far dumber than what OpenAI is putting on classified networks now.
The Libya Precedent
In March 2020, a Turkish-made Kargu-2 drone hunted down and attacked human targets in Libya without requiring any data connection between the operator and the weapon. A UN Security Council report described it as a “fire, forget and find” capability. The drone selected its own targets.
That was 2020 technology. Primitive compared to GPT-4 or whatever OpenAI is deploying now.
The Scale Problem
DARPA is currently working on swarms of 250 autonomous lethal drones. India wants 1,000-drone swarms. The UN Secretary-General has called for a legally binding treaty to ban autonomous weapons by 2026.
Now add the world’s most capable language models to that picture. Models that can process intelligence reports, cross-reference databases, identify patterns across millions of data points, and generate action recommendations.
That’s what’s going on classified networks.
The Future Nobody’s Talking About
The Pentagon deal isn’t an endpoint. It’s a starting line. And the finish line is terrifying.
Mass Surveillance, Industrialized
AI doesn’t just make surveillance possible. It makes it automatic.
Right now, surveillance requires humans. Analysts have to read intercepted communications, review camera footage, connect dots between data points. It’s expensive and slow. That’s actually a feature, not a bug — the friction is what prevents governments from surveilling everyone all the time.
AI removes that friction.
Anthropic CEO Dario Amodei explained why his company drew the line: AI enables a government to assemble “scattered, individually innocuous data [about individual Americans] into a comprehensive picture of any person’s life — automatically and at massive scale.”
Your phone location data. Your purchase history. Your social media connections. Your email metadata. Every piece individually is nothing. But an AI model can stitch them together into a complete profile of who you are, where you go, who you talk to, and what you believe.
And it can do this for everyone. Simultaneously. Without getting tired or asking questions.
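Here’s a minimal sketch of what that aggregation looks like in code. Every record, field, and identifier below is hypothetical, and real systems are vastly more sophisticated, but the core move is exactly this mundane: join scattered feeds on a shared identifier, and innocuous records become a dossier.

```python
# Minimal illustration of automated data aggregation.
# All records, fields, and identifiers are hypothetical.
from collections import defaultdict

# Each feed is individually innocuous: a location ping, a purchase,
# a social connection. None reveals much on its own.
location_pings = [("user_4821", "2026-02-06 08:14", "40.7128,-74.0060")]
purchases      = [("user_4821", "2026-02-05", "pharmacy")]
contacts       = [("user_4821", "user_9377"), ("user_4821", "user_1042")]

def build_profiles(location_pings, purchases, contacts):
    """Join every feed on a shared identifier: one dossier per person."""
    profiles = defaultdict(lambda: {"locations": [], "purchases": [], "contacts": []})
    for uid, ts, coords in location_pings:
        profiles[uid]["locations"].append((ts, coords))
    for uid, date, category in purchases:
        profiles[uid]["purchases"].append((date, category))
    for uid, other in contacts:
        profiles[uid]["contacts"].append(other)
    return profiles

# The loops don't get tired and don't ask questions. They run the same
# whether the input covers one person or three hundred million.
print(build_profiles(location_pings, purchases, contacts)["user_4821"])
```

The join used to be an analyst’s workday. Now it’s three for-loops.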
Altman says the agreement prohibits “domestic mass surveillance.” But here’s the problem: the definition of “domestic” gets blurry fast. What about a U.S. citizen living abroad? What about communications that cross borders? What about the data of foreign nationals that lives on American servers?
And who enforces the prohibition? OpenAI says it will deploy personnel with security clearances to monitor usage. A handful of company employees watching how the entire U.S. military uses their technology on classified networks — networks where, by definition, outsiders aren’t allowed to see what’s happening.
Autonomous Kill Chains
Here’s what “human responsibility for the use of force” means in practice: a human somewhere in the loop clicks “approve.”
But what does that approval look like when the AI has already identified the target, calculated the threat score, recommended the response, and selected the weapon? The human isn’t making the decision. They’re rubber-stamping the AI’s decision. Just like the 20-second reviews in Gaza.
Thomas Wright of the Brookings Institution put it plainly: “Demanding unconditional access before these systems are ready is not an assertion of authority. It is a wager that the unknowns will not matter.”
The unknowns include: Can GPT-level models reliably distinguish between a combatant and a civilian from drone footage? Can they account for cultural context in intelligence analysis? Can they understand that the person carrying what looks like a weapon might be carrying a farming tool?
The answer, right now, is no. Even Anthropic acknowledged this, saying their models “are not reliable enough to be used in fully autonomous weapons” and that “allowing current models to be used in this way would endanger America’s warfighters and civilians.”
That was Anthropic’s argument. The Pentagon called it “woke.”
The Normalization Problem
The most dangerous thing about this deal isn’t the deal itself. It’s what comes next.
Once AI is on classified networks, it becomes infrastructure. It gets integrated into workflows. People depend on it. And the guardrails that exist today? They become negotiable. Because the next contract renewal won’t happen in public. It will happen behind classification barriers, where no one outside the Pentagon and OpenAI will know what changed.
Today: “no mass surveillance, no autonomous weapons.” Next year: “limited autonomous targeting in defined scenarios.” Year after: “expanded use consistent with evolving threat environment.”
That’s how institutional creep works. Not in big dramatic leaps, but in small redefinitions of terms that already sounded flexible.
Every Contradiction, Receipts Included
Sam Altman is very good at saying the right thing at the right time. Here’s what he said, and what actually happened.
“Open-Source, For Humanity” → Closed, For Profit
2015: OpenAI’s founding charter committed to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” Research was published freely. Code was shared.
2019: OpenAI created a “capped-profit” subsidiary allowing 100x investor returns. Microsoft invested $1 billion. Internal communications from 2016-2017, revealed by the OpenAI Files investigation, show co-founder Greg Brockman writing: “cannot say that we are committed to the non-profit. don’t want to say that we’re committed.”
2025: OpenAI completed its conversion to a for-profit Public Benefit Corporation valued at $500 billion. SoftBank invested $41 billion.
“I Own No Equity” → Actually, I Did
May 2023: Altman told the U.S. Senate: “I have no equity in OpenAI. I’m doing this because I love it.”
September 2024: Reuters reported the restructuring was designed to give Altman equity for the first time.
December 2024: TechCrunch reported that Altman had held indirect stakes all along, through a Sequoia fund and a Y Combinator fund.
In the final October 2025 deal, he did not receive a stake. But his Senate testimony had already been undermined by the indirect holdings.
“We Need Strong Regulation” → Regulation Is Overreach
May 2023: Altman told Congress “regulatory intervention would be critical to mitigate the risks of increasingly powerful models.”
May 2025: Same man, same Senate. Agreed with Senator Ted Cruz that “overregulation” was the real danger.
“20% of Compute to Safety” → Safety Teams Dissolved
2023: OpenAI pledged 20% of compute to the Superalignment team for long-term AI safety research.
May 2024: Both team leaders resigned. Jan Leike said “safety culture and processes have taken a backseat to shiny products.” He joined Anthropic. The team was dissolved. The compute went to ChatGPT. Later that year, the AGI Readiness team was disbanded. In early 2026, the Mission Alignment team too. Three safety teams, gone.
“I Didn’t Know About the NDAs” → His Signature Was on Them
2024: When OpenAI’s equity clawback NDAs became public, Altman apologized and claimed ignorance. Vox obtained incorporation documents from April 2023 bearing Altman’s signature authorizing the provisions.
Daniel Kokotajlo, a safety researcher, forfeited equity worth 85% of his family’s net worth to keep his right to speak freely about the company’s safety failures.
“No Military Use” → Pentagon’s Classified Networks
Until January 10, 2024: OpenAI’s usage policy explicitly prohibited “military and warfare” applications.
January 10, 2024: Those words were quietly deleted. No blog post. No announcement. The Intercept noticed.
November 2025: OpenAI deleted the word “safely” from its mission statement entirely. Old: “safely benefits all of humanity.” New: “benefits all of humanity.”
February 2026: Full Pentagon classified network deployment. Hours after the company that said “no” was blacklisted.
“We Share Anthropic’s Red Lines” → We Signed What Anthropic Refused
This is the freshest one. In a memo to employees, Altman said OpenAI would “largely follow Anthropic’s approach” if it were in the same position.
But they’re not in the same position. Anthropic is blacklisted. OpenAI has the contract. Saying you share someone’s principles while taking the deal they turned down is just words.
Hundreds of employees from Google and OpenAI have since signed a petition calling on their companies to mirror Anthropic’s actual position — not just its language.
The Pattern Is the Point
| What Altman Said | What He Did |
|---|---|
| “Open-source, for humanity” | $500B for-profit corporation |
| “I own no equity” | Held indirect stakes all along |
| “We need strong regulation” | Called regulation “overreach” two years later |
| “20% of compute to safety” | Dissolved 3 safety teams in 2 years |
| “I didn’t know about the NDAs” | His signature was on the documents |
| “No military and warfare use” | Quietly erased the ban, then signed with the Pentagon |
| “Safely benefits all of humanity” | Deleted the word “safely” from the mission |
| “We share Anthropic’s red lines” | Signed the deal Anthropic refused |
Every single position was abandoned the moment it became inconvenient. Not once. Not twice. Eight times, on the public record.
This isn’t a person who changed his mind. This is a pattern. And the pattern says: whatever Sam Altman tells you today, reverse-engineer what he needs to be true right now, and you’ll understand why he said it.
What You Can Actually Do
You’re probably not going to influence Pentagon procurement. But you’re also not powerless.
The AI you use is a choice. ChatGPT isn’t the only option. It never was. Claude, Gemini, Copilot, Llama, Mistral — there are models built by companies with different values, different structures, and different relationships to military power. Use more than one. Compare them. Don’t let convenience lock you into a single provider whose priorities are shifting under your feet.
AI skills are portable. A well-crafted prompt works in ChatGPT, Claude, Gemini, or any other model. When you download and save your own AI skills, you own them. They’re text files. They don’t care which company’s model runs them. If your current provider changes direction — again — your skills still work.
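To make “portable” concrete, here’s a sketch. The skill file and model names are placeholders of my choosing, but the two endpoints are the providers’ documented REST APIs. The prompt is a text file you own; only the address changes.

```python
# A "skill" is just text in a file you control.
# (The filename, prompt, and model names are illustrative placeholders.)
import os
import requests

skill = open("email_editor_skill.txt").read()  # e.g. "Rewrite the email below to be concise..."
task = skill + "\n\nHi team, per my last email..."

# Same prompt, provider A: OpenAI's chat completions endpoint.
r1 = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": task}]},
)

# Same prompt, provider B: Anthropic's messages endpoint.
r2 = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": task}],
    },
)

print(r1.json()["choices"][0]["message"]["content"])  # OpenAI's response shape
print(r2.json()["content"][0]["text"])                # Anthropic's response shape
```

The skill file never changed. Only the URL did.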
Pay attention to who says no. In an industry where the financial incentives all push toward saying yes, the companies that say no are telling you something about their actual values. Not their marketing. Their values. Anthropic just lost a $200 million contract and got blacklisted by the federal government because it refused to remove safety guardrails. That cost them something real.
Follow the mission statements, not the press releases. When a company deletes the word “safely” from its mission and puts its technology on military classified networks in the same quarter, that’s not a mixed signal. That’s a clear one.
The Question Nobody Wants to Answer
Here’s what keeps me up at night about this.
The technology itself is neutral. Large language models process text. They predict tokens. They don’t care whether the text is an email draft or an intelligence briefing, whether the tokens represent a recipe or a target list.
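If that sounds abstract, here’s a toy version of the loop. The “model” below is a three-entry lookup table standing in for a trillion-parameter network, but the generation loop is structurally what real systems run:

```python
# Toy next-token generator. TOY_PROBS stands in for a real network;
# the loop is the same either way, and it never inspects meaning.
TOY_PROBS = {
    "preheat": {"the": 0.9, "a": 0.1},
    "the": {"oven": 0.6, "target": 0.4},  # recipe token or target token:
    "oven": {"to": 1.0},                  # the loop can't tell and doesn't care
}

def next_token_probs(tokens):
    """Return a probability table for the next token given the last one."""
    return TOY_PROBS.get(tokens[-1], {"<end>": 1.0})

def generate(prompt_tokens, n_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(n_tokens):
        probs = next_token_probs(tokens)
        nxt = max(probs, key=probs.get)  # greedy: take the likeliest next token
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["preheat"]))  # ['preheat', 'the', 'oven', 'to']
```

Swap the lookup table for a trillion-parameter network and the loop doesn’t change. The indifference is structural.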
The question is who controls them and what guardrails exist.
Right now, the guardrails are: a contract that Sam Altman says includes safety restrictions, enforced by a handful of OpenAI employees on classified networks where no external oversight is possible, signed by a CEO who has broken every major promise he’s made in the last decade.
That’s the guardrail between “AI assistant that helps you write better” and “AI system that processes mass surveillance data for the world’s largest military.”
And if the guardrail fails? We won’t know. Because it’s classified.
Related Skills
- AI Security Policy Writer — Write organizational AI usage policies that reflect your actual values
- AI Security Red Team Prompter — Test AI systems for safety vulnerabilities
- Privacy Settings Optimizer — Lock down your data across platforms and services
- AI-Proof Your Career — Build skills that matter regardless of which company wins the AI race
- System Prompt Architect — Build portable instructions that work across any AI model