Trump moves to blacklist Anthropic AI from all government work.

Here is how the most consequential AI policy confrontation in US history unfolded, from A to Z:

LAST SUMMER

The Pentagon awards $200M contracts to each of Anthropic, OpenAI, Google, and xAI. But only one is cleared for classified systems: Anthropic's Claude. Defense officials considered it the most advanced and secure model for sensitive military applications. Claude ends up being used in the operation to capture Nicolás Maduro, embedded deep in Palantir's most sensitive military infrastructure.

The contract includes two hard limits: Claude cannot be used for fully autonomous weapons. Claude cannot be used for mass domestic surveillance of Americans.

For two years, those limits were never triggered. Pentagon users loved Claude. Internally, one official described disentangling it as a "huge pain in the ass."

TUESDAY, FEB 24

Hegseth and Amodei meet face to face. Hegseth demands Anthropic lift all safeguards and make Claude available for "all lawful purposes" — the same standard xAI agreed to when it signed its own classified systems deal. He threatens to label Anthropic a "supply chain risk," a designation normally reserved for foreign adversaries like Huawei, and invokes the Korean War-era Defense Production Act as a tool to compel compliance. Deadline: 5:01pm Friday, Feb 27.

WEDNESDAY, FEB 25

The Pentagon reaches out to Boeing and Lockheed Martin asking them to assess their exposure to Anthropic. The message to the entire defense industrial base is clear: prepare to cut ties. Lockheed confirms it received the request. Boeing says it has no active Anthropic contracts.

THURSDAY, FEB 26

Amodei goes public. He refuses to back down: "These threats do not change our position. We cannot in good conscience accede to their request." He flags the real concern: "My main fear is having too small a number of fingers on the button, such that one or a handful of people could operate a drone army without needing any other humans to cooperate."

Pentagon undersecretary Emil Michael fires back on X, calling Amodei a liar with a "God-complex" who wants to personally control the US military.

Also Thursday: OpenAI CEO Sam Altman sends an internal memo saying OpenAI holds the same red lines. Over 100 Google employees write to chief scientist Jeff Dean demanding identical protections for Gemini. Workers at Microsoft and Amazon circulate similar letters.

FRIDAY, FEB 27 — DEADLINE DAY

With one hour to go, Trump bypasses Hegseth entirely and posts on Truth Social: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE... I am directing EVERY Federal Agency to IMMEDIATELY CEASE all use of Anthropic's technology."

He grants the Pentagon a six-month phaseout window — an implicit admission that Claude is too deeply embedded in classified operations to pull overnight.

THREE THINGS THAT MAKE THIS EXTRAORDINARY

1. The $200M contract is almost irrelevant. The supply chain risk label would require every company with a Pentagon contract to certify zero exposure to Anthropic, potentially gutting its $380B enterprise business.

2. Claude is the only AI model running on classified military networks. Its would-be replacements from Google, OpenAI, and xAI are not yet cleared. The six-month window exists because there is no ready substitute.

3. Anthropic is planning to IPO this year. This entire standoff plays out in front of prospective investors, and Amodei has chosen the reputational bet: "Our valuation and revenue have only grown since we took this stand."

The question now is whether OpenAI, Google, and xAI hold their stated red lines when the Pentagon comes knocking — or whether $200M and classified access prove too much to resist.

One source put it plainly: "The Pentagon doesn't want to destroy Anthropic. It wants to send a message to every AI company about who sets the rules."

#Anthropic #Pentagon #AI #NationalSecurity #Claude #DarioAmodei #AIPolicy #Trump

Feb 27 at 9:54 PM