Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”
Defense Secretary Pete Hegseth labeled the start-up a “supply chain risk,” a move that would sever ties between the company and the U.S. government.
Anthropic’s unwillingness to accede shows how the Department of Defense cannot easily force Silicon Valley firms to comply. Unlike defense contractors that have worked with the Pentagon for decades and are reliant on longstanding military contracts, the A.I. companies are contending with different internal pressures and external factors.
…and Sam Altman caved: OpenAI agreed to the DoD’s terms to pick up Anthropic’s lucrative government contracts.
The article mentions that:
The rallying behind Anthropic was tinged with opportunism. Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons,” which is the same stance as Anthropic’s.
But late Friday, after Mr. Trump had ordered federal agencies to stop using Anthropic’s technology, OpenAI said it had reached its own agreement with the Pentagon to provide its A.I. for classified systems. OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
For many A.I. companies, government contracts are only one piece of an expanding pipeline of business. The $200 million contract that Anthropic had been negotiating with the Pentagon for A.I. use in classified systems, which precipitated the fight, would most likely be only a small percentage of the company’s revenue. Anthropic primarily sells A.I. software to other businesses and last year hit a monthly pace of $8 billion to $10 billion in annual revenue, Dr. Amodei said in December.
I’d argue these quotes are on topic but don’t come close to addressing the logical inconsistencies.
OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
That could depend on your take on this statement. I personally don’t understand how this could be done with high certainty, and most AI researchers I respect seem to have reached a similar conclusion.
Humorously, Anthropic wanted the gig. So it might have been OpenAI who got the rally. They’re both the same people.
OpenAI said it had found a way to put safeguards into its technologies that would somehow prevent the systems from being used in ways that it does not want them to be.
When pressed for specifics on the nature of the safeguards, OpenAI’s Altman replied, “We’ve included the phrase ‘pretty please don’t use this for killing people or spying on Americans’ in our contract with the Department of Defense. With this language in place, we’re confident that our company values of respecting human life and the privacy of all Americans are protected.” /s
The real problem for Anthropic is the clearly vindictive “supply chain risk” designation they were immediately slapped with, which prohibits the Defense Department from buying services from anyone who uses Anthropic’s services themselves.
This can be contested in court, at least, and is almost certain to be ruled in Anthropic’s favor since it’s so blatantly unjustified. But that might not matter. It’ll take a while (costing contracts and momentum), and once the ruling is made, I wouldn’t bet on the Trump administration obeying it anyway.
For money, surely
Okay. Clearly OpenAI is hella desperate to make next month’s rent on their trillion-dollar house of promissory-note cards.