
Trump directs US government agencies to stop using Anthropic products

Author: admin_zeelivenews

Published: 28-02-2026, 3:32 AM


By Jen Judson, Josh Wingrove and Maggie Eastland

 


The Pentagon declared Anthropic PBC a supply-chain risk after President Donald Trump directed US government agencies to stop using its products, capping a feud between the artificial intelligence giant and defence officials over safeguards on its technology. 


Defence Secretary Pete Hegseth ordered the Pentagon to bar its contractors and their partners from any commercial activity with Anthropic. In a post on X, Hegseth set a six-month period for Anthropic to hand over AI services to another provider.

 

“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote. “This decision is final.” 


Hegseth had given the company until 5:01 p.m. on Friday to allow the Pentagon to use the Claude chatbot for any purpose within legal limits — but without any usage restrictions from Anthropic. The firm has insisted that Claude not be used for mass surveillance against Americans or in fully autonomous weapons operations.

 
 


Anthropic hasn’t received any direct communication from the government on the status of negotiations, and will challenge any supply chain risk designation in court, the company said in a statement on Friday.

 


Since its founding, Anthropic has positioned itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology. That stance had thrust Chief Executive Officer Dario Amodei into a high-stakes confrontation with Hegseth, who has vowed to root out “woke” practices at the sprawling agency he leads.

 


Before the deadline arrived, Trump wrote on social media that he was ordering federal agencies to drop Anthropic, warning that if the company failed to cooperate with the handover it would face “major civil and criminal consequences” that he didn’t specify.

 


“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” the president posted on social media. “Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”

 


In its statement on Friday, Anthropic said being labelled a supply-chain risk “would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”

 

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said. “We will challenge any supply chain risk designation in court.” 


The Trump administration’s decision will send a shockwave through AI companies that have invested hundreds of billions of dollars in the technology and are weighing how to win federal business.

 


It wipes out as much as $200 million in work that Anthropic had agreed to do for the military, along with smaller but important contracts for civilian agencies including the State Department. The company also struck a broad deal last year with the General Services Administration for federal agencies to use Claude for a nominal $1 fee.

 


The supply-chain risk declaration could have a wider impact, since other firms also deploy Anthropic’s technology in their own products. Anthropic is racing to persuade more businesses to pay for its software to offset the immense cost of developing AI and justify its lofty $380 billion valuation. The company is widely expected to be preparing for an initial public offering as soon as this year.

 


Hegseth’s order that “no contractor, supplier, or partner may conduct any commercial activity with Anthropic” will affect Maven Smart System, the widely used AI-enabled battle management system made by Palantir Technologies Inc., which in the second half of 2024 negotiated a deal to use Anthropic’s AI tools.

 


A Palantir representative didn’t immediately respond to a request for comment about the likely impact or ease of removing Claude from systems it provides for use by US military operators.

 


Supply-chain risk designations are usually reserved for companies seen as adversaries, such as China’s Huawei Technologies Co., Michael Horowitz, a political science professor at the University of Pennsylvania who served in the Pentagon during the Biden administration, told Bloomberg News.

 


“This would be legally murky territory, and it is not clear how it would turn out,” he said.

 


Ousting Anthropic from the government creates a potential national security challenge, given that until recently the company offered the only AI system that could operate in the Pentagon’s classified cloud. Its Claude Gov tool is a favoured option among defence personnel for its ease of use.

 


Replacing Claude in Pentagon systems could set the US government’s national security use of AI back by at least six months, said a person familiar with the matter who works in the field. 

 

One Pentagon worker familiar with Claude said its code base is easier than others’ to work with, making up-to-the-minute changes easier, swifter and more reliable. But that doesn’t mean rivals won’t catch up, or that the Pentagon wouldn’t put up with a lesser partner while its AI improves, the worker added.


Amodei had pledged Thursday to ensure that any transition away from the company’s products to another provider would be smooth. The firm was facing growing competition for Pentagon business from Elon Musk’s xAI, which had just won approval for classified work, as well as from rivals OpenAI and Google.

 


OpenAI Chief Executive Officer Sam Altman had also pushed back against the Pentagon, telling employees in a memo seen by Bloomberg that his company was talking to defence officials about using its models with similar limits. “We would like to try to help de-escalate things,” he wrote in the memo.

 


The moves will almost certainly provoke further blowback from Silicon Valley, where workers rallied to Anthropic’s side. Employees at several major tech companies including Amazon.com Inc. and Microsoft Corp. had called for their firms to reject Pentagon demands for unrestricted use of AI products. 

 


On Thursday, Amodei had made clear the company was standing its ground. “These threats do not change our position: we cannot in good conscience accede to their request,” he said in a statement.

 


That provoked an evening social-media tirade from Emil Michael, the undersecretary of defence for research and engineering, who wrote that Amodei “is a liar and has a God-complex.”

 


“He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote.

 


The feud erupted just weeks after the Pentagon published a new strategy on artificial intelligence that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic barriers to use.

 


The approach specifically urged the Defence Department to choose models that are “free from usage policy constraints that may limit lawful military applications.” Defence officials have reiterated that the military would use Anthropic’s technology within the bounds of the law. 

 


Michael struck a more conciliatory tone Friday, telling Bloomberg Television the department was willing to continue its discussions with Anthropic.

 


“So long as they’re in good faith, we’re always open to talks,” Michael said. “Up until that deadline, I’m open to more talks and I told them so.”

 
