Microsoft Just Backed Anthropic’s Fight Against the Pentagon—and Issued a Stark Warning to the Department of War

Microsoft has filed a court brief backing Anthropic after the Pentagon labeled the AI company a “supply chain risk,” warning the move could disrupt military AI systems and ripple across the U.S. tech sector.

BY LEILA SHERIDAN, NEWS WRITER

Microsoft CEO Satya Nadella. Illustration: Inc.; Photo: Anthropic, Getty Images

Microsoft has stepped into Anthropic’s escalating legal fight with the Trump administration, filing a court brief supporting the artificial intelligence company after the Pentagon labeled it a “supply chain risk” and ordered federal agencies to stop using its technology.

In an amicus brief shared with The Hill on Tuesday, Microsoft urged a federal judge to temporarily block the designation while the courts review the dispute. The company warned that forcing contractors to immediately cut ties with Anthropic could disrupt existing AI systems used by the U.S. military and ripple across the broader technology sector.

“Immediate implementation of the designation could have broad negative ramifications for the entire technology sector and American business community,” Microsoft argued in the filing, according to The Hill. The company also warned that U.S. warfighters could be “hampered” if defense contractors are forced to quickly alter AI products and contracts built around Anthropic’s models.

The legal clash began Monday when Anthropic sued the Trump administration after the Pentagon classified the company as a supply chain risk, a designation typically used for foreign adversaries and companies considered national security threats. The label effectively prevents defense contractors from using the company’s technology.

Anthropic argues the move was retaliation for its refusal to give the government unrestricted access to its AI models, The Hill reported. The company has drawn firm boundaries around how its technology can be used, including prohibitions on mass domestic surveillance and fully autonomous weapons.

In its lawsuit, Anthropic asked the court to overturn the Pentagon’s determination and block enforcement of the designation. The company alleges the government punished it for expressing a “protected viewpoint” on AI safety and the limitations of its own systems.

According to a Microsoft spokesperson quoted by The Hill, the company believes the dispute should be resolved through negotiation rather than immediate restrictions.

© Inc.com