
Claude Under Attack


One can admire Anthropic CEO Dario Amodei for his stand on principle, even to the point of risking his company's future. Hours before the U.S. and Israel launched Operation Epic Fury/Roaring Lion on February 28, 2026, at 1:15 AM EST (8:15 AM Israel time), President Trump and U.S. Secretary of War Pete Hegseth declared war on Anthropic. President Trump's post on Truth Social is dated February 27 at 10:47 PM; Secretary Hegseth's tweet on X is dated February 28, 2026, at 12:14 AM, only one hour before the start of the U.S.-Israeli offensive on Iran. The timing is striking, considering that the only Gen AI model integrated into U.S. classified systems is Anthropic's Claude, through its partnership with the decision-intelligence company Palantir.

If not resolved within the six-month phase-out period set by Trump and Hegseth, the move could have a catastrophic outcome for one of the most innovative AI companies. The administration not only ordered a halt to the use of Anthropic's technology; it went as far as designating Anthropic a Supply-Chain Risk to National Security: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." On March 5, 2026, it was reported that Anthropic had received a formal letter from the Pentagon on the subject. Anthropic claims the action is illegal and plans to challenge it in court.

Leaving aside the threats, the style, and the possible legal fight between Anthropic and the U.S. government, it is worthwhile to focus on the issues, specifically Amodei’s statement that “AI systems of today are nowhere near reliable enough to make fully autonomous weapons” (the other issue is “domestic mass surveillance,” i.e., mass surveillance of U.S. citizens).

Concerning autonomous weapons, in February 2023, the U.S. Department of Defense (DOD, now Department of War) issued a directive titled “Autonomy in Weapon Systems” (DOD Directive 3000.09), whose objectives are:

• To establish policy and assign responsibilities for developing and using autonomous and semi-autonomous functions in weapon systems, including armed platforms that are remotely operated or operated by onboard personnel.

• To establish guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

An autonomous weapon system is defined as “a weapon system that, once activated, can select and engage targets without further intervention by an operator.”

Section 3 of the document deals with verification and validation (V&V), and testing and evaluation (T&E), of autonomous and semi-autonomous systems: “Systems will go through rigorous hardware and software V&V and realistic system developmental and operational T&E, including analysis of unanticipated emergent behavior.”

Autonomous weapon systems, as defined above, are already part of the U.S. military arsenal. In an interview held on March 6, 2026, former Deputy Assistant Secretary of Defense Professor Michael Horowitz stated: “The U.S. military have been using autonomous weapon systems for more than 40 years.”

These weapon systems, which use older technologies, were thoroughly evaluated and tested following DOD procedures.

In his interviews, Amodei refers to an army of millions of drones or robots that can operate without any human oversight, and to weapons that fire without any human involvement. He raises the question of what norms would govern such AI weapons. Indeed, this is a very serious issue that requires the attention of leaders and policymakers. In fact, in February 2023, the U.S. Department of State published the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," which, according to the department's website, has been endorsed by over 50 countries, including Israel.

At present, this policy statement has not been modified or replaced by the Trump administration, unlike other AI policies established during the Biden presidency.

As quoted above, Dario Amodei’s concern regarding the use of Anthropic’s Gen AI for autonomous weapons is that the technology is not at the required level of maturity to implement such a system. One must assume that Amodei is familiar with DOD Directive 3000.09, which covers any technology, including LLMs.

It is reasonable to estimate that an autonomous weapon system of the kind Amodei describes, with millions of autonomous drones or robots replacing human soldiers, is in the early stages of development and years away from full operational deployment. Instead of risking his company, Amodei could have taken a more pragmatic approach and used other platforms to pursue the very necessary discussion on autonomous weapon systems based on powerful AI technology.

In short, the DOD already has a relatively recent directive governing autonomous weapons, and the system Amodei imagines will not be deployed in the near future.

Anthropic's Claude has risen to become a leading model in tasks such as coding and content creation, and Dario Amodei is a creative and farsighted leader. Too bad that in his dealings with the Pentagon, especially with this administration's Pentagon, instead of risking his company he did not follow the adage: Don't be right, be wise (אל תהיה צודק, תהיה חכם).


© The Times of Israel (Blogs)