Anthropic and AI’s Oppenheimer Moment: How Israel-US Are Blurring Ethical Lines in Iran War


Claude is an artificial intelligence (AI) assistant developed by Anthropic, designed to be helpful, honest, and harmless. It can handle tasks like summarisation, search, writing, query response, and coding with strong reliability and predictability. Anthropic is a prominent US AI safety and research company, founded in 2021 by former OpenAI employees, and it researches how to train helpful, honest, and harmless AI systems. Today, the company is in the news because it has been blacklisted by the Pentagon over ethics concerns.

Some time ago on Truth Social, US President Donald Trump stated that the US would not allow a ‘radical left, woke’ company to dictate how the military operates. He criticised Anthropic for allegedly attempting to force the Department of War to follow its terms of service instead of adhering to constitutional principles. Trump further said he wants all federal agencies to cease using Anthropic’s technology immediately, with a six-month phase-out period. He also warned that the company would face consequences if it failed to comply with the needs of the Department of War.

Interestingly, in the ongoing Iran conflict (Operation Epic Fury), US forces are allegedly still depending on Anthropic’s Claude system, even after Trump’s directive to stop using the company’s AI tools. It has been reported that the Claude system is being used during the ongoing air operation against Iran.

The US Central Command (Centcom) has been using Claude in operational environments for intelligence assessments, target identification, and simulated battle planning. This highlights a clear contradiction between Trump’s orders and what the Pentagon is actually doing on the ground, and shows how deeply embedded AI systems have become in military planning and operations.


© The Wire