In a striking demonstration of the growing reliance on artificial intelligence in military operations, the US military reportedly used Anthropic’s Claude AI system during a significant air strike in Iran, just hours after the Trump administration banned federal agencies from using the company’s technology.
According to sources cited by The Wall Street Journal, military commands, including US Central Command (CENTCOM), leveraged Claude for intelligence analysis, target identification, and battlefield simulations. This incident highlights the deep integration of advanced AI systems into defense operations, even as the administration sought to sever ties with Anthropic.
Conflict and Integration
The Pentagon had previously awarded Anthropic, alongside other major AI labs, a multiyear contract worth up to $200 million. Through partnerships with tech giants like Palantir and Amazon Web Services, Claude was approved for use in classified intelligence and operational workflows. The system was also reportedly involved in earlier operations, such as a January mission in Venezuela that resulted in the capture of President Nicolás Maduro.
The Breakdown: Contract Talks and Ethical Boundaries
Tensions escalated when Defense Secretary Pete Hegseth demanded that Anthropic grant unrestricted military use of its AI models for any lawful scenario. Anthropic CEO Dario Amodei firmly rejected the request, citing ethical concerns and stating that certain applications crossed fundamental boundaries. The company's stance was clear: it would not compromise on its principles, even if that meant losing government business.
Pentagon’s Response and Future Plans
In response to Anthropic’s refusal, the Pentagon began lining up alternative providers, ultimately reaching an agreement with OpenAI to deploy its AI models on classified military networks. This move underscores the military’s determination to maintain its technological edge, despite the ethical and security concerns raised by private AI companies.
Anthropic’s CEO Defends Ethical Stance
During an interview on Saturday, Dario Amodei emphasized Anthropic’s opposition to the use of its AI models for mass domestic surveillance and fully autonomous weapons. He argued that military decisions should remain under human control, a position that has gained traction in the tech community and among human rights advocates.
Amodei’s stance reflects a broader debate in the tech industry about the ethical implications of AI in military applications. As AI continues to evolve and become more sophisticated, the balance between innovation and ethical responsibility will remain a critical issue for both private companies and government agencies.
Looking Forward
The US military's use of Anthropic's AI in the Iran strike underscores the complex interplay between technology, ethics, and national security. As the Pentagon continues to explore AI's potential in defense operations, the ethical boundaries set by companies like Anthropic will play a crucial role in shaping the future of military technology. Ongoing dialogue between tech firms, policymakers, and the military will be essential to ensuring that AI's use in defense remains responsible and aligned with broader societal values.
