Pentagon labels Anthropic a ‘supply-chain risk’

The U.S. government accused artificial intelligence firm Anthropic of endangering members of the armed forces by adding its own restrictions to the lawful use of its Claude AI software. Consequently, the government labeled Anthropic a supply-chain risk yesterday, a designation normally reserved for businesses tied to foreign adversaries. The designation prohibits the military and its many contractors from doing business with the company. This follows reports that the U.S. military used Claude AI in the operation to capture Venezuelan President Nicolás Maduro and is using it for intelligence analysis and target evaluation in Iran. The military has been ordered to phase out Claude AI and replace it with one or more of its competitors. The dispute underscores how much the military now depends on artificial intelligence, despite the unknowns and the risks, as highlighted in “Why We Must Develop AI (Even If It Kills Us).”