What happens when an AI hallucination leads to bombing an elementary school?

⚠ Disclaimer: While there is sufficient evidence to indicate that Israel has used AI to determine bombing targets, there is currently insufficient evidence to conclude that the US military is doing the same. The allegation that the United States is using AI to select bombing targets is speculation based on current events.
Update: The WSJ reported that the US military did, in fact, use Anthropic’s Claude AI tool for “target identification” in Iran.
Update: It has been reported that AI was specifically used to bomb the Shajareh Tayyebeh elementary school. The Pentagon refused to answer questions about whether AI was used to target the school. It has also been reported that this was a double-tap strike, with the second bombing of the school targeting paramedics.
Update: The UN Office for Disarmament Affairs met to discuss the use of AI in militaries.
It appears likely that the US government is using AI models from Anthropic, OpenAI, Google, and/or xAI to process signals intelligence (SIGINT) and produce AI-generated “kill lists” that determine where to drop bombs.
. . . → Read More: The Banality of Artificial Intelligence


















