As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology by Palantir and also incorporates the AI model Claude built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.
Israel did the same thing: by using AI, they can “hallucinate” as many targets as they want, all with built-in “plausible deniability.”
Like the absolute worst plausible deniability in human history.
There are just so many things to be said about the ills of AI, but one of them is that it is very purposefully a liability-laundering machine. The decisions and thought process are black-boxed and unauditable. We’ve been trained to dismiss any oopsies as an inevitable part of the system, both while it’s still “rapidly developing” and as something inherent to the technology. Absolutely none of this is acceptable, and yet here we are.