The Algorithm Goes to War

What the use of Claude AI in the Iran campaign means for warfare, and for the world.

Most people of a certain age can tell you exactly where they were on September 11th: the room, the screen, the moment someone said “turn on the TV.” I had relatives who worked in the building. Another line was drawn in 2003, when the Iraq invasion arrived as “shock and awe” that looked disturbingly like a video game. Those were clear before-and-after points in American life. We are living through another one now, but it is harder to see.

In the first 24 hours of the new war with Iran, the U.S. military used AI targeting tools, still powered by Anthropic’s Claude, to help strike more than 1,000 targets. That volume was not generated by human planners working around the clock, but by a machine system able to propose and prioritize targets faster than any human team. The kill list is now a software output.

Unlike 9/11, there is no single image to fix this moment in memory. No tower, no smoke. Just a number: 1,000. And a precedent that is likely to shape how wars are fought from now on.

Claude in the Kill Chain

Here is the twist: this is happening even after the Defense Department banned Anthropic, following the company’s refusal to let its AI be used for fully autonomous weapons or mass domestic surveillance. Claude is, in effect, an unwilling participant in a kind of war it was explicitly designed to avoid, kept in the loop because critical systems cannot be swapped out overnight.

Meanwhile, OpenAI stepped into the gap. Within hours of Trump’s order blacklisting Anthropic, OpenAI announced a new Pentagon agreement that allows its models to be used on classified military networks, positioning itself as the replacement provider for future deployments. The company argues that it has negotiated sufficient “guardrails,” but unlike Anthropic, it accepted the basic premise that its systems can support a wide range of military uses, including in live theaters of conflict.

The Arms Race Within the Arms Race

What is often missed in the headlines about Claude is the competitive dynamic it has set in motion. Anthropic’s refusal to allow its AI to be used for fully autonomous weapons was not only a principled stand; it was a market signal. Within hours of the Pentagon’s blacklisting of Anthropic, OpenAI moved to fill the gap, announcing a new classified-network agreement that explicitly accepts a wide range of military uses. The message from OpenAI’s leadership is clear: we will go where Anthropic will not.

This is a new and dangerous competitive structure. AI companies are now racing not just for commercial dominance but for military relevance — and the companies willing to accept fewer restrictions gain access to the most lucrative and strategically significant contracts. The effect is a ratchet: each concession by one company sets a new floor that competitors must match or be sidelined. Anthropic’s principled exit did not stop the march; it simply changed the vendor. The question the world has not yet answered is whether any governance framework exists to constrain what AI companies are permitted to offer militaries — and right now, the honest answer is no.

The Gaza Preview Leading Into the Iran War

This moment did not emerge from nowhere. The Gaza war already showed what happens when militaries lean on AI to generate targets at scale. Israel’s “Lavender” system reportedly flagged thousands of alleged militants with an estimated 10 percent false positive rate, a figure that, in the context of mass targeting, translates into large numbers of civilians wrongly marked for death. Peter Asaro, vice chair of the Stop Killer Robots campaign, framed the core question: once machines can rapidly produce long lists of targets, to what extent are humans actually reviewing, questioning, and sometimes rejecting those targets before authorizing a strike? When review becomes a rubber stamp, “human in the loop” is a comforting slogan, not a safeguard.
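To see what a 10 percent error rate means at this scale, a back-of-envelope calculation helps. The sketch below uses the roughly 37,000 people that the +972 Magazine reporting cited below said Lavender had flagged; treat the numbers as illustrative rather than authoritative.

```python
# Back-of-envelope: what an estimated false positive rate means at scale.
# The ~37,000 flagged figure reflects the +972 Magazine reporting cited
# in the references; the arithmetic is just (flagged people) x (error rate).

flagged = 37_000            # people reportedly flagged by the system
false_positive_rate = 0.10  # estimated share wrongly flagged

wrongly_flagged = int(flagged * false_positive_rate)
print(f"People wrongly marked as targets: {wrongly_flagged:,}")
# -> People wrongly marked as targets: 3,700
```

Thirty-seven hundred people is not a rounding error; it is the population of a small town, each of them marked by a statistical mistake.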

When Machines Outrun Deliberation

The deepest shift here is not technological but temporal. International humanitarian law, diplomatic back-channels, congressional oversight, even a president’s own judgment: all of these safeguards were designed for a world in which humans have time to think. The principle of “feasible precautions” enshrined in the law of war assumes that someone, somewhere, is pausing to ask: is this target legitimate? Are we certain enough? One thousand targets in twenty-four hours makes that pause structurally impossible. What AI targeting systems have introduced is a new variable in warfare: the gap between machine execution speed and human deliberation speed. Commanders are being asked to approve at a rate that makes genuine review impossible. The human is technically “in the loop,” but the loop is now too fast to be meaningful.
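The arithmetic behind “structurally impossible” is worth spelling out. A minimal sketch: the 1,000-targets-in-24-hours figure comes from the reporting cited below, while the reviewer counts are hypothetical, chosen only to show how little the math improves with more staff.

```python
# How much human attention does each target actually get?
# The targets-per-day figure is from the cited reporting; the reviewer
# counts are hypothetical, for illustration only.

targets = 1_000
hours = 24

print(f"One reviewer: {hours * 3600 / targets:.0f} seconds per target")
# -> One reviewer: 86 seconds per target

for reviewers in (5, 20):
    minutes = hours * 60 * reviewers / targets
    print(f"{reviewers} reviewers: {minutes:.1f} minutes per target")
# -> 5 reviewers: 7.2 minutes per target
# -> 20 reviewers: 28.8 minutes per target
```

Even a twenty-person cell working around the clock, with no sleep and no shift changes, gets under half an hour of total attention per target. “Feasible precautions” were not written with that time budget in mind.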

This is not merely a legal problem. It is a civilizational problem. Societies that have spent decades building norms around when and how lethal force may be authorized are now discovering that those norms assume a pace of war that no longer exists.

The Iran campaign suggests that the lesson has not been fully absorbed. Maven, powered by Claude, is reported to have generated and prioritized hundreds of targets ahead of the opening strikes, supplying coordinates that allowed U.S. forces to move quickly and limit Iran’s ability to respond. The Pentagon’s own Law of War Manual requires “feasible precautions” to verify that targets are legitimate military objectives. Whether AI-accelerated targeting satisfies that standard is a question that has been postponed, and I suspect the answer most people will settle for is that all is fair in love and war.

The Accountability Vacuum

When an AI system recommends a strike and a commander approves it in thirty seconds, who bears moral and legal responsibility?

This is not a hypothetical question. It is being answered right now — in practice, without deliberation, and without any agreed international framework. The existing body of international humanitarian law assigns responsibility to human commanders, but that assignment was designed for a world where commanders actually understood and evaluated what they were authorizing. When the recommendation is a software output and the review is perfunctory, the legal chain of accountability dissolves into abstraction.

No government has yet proposed a binding international standard for AI-assisted targeting. No court has ruled on whether machine-generated target lists satisfy the proportionality requirements of the Geneva Conventions. No independent body has audited the false positive rate of the systems now being used in active combat. The world is not just watching a new technology be deployed in war — it is watching accountability itself be quietly redesigned, with the public excluded from the conversation.

The Domestic Horizon

The Pentagon’s stated reason for blacklisting Anthropic was the company’s refusal to allow its AI to be used for “mass domestic surveillance.” That phrase deserves more attention than it has received. It tells us that someone inside the Defense Department proposed exactly that use — and that the proposal was serious enough to become a negotiating point. The targeting infrastructure being normalized in Iran is not hermetically sealed from domestic application. The same systems that can identify and prioritize foreign targets can, in principle, be turned inward: flagging dissidents, monitoring protest movements, building pattern-of-life profiles on citizens.

History offers little comfort here. Technologies developed and normalized in overseas conflicts have a consistent record of migrating homeward. Drone surveillance began as a battlefield tool and is now used by domestic law enforcement across dozens of countries. Biometric databases built for counterterrorism have been repurposed for immigration enforcement and criminal tracking. The question is not whether AI targeting logic will eventually intersect with domestic governance — it is whether democratic societies will build the legal and institutional guardrails before that intersection occurs, or after.

The Global Proliferation Risk

The United States is not the only actor watching the Iran campaign closely. China, Russia, and a dozen other states with advanced AI programs are drawing lessons from every publicly known detail: what systems were used, how fast targets were generated, what the legal and political costs appeared to be. AI-assisted targeting is going to proliferate — not because any treaty permits it, but because no treaty currently prohibits it, and because the competitive logic of military advantage is irresistible.

The proliferation dynamic matters for reasons that extend far beyond the current conflict. When AI targeting systems spread to states with weaker rule-of-law traditions, less independent militaries, and lower political costs for civilian casualties, the false positive problem identified in Gaza does not stay at 10 percent; it gets worse. The international community built the Geneva Conventions and the laws of armed conflict over decades, negotiating after each catastrophe what rules should govern the next war. AI targeting is the first major transformation in how wars are fought since those frameworks were established, and it is outrunning every governance process designed to contain it.

Why You Should Pay Attention

You do not need a defense contract or a presence in the Middle East to feel the consequences of this shift.

First, treat this as an energy and shipping risk event. A war involving Iran, with its ability to threaten traffic through the Strait of Hormuz and Gulf infrastructure, tends to push up energy prices through both direct disruption and higher insurance and risk premiums. If your cost structure is fuel-sensitive (logistics, travel, energy-intensive manufacturing), identify where you are exposed and what hedges or alternatives exist before your next round of contracts.

Second, map your supply chain with Gulf and broader regional routes in mind. If critical components or finished goods flow through vulnerable ports or choke points, you should understand what redundancy you have and how quickly you can reroute. The worst time to discover that you have a single point of failure is during a crisis.

Third, raise your cyber posture. State-level conflict almost always has a cyber dimension, and Iran has a history of retaliatory operations against Western financial, energy, and commercial targets. If you run critical infrastructure, process payments, or operate any high-profile consumer platform, treat the coming months as an elevated-threat period. That means actually verifying patching, access controls, backups, and incident response plans, not just asking whether they exist on paper.

Finally, expect the AI governance debate to arrive at your door. Once AI-assisted bombings become front-page news, regulators, investors, and the public will ask harder questions about any use of AI in high-stakes decisions: hiring, lending, healthcare, legal services, security. If your organization uses AI in ways that affect people’s livelihoods or rights, you will need clear answers about accountability, oversight, and the boundaries you will not cross.

The Real Line Being Crossed

The story here is not only that one AI company’s model is being used despite a ban. The deeper shift is that the gap between machine execution speed and human deliberation has become an explicit variable in warfare. Commanders are being asked, in effect: how much human time are you willing to spend checking what the machine recommends?

That calculus will not stay confined to the battlefield. It will influence how we think about responsibility in everything from energy markets and insurance to hiring decisions and medical triage. The organizations that navigate this period best will be the ones that take the uncertainty seriously now: stress-testing supply chains, hardening cyber defenses, and treating AI governance not as a box-ticking compliance exercise but as core risk management. The algorithm has gone to war. The question now is how far we let that logic travel, what it costs, and who bears that cost when it does. That is a moral and human question that begins in wartime but reaches into our governments and our daily lives.

Always happy to talk about issues related to games, gamification, and AI. To get in touch, contact me @michael sorrenti.

#AIWarfare #MilitaryAI #AITargeting #AlgorithmicWar #AIInConflict #Anthropic #OpenAI #AIEthics #ResponsibleAI #AIGovernance #TechAndWar #IranWar #AIProliferation #NationalSecurity #FutureOfWar #AIArmsRace #InternationalLaw #HumanInTheLoop #KillerRobots #AutonomousWeapons #AIAccountability #AIPolicy #TechEthics #FutureOfConflict #DigitalWarfare #AIAndSociety #AI #Technology #Geopolitics #NatSec #ForeignPolicy #Innovation #MachineLearning

  1. Pabst, Stavroula. “US Used ‘Claude’ to Strike over 1000 Targets in First 24 Hours of War.” Responsible Statecraft, March 5, 2026. https://responsiblestatecraft.org/ai-war-iran/
  2. Whittaker, Zack, and Cat Zakrzewski. “Anthropic’s AI Model Claude Is Central to U.S. Military Targeting in Iran Campaign.” The Washington Post, March 4, 2026. https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/
  3. Dunn, Alexandra. “U.S. Military Relying on AI as Key Tool to Speed Iran Operations.” Bloomberg, March 5, 2026. https://www.bloomberg.com/news/articles/2026-03-05/us-military-relying-on-ai-as-key-tool-to-speed-iran-operations
  4. “Iran War Heralds Era of AI-Powered Bombing Quicker Than ‘Speed of Thought.’” The Guardian, March 3, 2026. https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought
  5. “America’s New War Machines Showcased in Iran War.” Axios, March 5, 2026. https://www.axios.com/2026/03/05/iran-war-anthropic-prsm-drones
  6. “Iran War Expanding: Israel, Lebanon, Gulf, Cyprus.” Axios, March 2, 2026. https://www.axios.com/2026/03/02/iran-war-expanding-israel-lebanon-gulf-cyprus
  7. “AI and Iran War Questions: Expert Analysis.” Japan Times, March 5, 2026. https://www.japantimes.co.jp/news/2026/03/05/world/politics/ai-iran-war-questions-expert/
  8. “Pentagon Bans Anthropic AI Over Military Targeting Dispute.” The Hill, 2026. https://thehill.com/policy/technology/5763323-pentagon-stuns-silicon-valley-with-anthropic-ban/
  9. “Anthropic AI Phase-Out and DoD Negotiations.” BBC News, 2026. https://www.bbc.com/news/articles/cn48jj3y8ezo
  10. Frenkel, Sheera. “Anthropic in Talks with Pentagon Over New Defense Deal.” Financial Times, 2026. https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b
  11. “Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid.” The Wall Street Journal, 2026. https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuela-raid-583aff17
  12. “U.S. Military Anthropic AI Model Claude Venezuela Raid.” The Guardian, February 14, 2026. https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
  13. “Lavender: Israel’s Artificial Intelligence System That Decides Who to Bomb in Gaza.” El País, April 17, 2024. https://english.elpais.com/technology/2024-04-17/lavender-israels-artificial-intelligence-system-that-decides-who-to-bomb.html
  14. “Lavender: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/
  15. Rosen, Brianna. “AI Targeting in Conflict.” Just Security, 2024. Referenced via Responsible Statecraft: https://responsiblestatecraft.org/israel-ai-targeting/
  16. Farrouqui, Anusar (“policytensor”). “Iran Drone Production and U.S. Suppression Calculus.” Substack / X, March 2026. https://x.com/policytensor/status/2030047132200145398
  17. “Secretary of War Pete Hegseth and Chairman of the Joint Chiefs of Staff Press Conference Transcript.” U.S. Department of Defense, March 2026. https://www.war.gov/News/Transcripts/Transcript/Article/4418959/
  18. U.S. Department of Defense. Law of War Manual. June 2015, updated July 2023. https://media.defense.gov/2023/Jul/31/2003271432/-1/-1/0/DOD-LAW-OF-WAR-MANUAL-JUNE-2015-UPDATED-JULY%202023.PDF
  19. “Iran Energy Prices, Trump, Wiles.” Politico Playbook, March 5, 2026. https://www.politico.com/news/2026/03/05/iran-energy-prices-trump-wiles-00813710
  20. Simon, Steven. “Why Tehran May Have Time on Its Side.” Responsible Statecraft, March 9, 2026. https://responsiblestatecraft.org/iran-war-drones/
  21. Echols, Connor. “Is the US Goading Arab States to Join War Against Iran?” Responsible Statecraft, March 9, 2026. https://responsiblestatecraft.org/gulf-states-iran/