Flash War: Why the next global conflict could be over before humans can intervene
We’ve been here before
On the afternoon of May 6, 2010, the global financial system stared briefly into the abyss. At 2:32 p.m., the U.S. stock market began a precipitous, unexplained plunge. In a matter of minutes, nearly $1 trillion in market value vaporised. Blue-chip stocks like Accenture traded for a penny, while others printed at $100,000 a share. It wasn’t a terrorist attack, a banking collapse, or a natural disaster. It was the ‘Flash Crash’, a cascading failure caused by high-frequency trading (HFT) algorithms battling each other at speeds no human trader could comprehend, let alone arrest.
Now, imagine that same cascading effect happening not with equities, but with lethal autonomous drone swarms in the highly congested Taiwan Strait. Replace the ‘penny print’ with an inadvertent missile launch and you have the makings of a ‘Battlefield Singularity’, the chilling moment when the speed of warfare decouples from human time.
While policymakers in Washington and Beijing reassure the public that “humans remain in the loop” of military decision-making, the technological reality of ‘hyperwar’ is rendering that promise a dangerous sedative. Driven by a desperate, existential race for “decision advantage,” the U.S. and China are deploying systems capable of engaging targets in milliseconds. These speeds edge humans out of the tactical loop entirely, creating the conditions for ‘flash war’, a conflict that escalates from a minor border incident to a catastrophic kinetic exchange before a President can even pick up the red phone.
The strategic depth that once allowed for de-escalation is vanishing. Instead, thousands of disconnected, micro-optimising algorithms, each making locally ‘rational’ tactical decisions, could synthesise strategic madness. Without circuit breakers analogous to those we rushed to install in our financial markets, we risk an accidental nuclear escalation triggered by machines optimising for speed over human survival.
When humans are the bottleneck
The fundamental challenge driving this crisis is biological, not political. Warfare is governed by the OODA loop: the cycle of Observe, Orient, Decide, and Act. The pilot who cycles through this loop faster than their opponent wins the dogfight. But the human brain has a finite speed limit. Our visual reaction time, the time it takes for a photon to hit the retina and for the finger to pull a trigger, is approximately 250 milliseconds. For decades, this was fast enough for fighter pilots, but against electronic circuitry, we’re not even on the playing field.
In a ‘hyperwar’ environment, AI-driven systems are compressing the “Observe” and “Orient” phases to near-instantaneous speeds. Modern computer vision systems can identify a threat, calculate a trajectory, and authorise an intercept in less than a millisecond, rendering the human operator’s 250-millisecond response glacial by comparison.
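To make the mismatch concrete, here is a back-of-the-envelope calculation using the two round numbers quoted above (250 ms for a human, 1 ms for a machine; both are illustrative figures from this article, not measured benchmarks):

```python
# Rough comparison of human vs. machine OODA-loop timing,
# using the round numbers quoted above.

HUMAN_REACTION_S = 0.250   # ~250 ms: photon on retina to finger on trigger
MACHINE_CYCLE_S = 0.001    # <1 ms: detect, classify, authorise intercept

cycles = HUMAN_REACTION_S / MACHINE_CYCLE_S
print(f"Machine OODA cycles per single human reaction: {cycles:.0f}")
# -> 250. Before one human completes a single 'Act', an autonomous
#    system has observed, oriented, decided and acted 250 times.
```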
If a U.S. drone swarm encounters a hostile swarm, the side that insists on routing every firing decision through a human operator will be wiped out before that operator has finished sipping their coffee. This creates an irresistible structural pressure to grant autonomous systems authority to fire without checking back with headquarters. The U.S. Department of Defense’s AI Adoption Strategy, and its creation of dedicated career paths for AI/ML specialists, appear to confirm this trend. The old military adage was that war is 99% boredom and 1% sheer terror; the new reality is that the terror will be over before the human participants even realise it has begun.
The Taiwan Strait scenario
Nowhere is this danger more acute than in the Taiwan Strait. This narrow geopolitical chokepoint is fast becoming the world’s most congested operating theatre for autonomous systems. Advances in unmanned aerial vehicles (UAVs), underwater gliders, and autonomous sensor networks are transforming the Strait from a protective moat into a contested, compressed space, one in which unmanned systems, not crewed platforms, will dominate any future fight.
Consider a possible scenario in 2030. It is a humid Tuesday morning. A U.S. autonomous surveillance drone is loitering in international airspace, monitoring People’s Liberation Army (PLA) naval exercises. Suddenly, a Chinese loitering munition, operating in a ‘denied environment’ with damaged comms, drifts too close. A sudden updraft, a glitch in a sensor, or a spoofed GPS signal causes the two machines to collide.
There are no humans present. There is no one to see the accident.
The drone’s defensive AI interprets the collision not as an accident, but as a ‘kinetic intercept’. Operating under standing rules of engagement that prioritise force protection, it immediately beams a “THREAT CONFIRMED” signal to the wider U.S. network. Instantly, nearby autonomous wingmen light up their targeting radars to lock onto the ‘aggressor.’
Simultaneously, the PLA’s local defence grid detects this radar lock. Its own algorithms, trained on decades of ‘anti-access/area denial’ (A2/AD) wargames, interpret the radar spikes as the prelude to a massive decapitation strike. Milliseconds later, Chinese coastal missile batteries switch to auto-fire mode to defend the fleet.
Back in Hawaii and Beijing, human commanders are staring at screens that have just turned into a sea of blinking red. They are technically ‘on the loop’, able to intervene. But studies on ‘automation bias’ suggest they will defer rather than override. When an advanced AI shouts “INCOMING MISSILE”, a human operator rarely says, “Wait, let me check the data.” They trust the machine. By the time the first admiral picks up a phone to de-escalate, the first barrage of hypersonics is already airborne. The ‘flash war’ has begun.
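The cascade is easy to caricature in code. The sketch below is a toy simulation, with every rule, threshold, and timing invented purely for illustration (nothing here reflects any real doctrine): two defence grids, each following a locally ‘rational’ force-protection rule, escalate from one ambiguous collision to weapons release in milliseconds.

```python
# Toy two-agent escalation loop. All rules and timings are
# hypothetical, invented to illustrate the feedback dynamic above.

RULES = {
    "collision":  "radar_lock",   # treat impact as a kinetic intercept
    "radar_lock": "auto_fire",    # treat lock-on as prelude to a strike
    "auto_fire":  "mass_launch",  # answer incoming fire with full retaliation
}

def respond(observed_event: str) -> str | None:
    """Return this side's locally 'rational' force-protection response."""
    return RULES.get(observed_event)

event, actor, t_ms = "collision", "BLUE", 0
while (reaction := respond(event)) is not None:
    t_ms += 5  # hypothetical: a few milliseconds per automated decision
    print(f"t={t_ms:>3} ms  {actor} observes '{event}' -> responds '{reaction}'")
    event = reaction
    actor = "RED" if actor == "BLUE" else "BLUE"

print(f"'{event}' reached at t={t_ms} ms, before any human answers a phone.")
```

Each individual rule is defensible in isolation; it is only their composition across two networks that produces the runaway.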
The Stability-Instability Paradox revisited
Strategic experts analyse this dynamic through the lens of the ‘Stability-Instability Paradox.’ During the Cold War, this theory held that nuclear weapons created stability at the high end (preventing total war between superpowers), while paradoxically making lower-level proxy wars ‘safer’ and more frequent.
Artificial Intelligence is destabilising this delicate balance. Autonomous systems lower the political and human cost of kinetic action. A limited drone strike seems far less provocative than a manned airstrike: there are no letters to write to grieving mothers, no captured pilots to be paraded on enemy TV. This encourages risk-taking. Leaders may feel emboldened to use ‘low-intensity’ autonomous force to ‘send a message’ or punish infractions.
The trap is that these systems are deeply networked. A ‘limited’ skirmish can leap to high-intensity conflict almost instantly. Because AI systems optimise for ‘winning’ based on programmed parameters, they lack the nuanced understanding of ‘signalling’ that human diplomats possess. To an algorithm, a ‘warning shot’ might just look like a ‘missed shot’ that demands immediate, overwhelming retaliation.
Current U.S. policy (DoD Directive 3000.09) mandates “appropriate levels of human judgment” over the use of force. But critics argue this is becoming a Maginot Line, a comfortable legal fiction that ignores the relentless technical pressure to automate fire control. We are writing regulations for a war of gentlemanly duels while building weapons for a bar-room brawl in the dark.
The adversarial mind: winning without fighting
The situation is further complicated by the two sides’ divergent philosophies. While much of the West agonises over the ethical use of AI, China appears to be actively pursuing a doctrine of ‘Intelligentised Warfare’ and ‘Cognitive Domain Operations.’
This could explain the Pentagon’s apparent fury at Anthropic’s refusal to permit the Department of Defense two specific uses of its tools, one of them being use in connection with lethal autonomous weapons.
Research into PLA strategy reveals a focus not just on kinetic speed, but on attacking the enemy’s decision-making process itself. The goal is “winning without fighting” by degrading the adversary’s rationality. In the age of AI, this doesn’t just mean propaganda; it means ‘adversarial attacks.’
Imagine a scenario where the ‘flash war’ isn’t an accident, but a hack. An enemy could feed ‘adversarial examples’, subtly manipulated data points invisible to the human eye, into U.S. recognition systems. A few pixels changed on a satellite feed could cause an AI to identify a civilian airliner as a nuclear bomber, or a school bus as a tank: a ‘Fog of War Machine’. By deliberately injecting noise into the opponent’s OODA loop, an adversary could trigger a U.S. system to fire on a false target, destroying its international legitimacy in a single stroke. The algorithm works perfectly, but the reality it perceives is a hallucination.
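For readers unfamiliar with the mechanics, the canonical technique is the fast gradient sign method (FGSM) of Goodfellow et al. Below is a minimal PyTorch sketch; the classifier, image, and label are placeholders, and nothing here refers to any actual military system.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,
                true_label: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Fast Gradient Sign Method (Goodfellow et al., 2015).

    Nudges every pixel by at most `epsilon` in the direction that
    most increases the model's loss -- a perturbation typically
    invisible to a human, but often enough to flip the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixels in valid range

# Hypothetical usage: `classifier` is any image model, e.g. one that
# labels satellite imagery. A 1%-per-pixel nudge can turn 'airliner'
# into 'bomber' while both images look identical to a human analyst.
# adversarial = fgsm_attack(classifier, satellite_image, label, epsilon=0.01)
```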
The Minotaur in the maze
We are standing on the precipice of a new era of warfare, where the speed of battle has outpaced the speed of thought. While Paul Scharre envisions ‘Centaur’ warfighting (humans and machines teaming together, with the human head in command), Robert Sparrow and Adam Henschke warn that we risk becoming the Minotaur: teams of humans under the control, supervision, or command of artificial intelligence. Where the centaur thinks with its human head, the Minotaur’s human half is only the body. In the age of the ‘flash war’, that human half is merely a passenger, strapped into a rocket it cannot steer.
Any solution to this issue will not involve a rejection of technology; the genie is well and truly out of the bottle. Instead, we must prioritise specific, technical mechanisms for stability. Like the digital circuit breakers installed after 2010 to prevent another flash crash, we may need ‘Diplomatic APIs’, hard-coded de-escalation protocols built directly into the software of military AI. We need unforgeable identification systems for assets, so autonomous systems can tell an RV from a tank, a test flight from an attack run.
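What might such a tripwire look like in practice? One hedged sketch, modelled loosely on post-2010 market circuit breakers; the thresholds, timings, and class design below are invented for illustration, not drawn from any real system:

```python
from dataclasses import dataclass, field
import time

@dataclass
class EscalationBreaker:
    """Hard-coded de-escalation tripwire, analogous to a market circuit
    breaker. All thresholds here are illustrative placeholders."""
    max_engagements_per_minute: int = 3
    hold_fire_seconds: float = 300.0           # forced pause for human review
    _events: list[float] = field(default_factory=list)
    _halted_until: float = 0.0

    def authorise(self, now: float | None = None) -> bool:
        """Return True only if an engagement may proceed autonomously."""
        now = time.monotonic() if now is None else now
        if now < self._halted_until:
            return False                       # breaker tripped: humans must re-arm
        self._events = [t for t in self._events if now - t < 60.0]
        if len(self._events) >= self.max_engagements_per_minute:
            self._halted_until = now + self.hold_fire_seconds
            return False                       # engagement tempo too high: trip it
        self._events.append(now)
        return True
```

The design point is that the halt is unconditional: once tripped, no tactical input can re-arm the system, only a human decision from outside the loop, just as a suspended exchange stays suspended no matter how attractive the next trade looks.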
Without sensible policy, the ‘Battlefield Singularity’ seems inevitable: if we don’t design our machines to know when to stop, they may ensure we never get the chance to tell them. And ‘sensible’ is not a word currently associated with the U.S. administration. Interesting times, to say the least.