When the Rules Start to Slip
AI, command, and the quiet shift in how wars are fought
It doesn’t happen with an announcement.
There’s no signal, no doctrine update, no moment where someone declares that the rules have changed. Instead, it creeps in through operations. A strike here that stretches the definition of a military target. A system there that compresses decision time just enough to bypass reflection. A justification written after the fact, rather than before the action.
As these operational shifts accumulate, the framework that once shaped warfare begins to loosen, with consequences that reach deep into command and control. Recent operations have already shown how quickly decision timelines can collapse under combined drone, missile, and electronic pressure.
Across Ukraine, the Middle East, and beyond, something is shifting—not the laws of armed conflict themselves, but their role. These laws are still written, taught, and referenced, yet in practice, they are increasingly interpreted quickly, often under pressure and after outcomes are already underway.
This is not lawlessness. It is something more subtle and more dangerous.
It is the transition from rules-constrained warfare to outcome-driven warfare, where legality becomes part of the narrative rather than the decision.
The compression of the decision
Modern conflict is no longer paced for deliberation.
Drones, autonomous systems, and distributed sensors have created a battlespace where detection, classification, and engagement can occur in seconds. The time available to think, question, and challenge assumptions is shrinking.
This is not theoretical. It is visible in how operations are now conducted:
Targets identified through fused data rather than single-source confirmation
Effects delivered at range, often without direct human presence
Civilian infrastructure increasingly drawn into the operational calculus
Each of these practices may be individually justifiable, but together they form a system in which operational decisions outpace the legal process meant to govern them.
When operational tempo overtakes the legal process, the law does not disappear—it lags behind, adapting post hoc to actions already taken.
From constraint to justification
For decades, the laws of armed conflict acted, at least in principle, as a constraint. Commanders operated within them, trained against them, and understood their limits before taking action.
What we are now seeing is a shift.
The question is no longer always:
“Can we do this?”
But increasingly:
“Can we justify this?”
That is a subtle but critical change: decisions are increasingly justified after the fact rather than constrained before it.
This shift moves the centre of gravity from disciplined decision-making to post-action narrative. In contested, urgent, and fragmented contexts, such narratives often suffice to rationalise outcomes.
The machine-speed problem
This is where AI enters the frame.
Not as the decision-maker, but as the accelerator.
AI-enabled systems are already:
Grading intelligence
Prioritising threats
Recommending courses of action
They are not “fighting the ship” but are shaping its conditions of battle.
And critically, they are doing so at a speed that compresses human oversight.
The result is a growing gap:
Human responsibility remains
Machine-tempo decision-making expands
Closing that gap is becoming central to effective, accountable command in modern warfare.
Because legal frameworks were built for a world where:
Information was incomplete
Decisions took time
Commanders could pause
Yet that world is fast disappearing under the pressure of new technologies and accelerating operational tempo.
As the Alan Turing Institute puts it, “Responsibility cannot be delegated to AI.” The technology can accelerate insight, but it does not remove accountability. Human judgement remains central, even as systems compress time and increase complexity. The risk is not that machines take control, but that humans are asked to exercise control in conditions they were never designed for.
Command in the grey zone
The risk is not that commanders lose control.
It is that control becomes harder to exercise than the structures of command were designed for.
A Principal Warfare Officer, an Operations Room, and a Commanding Officer all still sit at the centre of the decision. But the environment around them has changed:
More data than can be fully interrogated
Faster timelines than can be comfortably challenged
Systems that present recommendations with increasing confidence
Within this environment, the responsibilities of command necessarily evolve alongside the systems they govern.
Not from decision-maker to observer, but from actor to arbiter of machine-informed judgement.
And that is where the pressure lies.
Because responsibility does not scale with speed.
The illusion of control
There is a comforting assumption that humans remain “in the loop.”
Technically, they do.
But being in the loop is not the same as being in control.
If the system presents:
A prioritised threat
A recommended action
A narrowing time window
Then the human role becomes one of validation rather than exploration.
And validation under pressure is not the same as judgement.
What this means for the future fleet
None of this suggests that the rules of war are gone.
They still matter. They shape alliances, legitimacy, and long-term outcomes, and define what states claim to stand for.
But in the moment of action, they are no longer the first constraint.
That role is being overtaken by:
System design
Data confidence
Decision speed
Which leads to a harder, more uncomfortable truth:
In future warfare, ethics will depend less on the letter of the law and more on whether system design supports, slows, or challenges the decisions humans make in conflict.
The real question
We are not yet at the point where machines make sovereign decisions.
We may not have surrendered decision-making to machines, but we already allow them to define the timelines and contexts in which critical choices are made.
So the question is not:
Will AI follow the rules?
It is:
Will we still have the time and the space to apply them?
Further reading
NATO Allied Underwater Battlespace Mission Network (AUWB-MN)
AI Won’t Replace the General: Algorithms, Decision-making and Battlefield Command


