
The Royal Navy’s new crucible
The First Sea Lord’s 100-day plan is unambiguous: get the Navy ready for warfighting in four years. With shipbuilding cycles measured in decades, the only feasible path is through command innovation — changing how the fleet observes, decides, and acts.
At the heart of that ambition lies TEWA – Threat Evaluation and Weapon Assignment. It is not a glamorous subject, but it is the operational engine that converts awareness into action.
In a force increasingly defined by autonomy and hybrid crews, TEWA becomes the decisive test of whether the Navy can automate responsibly while preserving command authority. Automation can compress decision time; it cannot carry moral weight.
The AI compression problem
For decades, naval warfare has pursued speed: faster links, quicker locks, shorter chains. What’s changing is the density of data — not just from traditional sensors, but from uncrewed systems, coalition feeds, and subsea infrastructure.
Modern combat systems such as TACTICOS already fuse hundreds of streams into a common operating picture. The next step — AI-enabled TEWA — adds predictive analytics, adaptive prioritisation, and automated weapon pairing. The risk, as the Alan Turing Institute warns, is decision compression: AI can synthesise more information than a human can meaningfully interpret, narrowing the space for reflection while expanding the volume of options.
The commander remains legally accountable for each act of force, yet may see only a probabilistic summary of the algorithm’s logic. The speed of insight becomes as dangerous as the speed of engagement.
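To make the mechanics concrete, here is a minimal sketch of the two-stage loop the acronym describes: score tracks for threat, then pair weapons against the ranked list. The features, scoring function, and greedy heuristic are invented for illustration; this is not TACTICOS logic or any fielded system's method.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    range_km: float
    closing_speed_ms: float     # metres per second; positive means inbound
    hostile_confidence: float   # 0..1 classification confidence

@dataclass
class Weapon:
    name: str
    max_range_km: float
    p_kill: float               # illustrative single-shot probability of kill

def threat_score(t: Track) -> float:
    """Illustrative only: nearer, faster, higher-confidence tracks rank first."""
    seconds_to_arrival = (t.range_km * 1000.0) / max(t.closing_speed_ms, 1.0)
    return t.hostile_confidence / seconds_to_arrival

def pair(tracks: list[Track], weapons: list[Weapon]) -> list[tuple[str, str]]:
    """Greedy pairing: the highest-ranked threat takes the best in-range,
    as-yet-unassigned weapon. Real systems use far richer optimisation."""
    assignments, available = [], list(weapons)
    for t in sorted(tracks, key=threat_score, reverse=True):
        in_range = [w for w in available if w.max_range_km >= t.range_km]
        if in_range:
            best = max(in_range, key=lambda w: w.p_kill)
            available.remove(best)
            assignments.append((t.track_id, best.name))
    return assignments
```

Even this toy exposes the compression problem: the ranking hides judgements (why divide confidence by time-to-arrival?) that the commander never sees unless the system is built to surface them.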
Human command in a machine-paced battlespace
Command is not a variable that can be optimised; it is an act of judgement under uncertainty. The Turing report distinguishes between control — the technical coordination of sensors and shooters — and command, the exercise of human responsibility.
AI blurs this boundary.
As decision support evolves into decision automation, the risk is that the commander drifts from being in the loop to merely being on it. A recent RUSI session on Military Technology reinforced this trajectory. Senior officers spoke of the S-DESH model (Sense, Decider, Effector, Host, Connector) as a practical way to visualise how modern combat power is distributed across sensors, algorithms, and people. It's a simple diagram with profound implications: the decider might sit on a different ship, or even in a different nation, but responsibility still rests with the human chain of command. TEWA is already that reality in miniature: machine-speed logic, human-anchored authority.
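One way to see the implication is to model the lattice directly. In this sketch the role names follow the S-DESH description above, while the platforms and the accountability field are placeholders invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    SENSE = "sense"
    DECIDER = "decider"
    EFFECTOR = "effector"
    HOST = "host"
    CONNECTOR = "connector"

@dataclass(frozen=True)
class FunctionNode:
    platform: str          # where the function physically runs
    role: Role
    accountable_to: str    # the human command position that owns the outcome

# The decider may run on a different hull from the sensor and the shooter,
# but every node still resolves to the same named human authority.
lattice = [
    FunctionNode("uncrewed surface vessel", Role.SENSE, "PWO, host frigate"),
    FunctionNode("host frigate CMS", Role.DECIDER, "PWO, host frigate"),
    FunctionNode("embarked helicopter", Role.EFFECTOR, "PWO, host frigate"),
]

assert len({n.accountable_to for n in lattice}) == 1  # one accountable human
```

The diagram scales across hulls and nations; the accountability does not dilute.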
For TEWA, this calls for algorithmic transparency by design:
Explainability: every recommendation must expose its data lineage and rationale.
Interruptibility: systems must default to human confirmation when confidence or legality is in doubt.
Auditability: actions must be logged in tamper-proof records showing who authorised what, and why.
These are not software features; they are command enablers.
Without them, “machine speed” becomes a liability.
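What might those three properties look like as engineering requirements rather than aspirations? A minimal sketch, with the record fields, confidence threshold, and hash-chaining scheme invented for illustration:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Recommendation:
    track_id: str
    weapon: str
    confidence: float
    rationale: str            # explainability: a human-readable reason
    data_lineage: list[str]   # explainability: the feeds behind the call

CONFIDENCE_FLOOR = 0.90       # illustrative; doctrine, not code, would set this

def needs_human_confirmation(rec: Recommendation, roe_clear: bool) -> bool:
    """Interruptibility: default to a human decision whenever confidence
    or legality is in doubt."""
    return rec.confidence < CONFIDENCE_FLOOR or not roe_clear

class AuditLog:
    """Auditability: hash-chained entries recording who authorised what,
    and why. Tampering with one entry breaks every hash after it."""
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "genesis"

    def record(self, rec: Recommendation, authorised_by: str, decision: str) -> None:
        entry = {
            "timestamp": time.time(),
            "authorised_by": authorised_by,
            "decision": decision,
            "recommendation": asdict(rec),
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**entry, "hash": self._prev_hash})
```

The point of the chain is not cryptographic sophistication; it is that the record of authority survives contact with a post-incident inquiry.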
The hybrid command post
The 2025 Strategic Defence Review and BAE’s ‘Exploiting the Underwater Battlespace’ both envision a networked fleet, with crewed, remote, and autonomous systems operating as a single lattice of awareness.
Under this model, TEWA evolves from a shipboard algorithm into a distributed decision fabric. A Type 31 frigate may host the primary node; uncrewed surface vessels may carry delegated TEWA agents for local evaluation; subsea sensors feed acoustic signatures through next-generation underwater networks. Each node makes limited, context-aware decisions — but always under the authority of a human commander.
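In code terms, the fabric is delegation with a hard ceiling. The sketch below invents the authority tiers and platform labels; the structural point is that every delegate chains back to a crewed node:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Authority(Enum):
    EVALUATE_ONLY = auto()       # may score tracks, may not recommend or act
    RECOMMEND = auto()           # may propose pairings to its parent
    RELEASE_WITH_HUMAN = auto()  # may act, but only on human confirmation

@dataclass
class TewaNode:
    platform: str
    authority: Authority
    parent: "TewaNode | None" = None

    def accountable_node(self) -> "TewaNode":
        """Walk the delegation chain back to the human-commanded node."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node

frigate = TewaNode("Type 31 primary node", Authority.RELEASE_WITH_HUMAN)
usv = TewaNode("USV delegate", Authority.RECOMMEND, parent=frigate)
gateway = TewaNode("subsea sensor gateway", Authority.EVALUATE_ONLY, parent=usv)

assert gateway.accountable_node() is frigate  # authority never leaves the crewed hull
```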
This is NATO’s emerging C2 Resilience philosophy in practice: decentralised execution, centralised accountability. It requires standards not only for interoperability, but for ethical interoperability — ensuring that an ally’s AI-assisted decision remains transparent and reviewable within UK legal norms.
The challenge is not connectivity but coherence: maintaining a single moral and operational thread across hundreds of semi-autonomous actors.
Command, control, and conscience
The First Sea Lord’s hybrid vision — uncrewed escorts, autonomous air wing elements, lean-manned ships — is technologically credible.
The constraint is cultural.
The Turing report warns of “automation bias”: humans deferring to machine outputs because they appear precise and accurate.
This erodes the moral agency that doctrine depends upon.
Meaningful Human Control is not a slogan; it is the legal mechanism by which responsibility is preserved.
To maintain that standard, the Royal Navy will need:
AI literacy across warfare branches, so operators understand bias, model drift, and uncertainty.
Structured dissent within ops rooms — the procedural right to challenge AI outputs before action.
Leadership doctrine that treats digital systems as advisers, not authorities.
Warfighting readiness in four years cannot be achieved through hardware alone. It will come from a force that understands when not to trust the machine.
The AI assurance framework
A credible TEWA programme will require its own assurance discipline, equal in weight to weapons certification.
That framework should include:
Model governance: version control, bias testing, and retraining cycles certified by MoD AI Assurance.
Operational validation: simulation under degraded comms and adversarial conditions (e.g., REPMUS) to test resilience and explainability.
Legal traceability: continuous audit of every data input and decision path for potential post-incident review.
Ethical red-teaming: independent panels stress-testing algorithms for unintended escalation behaviour.
AI assurance is not an academic luxury; it is the only route to trusted lethality.
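To give a flavour of what “equal in weight to weapons certification” could mean as artefacts rather than policy, here is a hedged sketch of a model release record; every field name and threshold is invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelRelease:
    model_version: str
    training_data_hash: str       # model governance: ties the model to its data
    bias_tests_passed: bool       # model governance: certified test suite result
    degraded_comms_trials: int    # operational validation: REPMUS-style runs
    decision_paths_logged: bool   # legal traceability: audit pipeline attached
    open_red_team_findings: int   # ethical red-teaming: unresolved escalation issues
    certified_on: "date | None" = None  # unset until the assurance authority signs

def releasable(m: ModelRelease) -> bool:
    """Illustrative gate: no deployment with open findings, failed bias tests,
    missing traceability, or an unsigned certificate."""
    return (
        m.certified_on is not None
        and m.bias_tests_passed
        and m.decision_paths_logged
        and m.degraded_comms_trials >= 50
        and m.open_red_team_findings == 0
    )
```

The shape matters more than the fields: each of the four disciplines above leaves a checkable artefact, and the release gate refuses to open without all of them.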
From combat system to command system
The Royal Navy’s operational advantage will not come from the number of algorithms it deploys, but from how transparently they serve human intent. TEWA must evolve from a hidden subroutine into a visible, accountable command system — one that supports evidence-based, legally defensible decisions at machine speed.
If done well, automation will extend human command into domains too vast or too fast to manage manually. If done poorly, it will corrode trust and blur accountability until no one is certain who fired first, or why.
In the coming years, the question will not be whether the fleet can act autonomously, but whether it can still explain its actions. As several RUSI panellists recently warned, the next vulnerability isn’t code or compute but confidence: the assurance that comms will hold, and that decisions will remain traceable even when they do not. Autonomy will only prove its worth when we can navigate disruption without compromising moral or operational control. The future of TEWA isn’t about faster loops; it’s about making sure those loops close with integrity.
That will be the true measure of readiness in the AI age.
Further Reading
Royal Navy — 1SL 100-Day Plan (Sept 2025)
“Ready for warfighting in four years.” Interview with General Sir Gwyn Jenkins aboard HMS Prince of Wales outlining automation and hybrid fleet priorities.
The Alan Turing Institute — AI Won’t Replace the General (2025)
Landmark analysis on integrating AI into command decision-making and preserving meaningful human control.
BAE Systems Digital Intelligence — Exploiting the Underwater Battlespace (2025)
Details on subsea networking, ISR, and CUI protection forming the backbone of NATO’s “Digital Ocean” initiative.
Thales Nederland — TACTICOS Combat Management System White Paper
Explains workflow-oriented HMIs, open architecture, and intelligent automation within modern naval ops rooms.
Royal Navy — Type 31 Frigate General Arrangement (2025)
Platform reference for TACTICOS integration, mission bay capacity, and sensor/weapon layout.
UK Government — Strategic Defence Review 2025
Focus sections: Digital Maritime Force and Layered Sensor Network — the policy base for hybrid C2 architectures.
Policy Exchange — From Seabed to Space (2024)
Research paper framing undersea cable protection and subsea domain awareness as national-security imperatives.