AI Ethics Meets the Real World
U.S. Drone Strikes and the Accountability Crisis

The ethics of warfare are shifting, and the direction should alarm us. It demands our immediate attention.
As militaries embrace AI-driven targeting and autonomous surveillance, we’re told that the ethical frameworks are catching up. But recent U.S. strikes in international waters raise a pressing question: what happens when governments sidestep those frameworks in the name of expedience? What happens when accountability disappears in a haze of legal ambiguity and political insulation?
Since September, the U.S. has launched over twenty strikes against small vessels suspected of drug trafficking in the Caribbean and Eastern Pacific. The strikes reportedly involved a layered kill chain using a mix of platforms: MQ-9 Reaper drones, U.S. Navy helicopters, and potentially weapons systems launched from naval vessels. In some cases, the drones delivered the final strike. In others, helicopters or shipboard munitions played the lethal role.
Command authority flowed through a blend of afloat coordination and remote authorisation: some decisions made at sea, others signed off thousands of miles away. This wasn't a rogue drone war. It was a campaign carried out under deliberate human direction. One incident made global headlines: a double-tap strike in which two survivors, left clinging to the debris of a wrecked speedboat, were killed by a follow-up attack. These weren't shadow ops. They were ordered at the highest levels and defended as part of an "armed conflict" against "narco-terrorists."
This campaign cuts across every theme we discuss when we talk about AI in warfare. Accountability. Proportionality. Legal oversight.
I’ve been under fire. I know what it means to make a hard call under pressure. But what I see here isn’t the fog of war. It’s the fog of unchecked authority, amplified by a new generation of weapons that place enormous power at the fingertips of decision-makers thousands of miles from the strike.
While details vary, what is clear is that these were human-led operations using advanced targeting systems, likely supported by AI-driven surveillance and classification tools. But precision isn't ethics; proportionality matters. Targeting unarmed survivors floating at sea, people who posed no threat, is not just disproportionate. It's beyond any battlefield rule I've ever known.
Let’s be clear. Humans operated these systems. There was no fully autonomous trigger here. But the failure wasn’t technical. It was moral. The presence of a human finger on the trigger does not cleanse a strike of its illegality or ethical failure. That’s where AI governance comes in. The UK’s MoD and institutions such as the Alan Turing Institute have emphasised “meaningful human control.” But what do we do when that control is used to kill unlawfully?
More troubling is the apparent legal insulation wrapped around this campaign. DOJ memos granting immunity. Politicians calling the victims "combatants" without trial or evidence. And allies pulling away: Britain reportedly stopped sharing intelligence over concerns of legal complicity.
That alone should ring every alarm bell in NATO headquarters.
Ethical AI in warfare isn't just about what machines do. It's about what we do when we use them. If the frameworks we build to constrain lethal autonomy are dismissed the moment they become inconvenient, they aren't frameworks at all. They're optics.
This moment matters. If NATO and U.S. allies allow this precedent to stand, it sends a dangerous message: that drones and targeting systems can be used for extrajudicial killings beyond any declared battlefield. That political fiat can override humanitarian law. That AI ethics can be rebranded as PR.
I've seen what war does. I've seen what happens when you lose the thread of discipline, when legality becomes negotiable. Technology changes. Human responsibility doesn't.
Future Navy will continue to track these developments. The tools may evolve, but the question remains the same: can we fight clean with dirty tools? Or are we ready to call it what it is when we fail to do so?
Follow for continued analysis on AI in warfare, accountability, and the reality of combat in the algorithmic age.