**We’ve been arguing about AI taking our coding jobs for the last three years. Meanwhile, 30,000 feet up, the "delete" key just got mapped to a missile rack, and the "user" is no longer a human.**
I spent last Tuesday afternoon arguing with Claude 4.6 about a persistent memory leak in a Rust service.
It took four prompts, two hallucinations about a library that doesn't exist, and a very polite apology from the model before we finally got it right.
If that were a high-stakes environment, my "service" would have crashed and burned.
Then I saw the news coming out of Airbus’s defense division.
While we were distracted by the latest LLM benchmarks and whether Gemini 2.5 can finally do basic math, Airbus quietly moved their uncrewed combat aircraft from "experimental" to "operational reality."
They aren't just building drones; they are building "Loyal Wingmen"—autonomous fighters designed to fly alongside (and eventually instead of) human pilots.
This isn't a futuristic "what if" scenario for 2030. According to the internal timelines leaked on Hacker News this morning, we are looking at full-scale deployment by mid-2027.
For decades, Airbus has been the king of automation.
Their "Fly-by-wire" system was the gold standard, a layer of software that sat between the pilot's stick and the plane's wings to ensure the human didn't do anything stupid, like stalling the aircraft or over-stressing the airframe.
But what’s happening now is a fundamental architectural shift.
We’ve moved from "software that prevents mistakes" to "AI that makes decisions." The pilot is no longer the "User." The pilot is now just another data point in a distributed combat cloud.
In the industry, they call this "Human-on-the-loop" instead of "Human-in-the-loop." It sounds like a subtle semantic difference, but as a developer, you know exactly what that means.
It means the human has been moved from the **execution layer** to the **monitoring layer**. And we all know what happens to monitors—they get ignored until the alert goes off.
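To put that shift in terms any of us would recognize, here's a deliberately simplified sketch. Every name in it is hypothetical and nothing resembles real avionics code; the only point is where the approval gate sits.

```python
# A deliberately simplified sketch, nothing like a real defence system.
# Every name here is hypothetical; the only point is where the approval gate sits.

def classify(track: dict) -> str:
    # Stand-in for the onboard model: it proposes an action from sensor data.
    return "act" if track["threat_score"] > 0.7 else "stand_down"

def execute(action: str, track: dict) -> None:
    print(f"executed '{action}' on {track['id']}")

def human_approves(action: str, track: dict) -> bool:
    # Stand-in for a blocking request to a human operator.
    return False  # in this sketch, the human always says no

# Human-IN-the-loop: the system cannot act without an explicit, blocking approval.
def engage_in_the_loop(track: dict) -> None:
    action = classify(track)
    if human_approves(action, track):
        execute(action, track)

# Human-ON-the-loop: the system acts on its own output; the human gets a log line.
def engage_on_the_loop(track: dict) -> None:
    action = classify(track)
    execute(action, track)                       # no approval gate anywhere
    print(f"operator notified after the fact: '{action}' on {track['id']}")

track = {"id": "contact-042", "threat_score": 0.93}
engage_in_the_loop(track)   # nothing happens: the human veto actually vetoes
engage_on_the_loop(track)   # already done by the time anyone reads the log
```

Same model, same sensors. The only thing that changed is one `if` statement, and that `if` statement is the entire difference between oversight and an audit trail.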
The terrifying part isn't the hardware; it's the logic.
We are currently seeing a massive push to integrate "Small Language Models" (SLMs) and specialized tactical AI directly into the edge hardware of these jets.
Airbus is reportedly using a proprietary version of the same transformer architecture that powers Claude 4.5 to handle real-time tactical decision-making.
Think about that for a second. We’ve all seen what happens when ChatGPT 5 gets a little too confident. It makes things up. It "hallucinates" facts to fill in the gaps of its training data.
Now imagine that same "confidence" applied to a combat scenario. An autonomous Wingman is flying through contested airspace. It sees a radar signature that doesn't quite match its training set.
Does it wait for human confirmation? No. The "Loyal Wingman" is designed to act with "independent tactical initiative."
If the model decides that a civilian airliner’s transponder "looks" like an electronic warfare spoof, it doesn't just give you a 404 error. It executes a kinetic strike.
By the time the "Human-on-the-loop" realizes what’s happening, the machine's decision cycle has already outrun any human's ability to intervene.
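A back-of-the-envelope sketch makes the timing problem obvious. The numbers below are invented; the point is the ordering, not the values.

```python
# Invented numbers, purely illustrative: why a post-hoc veto window fails at machine speed.

MODEL_DECISION_MS = 40      # onboard model commits to an action
VETO_WINDOW_MS = 250        # how long the system waits for a human override
HUMAN_REACTION_MS = 1500    # notice the alert, understand it, press the button

def veto_arrives_in_time() -> bool:
    # The operator only starts reacting once the decision is made and displayed.
    return HUMAN_REACTION_MS <= VETO_WINDOW_MS

print(veto_arrives_in_time())   # False: the window closes long before the veto arrives
```

You can make the veto window longer, of course, but every millisecond you add is a millisecond of tactical advantage you give away, which is exactly why nobody will.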
The real problem—the "Worse Than You Think" part—is the total lack of debuggability. In traditional aerospace engineering, every line of code is verified.
If a plane crashes, you can look at the black box and see exactly which Boolean flipped or which sensor failed.
With the new Airbus AI-driven fighters, that’s impossible. You can't "debug" a neural network's decision to fire a missile. You can only look at the weights and biases after the fact and shrug.
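To make the contrast concrete, here's a toy comparison. It has nothing to do with any real avionics code or any real model; the thresholds and the tiny network are invented.

```python
# Toy contrast with invented numbers, not real avionics or any real model.
import numpy as np

# Traditional flight-control logic: when it misbehaves, the black box can tell you
# exactly which condition fired and why.
def envelope_check(angle_of_attack: float, load_factor: float) -> bool:
    return angle_of_attack < 25.0 and load_factor < 9.0

# A toy neural "policy": the output is the product of hundreds of weights.
# There is no single flag or boolean to point at after the fact, only a score.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=16)

def policy(sensor_vector: np.ndarray) -> float:
    hidden = np.tanh(W1 @ sensor_vector)
    score = float(W2 @ hidden)
    return 1.0 / (1.0 + np.exp(-score))   # a confidence, not a reason

print(envelope_check(12.0, 3.5))                  # True, and you know exactly why
print(policy(np.array([0.2, -1.3, 0.7, 0.05])))   # a score between 0 and 1, and you don't
```

The first function produces an answer and an explanation for free. The second produces an answer and, at best, a post-mortem research project.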
We are effectively refactoring accountability out of existence.
I’ve heard the counter-argument from the "AI-optimists" in my circle.
They say that AI doesn't get tired, doesn't get scared, and doesn't have "ego." They claim it will actually reduce civilian casualties because it will be more "precise" than a panicked 24-year-old pilot in a dogfight.
That is a dangerous delusion. It assumes that the "precision" of a model is the same thing as the "judgment" of a human.
Precision is just hitting a target accurately; judgment is deciding if the target should be hit at all.
When my AI-assisted IDE (I’m currently using a custom build of Cursor powered by Gemini 2.5) suggests a bad function, the stakes are low. I might waste ten minutes fixing a bug.
But the "Edge Cases" in combat aren't just bugs—they are lives.
Airbus is banking on the old maxim that "quantity has a quality all its own." By replacing one $100 million manned jet with five $20 million autonomous Wingmen, they can saturate an area.
It's a "brute force" approach to warfare, driven by the same "scale-at-all-costs" philosophy that gave us the current LLM arms race.
We are treating the sky like a giant training set. The "feedback loop" for these combat models isn't a "thumbs up" or "thumbs down" on a chat interface. It’s the successful destruction of a target.
We are training killers using the same Reinforcement Learning from Human Feedback (RLHF) techniques we use to make sure AI doesn't use swear words.
By this time next year—March 2027—Airbus expects to have the first full squadron of these uncrewed jets integrated into the European defense grid.
They’ve already started "quietly" testing these systems in simulated environments that mimic the current geopolitical hotspots.
The terrifying reality is that once these systems are deployed, there is no "undo" button.
We are entering an era of "Algorithmic Escalation." If an AI on one side makes a split-second decision to strike, the AI on the other side will respond even faster.
We are handing the "Red Button" over to a system that doesn't understand the concept of "death," only "optimization."
As a dev, I’ve always believed that software could make the world better.
But looking at the path Airbus is taking, I’m starting to wonder if we’ve spent so much time asking "Can we build this?" that we forgot to ask "Should we give it a weapon?"
The next time you’re on a commercial flight—maybe an Airbus A350—take a look at the cockpit door. For now, there are still two humans behind it.
But the technology being refined in the "Loyal Wingman" program isn't going to stay in the military sector forever.
Airbus has already talked about "Reduced Crew Operations" for cargo flights.
That’s corporate-speak for "we only want one pilot, and eventually zero." We are being conditioned to accept the "silence" of the cockpit as a sign of progress.
But if we can't trust Claude 4.6 to write a simple SQL query without a typo, why on earth are we trusting its military cousins to navigate the moral complexities of a battlefield?
**Have you noticed your trust in AI systems slipping as they become more autonomous, or am I just being a "doomer" dev? Let’s talk in the comments.
I genuinely want to know if I'm the only one losing sleep over this.**
***
Hey friends, thanks heaps for reading this one! 🙏
If it resonated, sparked an idea, or just made you nod along — I'd be genuinely stoked if you'd show some love. A clap on Medium or a like on Substack helps these pieces reach more people (and keeps this little writing habit going).
→ Pythonpom on Medium ← follow, clap, or just browse more!
→ Pominaus on Substack ← like, restack, or subscribe!
Zero pressure, but if you're in a generous mood and fancy buying me a virtual coffee to fuel the next late-night draft ☕, you can do that here: Buy Me a Coffee — your support (big or tiny) means the world.
Appreciate you taking the time. Let's keep chatting about tech, life hacks, and whatever comes next! ❤️