Pilot programme
1 February 2024
AI has already demonstrated the ability to beat human pilots in training scenarios – and air forces around the world are increasingly looking at ways to incorporate AI into their operations. Andrea Valentino talks to Stacie Pettyjohn, director of the defence programme at the Center for a New American Security, and to a spokesperson from the US Air Force, about whether AI pilots will become the norm in the coming years – and explores the technical and practical challenges to successful implementation.
The First World War was never a conflict really marked by chivalry. How could it be? Between the mortars, the rats and the hopeless charges against enemy wire, this quintessentially modern struggle soon put paid to any lingering hopes Europeans might have had that killing could be honourable. At least, this is almost true. For while the Tommies were being butchered in the fields, their allies in the air did perhaps recall the romance of pre-modern battle. Evocatively known as the ‘knights of the sky’, early fighter pilots still relied on talent and pluck to defeat their foes, even as the nature of the fight meant they avoided the grubby reality of life in the trenches. “The heavens are their battlefield,” is how David Lloyd George, then-prime minister of the UK, vividly put it in 1917. “They are the Cavalry of the Clouds.”
Much has changed, of course, since the sepia-tinged days of wobbly biplanes – and it would be wrong to overly glamorise a profession still ultimately based on death. Even so, the intervening century has seen fighter pilots broadly retain the reputation they gained back in 1918: one where individual skill, and intensive teamwork, matter just as much as missiles and bombs. Quite aside from the latest Top Gun, that’s clear enough from the accounts of pilots themselves. Between maintaining fantastic situational awareness, even at 1,300mph, and withstanding forces of up to 9G, contemporary pilots need to be as tough, disciplined and courageous as ever, even if their aircraft are infinitely more complex than an old Sopwith Camel.
Soon, however, fighter pilots are likely to step into fundamentally different aircraft. Shadowing the automation revolution that’s transforming every facet of military life, the fighters of tomorrow seem destined to be supported by AI co-pilots, with engagement tactics and aerial manoeuvres left to ones and zeroes. At the same time, pilots may soon dispense with human wingmen, instead relying on a plethora of drone-like machines to beat the enemy down. Not that the airmen themselves are vanishing entirely. Particularly given the ethics of air combat, and perhaps also reflecting the intimate nature of old school dogfights, the knights of the sky are unlikely to vanish for good.
AI in the sky
Over the past few decades, a single word has been on the lips of military planners everywhere: automation. From robotic sentries to crewless submarines, this is clear across different services, with the Pentagon spending around $7.5bn on such platforms in 2021 alone. With this in mind – and reflecting a global sector expected to enjoy a CAGR of over 8% by 2032 – it makes sense that the US Air Force is moving in the same direction. As Stacie Pettyjohn at the Center for a New American Security puts it, pilots and officers alike “don’t want to miss out” on an opportunity that could “improve their ability to operate”.
Appreciate the technology involved here, and this argument is easy to understand. Thinking in a strikingly different way to humans – displaying more aggression, more situational awareness and faster reactions than even veteran pilots – AI can fundamentally alter the balance of forces in air combat. An excellent example here involves the AlphaDogfight competition, run by the Defense Advanced Research Projects Agency (DARPA), a research arm of the US Department of Defense. In a simulated dogfight, an AI-powered plane beat its experienced human foe by 16 hits to zero. Even more strikingly, the computer-powered plane employed so-called ‘forward-quarter’ gunshots, a manoeuvre where a pilot flies directly at the enemy before firing – and one banned by the Air Force for being too dangerous.
Apart from the machinery itself, you get the feeling that military planners are equally adopting AI for another reason: to protect pilots. “Similar to drone warfare, autonomous technology is used to minimise operator workload,” is how a US Air Force spokesperson laconically puts it.
Pettyjohn agrees. “[In] the US, as a democracy and as a military, the services put a tremendous amount of value on people’s lives,” she says. “And so, at times, if there are ways that you can automate things, you can save costs, and you can potentially save lives as well.”
Both these points feel fair. For if removing flesh-and-bone pilots from the equation can obviously do much to keep personnel safe, it’s equally clear that losing pilots poses challenges in hard-nosed financial terms. According to work by the RAND Corporation, for example, it costs roughly $11m to train an F-22 pilot – and that’s before you factor in the cost of housing them, feeding them and providing their pensions when they retire.
$7.5bn
The amount received by the Pentagon to fund unmanned systems in the Air Force, Navy and Army in 2021.
Beyond these theoretical tests, what might a future shaped by AI pilots actually look like? One difficulty here involves the plethora of programmes currently in motion, both at the Pentagon and elsewhere. The US Air Force, for instance, launched its ‘Collaborative Combat Aircraft’ programme in March. DARPA, for its part, is focused on the ‘Air Combat Evolution’ scheme, even as countries as varied as Italy and Japan invest in their own automated aerial systems. And with military sources typically tight-lipped about the specifics, getting a grip on what dogfights in 2030 might look like is tough – but Pettyjohn is happy to do some educated guesswork.
As far as the Collaborative Combat Aircraft programme is concerned, for example, she envisages drones acting like a gaggle of “unmanned, loyal wingmen” for pilots flying nearby. Likely exerting some control over their “three-to-five” robotic allies, humans would be able to fire their weapons even as the drones were left to fly and position themselves. There’s some evidence, meanwhile, that researchers are already making progress here. Take the Kratos XQ-58 Valkyrie, an experimental loyal wingman with a design somewhere between a fighter jet and a speedboat, capable of scouting enemy squadrons, or else absorbing fire when battle commences. In a way, DARPA’s Air Combat Evolution scheme is even more ambitious. Giving AI full command over an actual aircraft, the programme saw a computer fly a modified F-16 on multiple sorties over several days in December 2022. Among other things, the artificial pilots proved adept at take-offs and landings, and at dealing with a range of simulated enemies.
Aside from the technical accomplishments here, at any rate, it’s not hard to see how useful these developments could be. In the first place, that’s clear from a financial perspective. As the Air Force spokesperson puts it: “Utilising automated technology, teamed with crewed aircraft, to project larger force training engagements at a lower cost, is one possible approach.” For her part, Pettyjohn echoes the results of that AlphaDogfight competition, arguing that AI algorithms can help warfighters “make decisions” around when weapons are used, or else what engagement profile to use.
$11m
The cost of training an F-22 pilot.
All this is doubly urgent, meanwhile, when you remember that it’s not only friendly countries like Italy rushing to build AI-powered aircraft. In March, to give one example, Chinese military researchers announced that an AI drone had bested a human pilot in a real-world setting. With this in mind, it’s no wonder the US Air Force spokesperson is keen, past all the talk of finances and efficiencies, to emphasise the way computers can help their country defeat “adversaries” in the sky.
In the cloud
Given the sophistication of all those ones and zeroes, it’s tempting to imagine that the aerial duels of tomorrow will be fought without any humans at all. But that’s not quite true. In large part, this is a question of ethics. To put it differently: can a computer really be trusted to fire a missile primed to kill? Recent reports don’t offer much reassurance here. In May 2023, for instance, Colonel Tucker ‘Cinco’ Hamilton, the US Air Force’s chief of AI test and operations, claimed that an AI drone, programmed to destroy an enemy air defence system in an exercise, ended up attacking anyone who interfered with the initial order. Later, the US Air Force would go on to deny the initial reporting, suggesting the colonel’s statements had been taken out of context “and were meant to be anecdotal”.
With all that in mind, it’s unsurprising that the Air Force’s spokesperson should highlight the need for robust oversight whenever robots are used. “AI activities will augment, not eliminate, human intelligence,” they stress. “The Department of Defense requires that all weapons systems, including those that incorporate autonomous elements, must include appropriate levels of human judgement.”
This more subtle approach to AI integration makes sense in other ways, too. In a field so predicated on individual skill – and on deep reliance on human comradeship – Pettyjohn worries that talented pilots would baulk if their wingmen became wing-robots overnight. “Flying is a very human endeavour,” is how she puts it, adding that “developing that trust” between man and machine is a crucial first step. Fair enough: in a situation as intense as a dogfight, the last thing pilots want to worry about is that their partner will shoot them down to fulfil some abstract command. All the same, and especially given the rising prospect of a shooting war in the Pacific over the coming years, the robots look to be here to stay – even if they should probably be viewed as steeds, and not knights all by themselves.