All eyes on AI

1 February 2024



AI offers vast potential for defence forces, from enhancing the speed and efficiency of decision-making to protecting personnel by automating hazardous tasks, and more. Nicholas Kenny speaks with Hugh Woodward, head of HQ and strategy at the Defence Artificial Intelligence Centre (DAIC), UK Ministry of Defence (MoD), to learn about the DAIC’s work to drive responsible AI-enabled capabilities and the benefits this can offer the UK’s armed forces.


We live in an era of increasing global competition, where any military’s strength is only as robust as its ability to exploit cutting-edge technological advances. Indeed, it was Frank Whittle, English engineer and Royal Air Force (RAF) officer, best known as the inventor of the jet engine, who noted: “A nation’s ability to fight a modern war is as good as its technological ability.” We’ve come a long way since the jet engine, but his point remains no less relevant today, as militaries around the world compete to stay ahead of technological developments. AI is one such key technology, viewed as an essential part of defence modernisation – and it’s not hard to see why.

AI can enable defence forces to make use of previously unimaginable quantities of data, helping to improve decision-making while boosting effectiveness and efficiency across the entirety of defence operations. Of course, AI is also a double-edged sword – offering the same benefits to adversaries as it does to allies, making it all the more important to stay ahead in the race.

In the UK, the defence sector – with its existing legal, safety and regulatory compliance regimes – is already well prepared for the challenge of adopting new technologies, and the Ministry of Defence (MoD) has clearly stated that it is developing AI ambitiously, safely and responsibly by implementing its AI Ethics Principles. By working alongside allies and industry partners, the MoD expects this approach to ultimately result in more robust, reliable and effective AI-enabled capabilities, thereby advancing the UK’s military edge.

Champion, enable, innovate

The MoD has been quick to embrace the potential AI offers from the battlespace to the business space, laying out its goals in June 2022’s ‘Defence Artificial Intelligence Strategy’. In the report’s foreword, then-Secretary of State for Defence Ben Wallace noted the dangers of falling behind on AI, even as he laid out the UK’s strategy for implementing the technology across its defence forces. “AI has enormous potential to enhance capability, but it is all too often spoken about as a potential threat,” he wrote. “AI-enabled systems do indeed pose a threat to our security, in the hands of our adversaries, and it is imperative that we do not cede them a vital advantage.”

With this in mind, the MoD has announced plans to invest more than £6.6bn in defence research and development by 2025, with a specific focus on emerging technologies including artificial intelligence and AI-enabled autonomous systems, according to the 2021 Integrated Review – ensuring the UK remains competitive in this space in the coming years.

One key aspect of the MoD’s AI strategy lay in the formation of the Defence AI Centre (DAIC), which was first announced as a prime ministerial priority project back in the 2020 Strategic Defence and Security Review (SDSR). The DAIC was formally established by the Integrated Review and began operation in 2022, tasked with transforming the MoD into an ‘AI-ready’ organisation for the strategic advantage of its armed forces.

“There was a sense that we needed a central UK hub to lead the conversation and thinking around AI for defence. And when I say defence, I don’t just mean the UK MoD, but also the wider ecosystem – international partners, cross-government teams […] industry and academia,” says Hugh Woodward, the DAIC’s head of HQ and strategy. “You could think of the DAIC as the central catalyst to help accelerate the adoption and development of responsible AI across defence.”

In this role, the DAIC embraces a mantra of “champion, enable, innovate”. It champions AI in defence by advancing the understanding and development of AI capabilities across the MoD, while providing the department of over 200,000 personnel with a focal point for this technology.

At the same time, and at the heart of its purpose, the DAIC is tasked with enabling other MoD teams to develop AI systems by adding value wherever it can – providing common AI services and good practice, alongside a critical mass of centralised AI-related expertise.

That’s ‘champion’ and ‘enable’ covered, but the DAIC is also focused on innovating – leading and directly working on several MoD projects aimed at providing AI solutions to real defence challenges. “When we say ‘innovate’, we don’t mean just coming up with good ideas,” Woodward notes. “Defence innovation is about putting capabilities into service – actually driving a genuine change.”

£6.6bn
The amount the UK MoD intends to invest in defence R&D for emerging technologies.
UK MoD

The DAIC leads AI development projects itself for two main reasons. First, because it makes sense for the central hub to drive projects that can benefit the entire MoD, such as ‘MODGPT’ – the DAIC’s large language model project. Second, because some projects address challenging use cases that will help the DAIC to better understand the barriers to deploying AI across defence, and to develop common enablers to overcome those barriers. This is the case for Project SPOTTER, which explores the automated detection and identification of objects within classified satellite imagery. Directly working on AI projects, whether in a leading or supporting role, also gives the DAIC credibility when providing advice and other products and services in its ‘enable’ function.

“If we were not safe and responsible, then we’d [...] lose the trust of end users and the public. And [...] that would actually make us less ambitious.”

Problem solver

The DAIC sees six main problem spaces in defence in which AI has clear potential to deliver step changes in performance, which it lays out in the ‘Defence AI Playbook’: recognise, comprehend, predict, simulate, generate and decide. The first, ‘recognise’, involves using AI to help detect subjects of interest within the vast volumes of sensor data the MoD currently collects – this includes the previously mentioned Project SPOTTER, alongside other projects concerned with object detection in satellite images, analysing radio signals, and so on.
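
To make the ‘recognise’ idea concrete, the sketch below shows the general pattern of running a pre-trained object detector over tiles of a large image. It is purely illustrative and not a description of Project SPOTTER itself; the model choice, tile size and score threshold are all assumptions.

    # Illustrative only: a generic detector applied tile-by-tile to a large image,
    # the basic pattern behind 'recognise'-style tasks such as finding objects in imagery.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_in_tiles(image_path, tile=512, score_threshold=0.6):
        """Slide a fixed-size window across the image and collect confident detections."""
        image = Image.open(image_path).convert("RGB")
        width, height = image.size
        detections = []
        for top in range(0, height, tile):
            for left in range(0, width, tile):
                crop = image.crop((left, top, left + tile, top + tile))
                with torch.no_grad():
                    output = model([to_tensor(crop)])[0]
                for box, score, label in zip(output["boxes"], output["scores"], output["labels"]):
                    if score >= score_threshold:
                        x1, y1, x2, y2 = box.tolist()
                        # Shift box coordinates back into the full-image frame.
                        detections.append((label.item(), score.item(),
                                           (x1 + left, y1 + top, x2 + left, y2 + top)))
        return detections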

‘Comprehend’, then, covers AI deriving insights from large unstructured or semi-structured datasets, which Woodward explains is associated more with back-office functions. This includes the Intelligent Search & Document Discovery programme, which aims to simplify policies and boost understanding by enabling efficient searching and navigation of documents, uncovering valuable insights and identifying previously unseen connections.
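
As a rough illustration of the ‘comprehend’ pattern – emphatically not the Intelligent Search & Document Discovery programme itself – the sketch below embeds a handful of placeholder documents and retrieves those most relevant to a natural-language query. The model name and example texts are assumptions.

    # Illustrative only: embedding-based semantic search over a few placeholder documents.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    documents = [
        "Policy on the storage and handling of hazardous materials.",
        "Guidance on travel and subsistence claims for personnel.",
        "Maintenance schedule and spares policy for vehicle fleets.",
    ]
    doc_embeddings = model.encode(documents, convert_to_tensor=True)

    def search(query, top_k=2):
        """Return the documents whose embeddings are most similar to the query."""
        query_embedding = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
        return [(documents[hit["corpus_id"]], hit["score"]) for hit in hits]

    print(search("Which policy covers spare parts for trucks?"))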

Woodward describes the ‘predict’ function as “forecasting what might happen in the future, based on what’s happened in the past”. This is something AI excels at, covering uses like predictive maintenance and parts-failure forecasting, or the optimisation of training regimes.
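
A toy example of the ‘predict’ pattern is sketched below: a classifier trained on historical sensor readings to flag components at risk of failure, so that maintenance effort can be directed at the highest-risk items first. The synthetic data, feature names and thresholds are illustrative assumptions, not real telemetry.

    # Illustrative only: predictive-maintenance-style classification on synthetic sensor data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.uniform(0, 5000, n),   # hours since last overhaul (assumed feature)
        rng.normal(1.0, 0.3, n),   # vibration level (assumed feature)
        rng.normal(80, 10, n),     # oil temperature (assumed feature)
    ])
    # Synthetic label: failures become more likely with wear, vibration and heat.
    risk = 0.0004 * X[:, 0] + 1.5 * (X[:, 1] - 1.0) + 0.02 * (X[:, 2] - 80)
    y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

    # Rank items by predicted failure probability to prioritise inspections.
    failure_probability = model.predict_proba(X_test)[:, 1]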

‘Simulate’, on the other hand, involves running scenarios and generating data to inform planning and different courses of action. Here, Woodward notes that AI-enabled tools can come to different conclusions than a human would in certain contexts, citing the Playfun Learnfun AI system created by computer scientist Tom Murphy back in 2013.

6
The number of problem spaces in defence where AI offers great potential.
DAIC Defence AI Playbook

Designed to ‘solve’ how to play classic Nintendo Entertainment System (NES) games like Super Mario Bros., the AI’s methodology completely failed when it came up against Tetris. After attempting to gain a higher score by laying blocks randomly on top of each other, the AI would pause the game as soon as the screen had filled up – just before it was about to lose – leading its creator to quote the 1983 film WarGames: “the only winning move is not to play”.

The fastest-growing space, however, is ‘generate’, which concerns the use of generative AI to create new content that appears natural or human-made – a technology that has been grabbing headlines in the public sphere for some time now. For the DAIC, the focus here is on the huge amount of work required to use this form of AI in “an ambitious, safe and responsible manner,” Woodward says – and these are the watchwords that the DAIC applies across all of its work.

Finally, we have ‘decide’, which refers to creating autonomous or automated behaviours by selecting the actions to achieve a goal – the kind of system that would drive how drone swarms behave, or last-mile resupply solutions. Here, Woodward cites November’s DAIC-sponsored hackathon, which saw 40 programmers demonstrate how AI-enabled robotic dogs could carry out potentially dangerous tasks that would otherwise be undertaken by army bomb-disposal experts.
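
The ‘decide’ space is, at heart, about choosing actions to meet a goal. The toy planner below greedily assigns last-mile resupply drops to the nearest available drone – a deliberately simple stand-in for the autonomy work described above, with made-up names and coordinates, not a DAIC system.

    # Illustrative only: greedy assignment of resupply drops to drones by distance.
    from math import hypot

    drones = {"drone_a": (0.0, 0.0), "drone_b": (5.0, 5.0)}
    drop_points = {"post_1": (1.0, 2.0), "post_2": (6.0, 4.0), "post_3": (2.0, 1.0)}

    def assign_drops(drones, drop_points):
        """Send each drop to the closest drone, breaking ties by current workload."""
        assignments = {name: [] for name in drones}
        for drop, (dx, dy) in drop_points.items():
            best = min(drones, key=lambda d: (hypot(drones[d][0] - dx, drones[d][1] - dy),
                                              len(assignments[d])))
            assignments[best].append(drop)
        return assignments

    print(assign_drops(drones, drop_points))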

Ultimately, the AI solutions applied to these six problem spaces enhance the UK’s defence capabilities in key areas highlighted in the Defence AI Strategy: boosting the efficiency of business processes and support functions; increasing the quality of decision making and speed of operations; improving the security and resilience of interconnected networks; enhancing the mass, persistence, reach and effectiveness of the UK MoD’s armed forces; and protecting its personnel from harm by automating “dull, dirty and dangerous” tasks.

The human touch

Returning to the DAIC’s watchwords, established in its eponymous ‘Ambitious, safe and responsible’ policy paper, Woodward believes they serve as the perfect statement of the UK MoD’s approach in this space. The paper lays out the MoD’s aspiration to accelerate best-in-class AI-enabled capabilities in accordance with five ethical principles jointly developed with leaders in the field, but crucially stresses the need to incorporate AI only where it is the appropriate tool for the task at hand, concluding: “We will not adopt AI for its own sake; it is not an ‘end’ in itself.”

“We want to be ambitious – we want to move at pace and make sure that defence can harness this technology,” notes Woodward, reflecting on what these watchwords mean to the DAIC’s work. “But at the same time, if we were not safe and responsible, then we’d rightly lose the trust of end users and the public. And ironically, that would actually make us less ambitious, because we’d effectively be slowed down.”

The ‘Ambitious, safe and responsible’ paper extended these values to any potential future use of AI in weapon systems, noting that while the MoD would not rule out doing so, some concepts and capabilities may prove impossible to deliver in a safe and responsible manner. Indeed, it states that there must be “context-appropriate human involvement in weapons which identify, select and attack targets”, citing the need for real-time human supervision or another form of control.

While noting AI offers great potential to augment the performance of personnel and enhance defence capabilities, the paper also states that, given ethical concerns and risks over delegating certain decisions to AI, the best outcomes will come from human-machine teaming – where AI augments human capabilities rather than autonomously carrying out tasks.

“We feel that human judgement will always be necessary in nearly all aspects of defence work. That’s why human-machine teaming is our default approach to AI adoption – that’s our firm position,” notes Woodward.

Tying defence together

In the long term, the DAIC aims to respond to developments in the technology and then help teams across the MoD bring those advances into operational use. Across the MoD, frontline commands have started to build their own focal points for AI – in the form of the Army AI Centre and the Navy AI Cell, with the RAF possessing its own hubs of AI expertise – which Woodward welcomes, seeing this growth as the beginnings of a federated model across the MoD, working in synergy.

“For the MoD, if you didn’t have the DAIC, then I think everyone would try and develop AI solutions in isolation – and therefore there would be a risk of duplication,” he notes. “There’d be a risk of not being able to scale [AI systems] because they would be built on different architectures and platforms. […] I see the long-term evolution of the DAIC in being a recognised central hub where people know they can come and talk about the technical implementation of AI and interface with partners, whether from the MoD or from industry.”

Recent events, like the DAIC’s robot dog hackathon or its large language model hackathon, have been particularly valuable in building the DAIC’s profile as a focal point in this area. Similarly, November’s inaugural DAIC Connect event and the sixth iteration of AI Fest in autumn 2024 – which Woodward describes as “our big blue ribbon flagship engagement event” – will do much to advance thought leadership, coherence and collaboration around AI and its use across the MoD.

This need for collaboration extends to commercial processes as well, which can often be frustratingly slow and time-consuming in the defence sector. The DAIC has teamed up with Commercial X, an initiative aimed at identifying and breaking down some of the barriers within MoD procurement processes, speeding up the delivery of new defence technologies to the user.

One of the advantages that the DAIC and Woodward see in improving procurement processes is that it makes defence a more attractive sector to work with, which benefits both suppliers and the MoD itself. “One of the challenges with AI is that it’s a very crowded space, and every sector wants access to what is quite a limited talent pool,” he adds. “That remains a bit of a challenge for the defence sector.”

Recognising the importance of industry to the UK’s defence capabilities is important to Woodward, who notes that “a government or army doesn’t go to war alone – it’s the whole country”. Even in peacetime, the deterrence factor stems not only from an army or the MoD, but from the nation as a whole. For Woodward, this underlines why the DAIC wants to generate a much more dynamic and integrated relationship with industry, beyond simply making it easier to get in contact and running more effective contract competitions.

In his eyes, achieving the DAIC’s goal of fostering a thriving AI ecosystem in the UK, across government, academia and industry, will ultimately serve as both a huge deterrent to adversaries and a powerful demonstration of defence capabilities. As long as the centre continues driving AI in an ambitious, safe and responsible manner, Woodward is confident that it’s already well on its way.

TITAN is a large unmanned ground vehicle (UGV) that provides support for small, dismounted military operations. Image Credit: MoD Crown Copyright News/Editorial Licence.
Boston Dynamics’ Spot provides valuable insights into routine operations, site health or potentially hazardous situations. Image Credit: MoD Crown Copyright News/Editorial Licence.

