
Big Data, Broken Trust and Britain’s AI Dilemma

Published 2nd March 2026
Written by Imeltine van Essen

In 2017, a now somewhat infamous Economist headline declared: ‘The world’s most valuable resource is no longer oil, but data.’ Nearly a decade later, that claim has come to pass. Data has been refined and commoditised, transformed into artificial intelligence systems capable of diagnosing cancer, generating essays (though not this one) and reshaping labour markets. Yet in Britain, this technological transformation is unfolding largely beyond public awareness: fewer than 6% of adults say they know a lot about the UK’s AI strategy, and nearly half have never heard of it.

Britain is trying to build an AI-led growth strategy in a country where almost half the population has never heard of the government’s AI plan. This disconnect between strategic ambition and public understanding reveals what can be described as an emerging AI legitimacy gap.

Britain has played a foundational role in modern computing, from Alan Turing’s formalisation of computation to the development of the ARM processor architecture, now embedded in billions of devices worldwide. While the United States dominates frontier AI development and private investment, the UK maintains a strong academic position, with leading research coming out of institutions such as Google DeepMind’s London lab and the Alan Turing Institute. Yet despite the quality of this research, personal adoption of AI in the UK is among the lowest in Europe: only 20% of people use AI in their daily lives, compared with 33% in the Netherlands and 29% in Spain. The country that helped shape modern computing now lags behind several of its European peers in public engagement with the technology.

This gap in adoption likely reflects public sentiment towards AI. The UK registers the highest level of AI negativity among comparable European economies, with 45% expressing negative sentiments, compared to 20% in Spain, 25% in the Netherlands and 29% in Germany. These sentiments shape adoption and, eventually, what we politically believe we should invest in. When the British public were asked which infrastructure developments should be prioritised, only 3% named digital infrastructure. Meanwhile, 59% prioritised healthcare and 33% energy, sectors that feel immediate and tangible.

Digital infrastructure, by contrast, feels abstract. But this distinction doesn’t capture the full picture. AI-driven cancer diagnostics, hospital resource planning and smart energy grids all depend on precisely the data and compute capacity housed in data centres. Far from being an elite technical add-on, digital infrastructure is becoming the operating system of the public services we use every day. So what explains the hesitation?

One explanation is a lack of trust. Technology ought to serve as a collective social asset, but it is increasingly shaped within institutional and corporate spheres that sit beyond the reach of ordinary democratic participation.
There have also been multiple scandals that stick in people’s minds, a pattern explained by a well-defined psychological phenomenon, the availability heuristic: people are more likely to recall high-profile failures or dystopian headlines than the quieter, incremental gains AI delivers within hospitals, energy systems or logistics networks. Limited use of AI in private contexts reinforces this distance: when individuals do not meaningfully interact with a technology, its benefits feel abstract while its risks loom larger. The gains may occur at the level of markets, research labs and public systems, but without visible personal benefit, public confidence struggles to take root.

To understand this scepticism, we need to look backwards. British scepticism toward AI does not emerge in a vacuum. It follows a decade marked by the Cambridge Analytica scandal, repeated data breaches and public backlash against NHS data-sharing initiatives. These episodes exposed how personal data could be harvested, mishandled or monetised without meaningful consent. In this context, AI appears not as a clean technological advance or a fresh opportunity, but as the latest phase of a system many feel they never fully authorised, an extension of structures that have already eroded public trust. The loss of privacy has become commonplace, while the advantages are neither obvious nor felt by most people. This data sensitivity is apparent: 59% of Britons say they are worried about their data being collected, while 38% believe they no longer know how to protect it.

Britain cannot build an AI economy without public literacy, trust and visible benefit. Although the UK is one of the biggest nations for data centre construction, it risks falling behind in developing the social currency needed to keep pace with the technology. If data is the new oil, Britain risks drilling the wells without explaining the engines it powers or ensuring that citizens share in the returns. There are legitimate sources of distrust, from the perceived excesses of big technology firms to scepticism toward central government, and these cannot simply be overridden in the name of growth. A democratic society cannot quietly push through technological transformation on behalf of a public that neither understands nor feels ownership over it. Capacity can be engineered in data centres, but legitimacy must be earned in public life. Perhaps Britain could take a leaf from the political playbook of its European neighbours and establish permanent citizens’ panels or assemblies on AI and digital infrastructure, giving people a visible voice in shaping how these systems develop. If Britain is to lead in AI, it must give citizens not only access to its benefits but a meaningful sense of agency in how it unfolds.

Imeltine van Essen is an associate researcher at Global Future. With a background in biology and plant science, she completed a master’s in biotechnology at Imperial College London. Having worked in research labs firsthand, she believes technology and scientific innovation can help address some of the world’s most complex challenges when developed with consideration for people and the environment. She writes about science, sustainability and the social implications of emerging technologies.



© 2025 Global Future Foundation. All Rights Reserved.