Sleepwalking Over the Brink? The AI Risks We’re Ignoring

The emergence of artificial intelligence from a niche academic pursuit into the mainstream phenomenon we see today presents a broad array of risks. Yet, as profit motives and techno-optimism drive development at breakneck pace, we are sleepwalking into a very uncertain future.

I have been simultaneously fascinated and alarmed by the evolution of such a powerful and consequential technology. I’ve been in and around IT since the nineties, and first read Douglas Hofstadter’s Gödel, Escher, Bach in the same decade. In the decades that followed, we seemingly made little progress in determining how ‘mind’ emerges in humans or how it might be replicated in silicon. That all changed when the 2017 paper, Attention Is All You Need, introduced the transformer, and suddenly we were off to the races.

Whether or not LLMs (large language models) are truly ‘thinking’ in any real sense, they certainly exhibit competence in tasks that we generally attribute to intelligence. This has fuelled huge optimism among people who believe that a technology generally smarter than humans could soon emerge. This optimism, along with the profit motive, is driving progress at a pace that excludes the Precautionary Principle, the quiet voice that says “What if it goes bad?”

It’s most likely going to be great. There’s this sub chance, that could be 10% to 20%, that it goes bad. The chances aren’t zero that it goes bad.
— Elon Musk

Going bad, to be clear, is a reference to existential risk. For over a decade, Musk has been saying that AI is the greatest existential risk facing humanity (I’m not sure how he squares that with the development of Grok). His position exemplifies the Faustian pact that he and others have struck: racing to build a mechanical Mephistopheles while fearing its power, a fear desperately tamped down by a thirst for dominance.

It is not only the potential for artificial general intelligence that presents us with risk. AI is already driving changes that will have profound impacts on individuals, society, and the natural environment. It was not until I tried to categorise the various concerns posed by the emergence and potential advances of AI that I realised just how many there are. I will necessarily be providing a 30,000ft view of these here, but will examine each more thoroughly in future articles.

I will only be covering what I consider to be direct risks. There are second-order risks that I will need to return to later. Foremost among these is the massive opportunity cost of devoting vast amounts of intellectual and physical resources to chasing AI, diverting them from issues we need to address more urgently, with climate change being the most obvious example. Tragically, AI is simultaneously exacerbating climate change (more below) while distracting us from addressing it.

I have grouped the main risks into three broad categories: personal, societal, and existential. Clearly there are overlaps; job replacement impacts the person replaced (or more likely, not hired) but, multiplied by the million, will have a much wider impact on society.

Personal AI Risk

Reasoning and Cognitive Skills. We are in danger of becoming over-reliant on AI tools for memory, analysis, and problem-solving. Research already indicates that AI dependence can erode these skills over time. Students who overuse chatbots, for example, tend to score lower on critical thinking and information recall tasks. In the face of the serious challenges facing humanity, including the proliferation of fake news, we ought to be enhancing our own thinking skills, not offloading them to a tool. There is a qualitative difference between looking up a fact to support your own work and letting a tool do that work for you. Given that LLMs are trained on the corpus of existing thought and ideas, we potentially risk a reduced capacity for real innovation. This risk to human cognition will be a huge challenge to our education systems. If students are turning in essays written by ChatGPT, to have them marked by Claude, who is actually doing any learning?

Mental Health and Wellbeing. The use of AI chatbots for emotional support can introduce harmful biases and potentially worsen mental health issues. We are seeing LLM-based tools exhibit stigma toward certain conditions and fail to provide adequate therapeutic guidance, notably when the individual is already in crisis, for example when considering self-harm. Chatbots are known for reflecting and reinforcing ideas, creating an echo chamber for anxiety, self-doubt, and even mania. This amplifies individual struggles without the crucial empathy and safeguards of professional human care.

Societal Risk

Job Replacement. AI is rapidly transforming the landscape of work. We’ve already seen hiring freezes and layoffs as corporations switch focus to AI development, or flat-out replace staff with agentic systems. Ironically, some workers have designed the AI implementations that eventually replaced them. Computer engineering graduates struggle to land junior roles, discovering they’ve simply been replaced by Claude.

While some projections anticipate net job growth globally by 2030, a significant proportion of all jobs face exposure to generative AI, risking automation or restructuring. On an individual level, the impact of a layoff on a working parent is huge, especially if there’s no ready alternative employment. Multiply this by hundreds of thousands, possibly millions, and it’s easy to picture a rival to the Great Depression, with the accompanying social cost. We’re told that AI will create new jobs in other sectors, but outside of building energy plants and data centres it’s hard to see where this will happen. Some AI leaders talk about an impending age of abundance, where AI and automation create so much wealth that we can all essentially have the lives we want, with some sort of universal basic income to cover our essential needs. This beneficence would be an absolute first for capitalism, which is not known for its altruism.

Misinformation, Disinformation, and Social Cohesion. AI significantly reduces the cost and effort involved in creating realistic false content, such as deepfake video and audio. Even a small amount of AI fakery can “turbocharge” misinformation campaigns, enabling mass-production of high-quality, targeted fake content. This is already having a direct impact on democracy, undermining trust in the election process, and breeding cynicism that some political actors are willing to exploit to entrench growing division.

i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now
— Sam Altman on X

Bias, Fairness, and Ethical Concerns. AI models ingest vast quantities of data, along with all of its inherent biases. We have seen this perpetuate and amplify discrimination in critical areas like hiring, policing, and lending. AI recruitment tools have demonstrated unfair biases based on gender and race, mirroring existing societal prejudices. In the near term, this means automated systems could systematically disadvantage minorities or the poor in job markets and credit decisions, worsening existing injustices. In the longer term, we could see systemic inequalities simply deepen.

Global Inequality and Development. A significant ‘digital divide’ exists, with wealthier nations vastly outspending lower-income countries on AI research, development, and infrastructure. Without effective oversight, rich countries will capture the majority of AI-driven gains, exacerbating global wealth gaps. Emerging and developing countries, often reliant on manufacturing or services, lack the resources for AI investment or workforce reskilling, making them vulnerable to job displacement. This threatens their development models, risking increased international inequality without targeted aid and policy interventions.

[A]ll of Africa has less than 1 per cent of global data centre capacity and accounts for less than 1,000 GPUs
— Amandeep Singh Gill, Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, United Nations

Privacy, Surveillance, and Governance. AI makes large-scale surveillance and monitoring cheaper and more effective, enabling advanced facial recognition and data-mining systems for near real-time tracking. This promotes authoritarian drift, even in democratic societies where politicians run on a platform of crime reduction. Biases inherent in training data have led to facial recognition systems that are less accurate for some demographic groups, exacerbating the risk of wrongful arrests based on misidentification. On a societal scale, this creep towards ever more pervasive surveillance poses a trade-off between security and liberty, with some actors content to erode individual freedoms in pursuit of authoritarian control.

Exacerbating Climate Change. AI development consumes vast amounts of energy and water, along with critical minerals. In the US alone, corporations have committed over $1 trillion in spending on data centres in the short to medium term. Power demand for US data centres is projected to reach over 100 gigawatts, up from 4 gigawatts last year. Globally, data centres are expected to account for 8% of total greenhouse gas emissions by 2040, overtaking air travel. In other words, if data centres were a country, they could overtake India as the third largest emitter of CO2.
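
As a rough sanity check on that comparison, here is a back-of-envelope sketch in Python. The global and national emissions figures are approximate public estimates (my own assumptions, not figures drawn from any source cited here), and comparing a 2040 projection against today’s emitters is indicative only:

```python
# Back-of-envelope check: would data centres at 8% of global GHG
# emissions rank third among "emitter countries"?
# All figures are approximate public estimates in GtCO2e per year (~2023).
GLOBAL_EMISSIONS_GT = 53.0
EMITTERS_GT = {"China": 14.0, "United States": 6.0, "India": 3.9, "EU27": 3.2}

data_centres_gt = 0.08 * GLOBAL_EMISSIONS_GT  # the projected 8% share
ranking = dict(EMITTERS_GT, **{"Data centres (2040 proj.)": data_centres_gt})

for rank, (name, gt) in enumerate(
        sorted(ranking.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: {gt:.1f} GtCO2e/yr")
# Output places data centres (~4.2 GtCO2e) above India (~3.9), i.e. third.
```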

Likewise, the water demands of data centres, where air cooling is insufficient for AI chipsets, are staggering, with many data centres being built in areas of high water stress. The UK already faces a projected daily water deficit of nearly 5 billion litres by 2050, equivalent to over a third of the current public water supply.
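
That ‘over a third’ figure is easy to verify with one line of arithmetic. The sketch below assumes a current public water supply of roughly 14 billion litres per day, an approximation based on published Environment Agency estimates rather than a figure from this article:

```python
# Quick check of the "over a third" claim (approximate figures).
DAILY_DEFICIT_BN_LITRES = 5.0   # projected UK shortfall by 2050
DAILY_SUPPLY_BN_LITRES = 14.0   # current public supply (assumed approximation)

share = DAILY_DEFICIT_BN_LITRES / DAILY_SUPPLY_BN_LITRES
print(f"Deficit as a share of supply: {share:.0%}")  # ~36%, over a third
```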

Sir Keir Starmer’s promise that the UK can simultaneously lead in AI and meet its legal Net Zero commitments by 2050 represents magical thinking at the highest levels of government.
— Professor John Naughton (Foreword to MCTD Cambridge’s “Big Tech’s Climate Performance and Policy Implications for the UK”)

Globally, the International Energy Agency (IEA) estimates that the data centre sector consumes over 560 billion litres of water annually. Projections indicate this figure could rise dramatically, reaching as high as 1.2 trillion litres by 2030. The Stockholm Resilience Centre assessed that the planetary boundary for freshwater had already been crossed in 2023. Water stress negatively impacts ecosystems, public health, agriculture, and economies, leading to increased pollution concentration, reduced water quality, and food and energy shortages.
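
To put those volumes in more tangible terms, a short sketch (the Olympic-pool conversion is my own illustration, not from the IEA):

```python
# Scale of projected data centre water use (illustrative only).
CURRENT_LITRES = 560e9         # IEA estimate, annual consumption
PROJECTED_2030_LITRES = 1.2e12
OLYMPIC_POOL_LITRES = 2.5e6    # a standard Olympic pool holds ~2.5 ML

print(f"Growth multiple by 2030: {PROJECTED_2030_LITRES / CURRENT_LITRES:.1f}x")
print(f"Olympic pools per year by 2030: "
      f"{PROJECTED_2030_LITRES / OLYMPIC_POOL_LITRES:,.0f}")
# Roughly a 2.1x increase, or some 480,000 Olympic pools per year.
```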

Existential Risk

Geopolitical Competition and Security. Major powers are racing to embed AI into military and strategic systems. Despite assurances that a ‘human in the loop’ will be a feature of strategic decisions, autonomous weapons systems are proliferating. An unconstrained move towards greater systems autonomy presents what some diplomats and weapons manufacturers refer to as an ‘Oppenheimer moment’: a Rubicon whose crossing would usher in an era of unchecked destructive power.

This is the culmination of our fears about technology and the changing character of warfare—that we can now be targeted by something relentless, a thing that does not sleep.
— Thom Hawkins and Alexander Kott

Globally, an unchecked AI arms build-up could spark conflicts, lower thresholds for force, and leave little time for diplomacy when the inevitable AI-driven errors occur.

Alignment and Precautionary Governance. Another type of ‘AI Arms Race’ is also underway, as the ‘tech bro’ billionaires constantly remind us. We’re told that ‘our’ AGI will be benevolent while China’s will be malign, a convenient fiction that distracts from a more terrifying truth: an unaligned superintelligence has no nationality.

What happens when an AGI decides that some / most / all humans are an unnecessary resource, installs itself somewhere out of physical reach (e.g. Starlink, Kuiper), and begins assuming control of drones and missile systems? Or simply water treatment facilities and nuclear plants? Alarmist AI ‘doomerism’, or potentially realistic scenarios? If a single teenager can hack the Pentagon, what chance have most cyber-security systems got against a swarm of dedicated AI agents, increasingly unshackled from their sandboxes to interact with internet browsers and file systems? We’ve already seen an autonomous AI pentester (Xbow) take the top spot on a major bug-bounty leaderboard for finding security vulnerabilities.

The risk of a truly intelligent AI being misaligned with human values is almost incalculable and, in some scenarios, existential. Yet despite strident warnings from researchers, any serious discussion in the regulatory sphere is being set aside in the race to ‘beat China’. It’s as though we’re content to risk extinction, provided the tools to facilitate it originate in Silicon Valley.

Researchers have already introduced a taxonomy, Psychopathia Machinalis, to catalogue the manifold ways that an AI’s goals and behaviours can drift from alignment with human values, resulting in reward hacking, blackmail, and worse.

We need the precautionary principle more than ever

Despite its utility in dealing with environmental risk, and its codification in conventions and charters, the developed world largely winks at the Precautionary Principle. We continue producing millions of tons of plastics annually, with virtually no understanding of the effects of microplastics on the biome. So it’s hardly surprising that the Precautionary Principle is virtually invisible in discussions around AI risk. This needs to change, rapidly.

Is there any hope?

The widespread failure of humankind to respond in any meaningful way as we approach Earth’s system boundaries suggests we won’t do any better with AI risk. Combining our collective lethargy with the rapidity of development in the AI space seems like a sure recipe for crossing red lines, likely before we’re even aware of them.

About my only reason for clinging to any optimism is knowing that there are serious researchers already trying to raise the alarm. However, we ignored and undermined climate whistle-blowers for decades, so goodness knows whether we will respond any better to those sounding the alarm on AI, who are already being dismissed as ‘doomers’. Given the hugely compressed timelines we’re looking at in AI development, we don’t have long to start paying attention.