I look at the prospects for artificial intelligence (AI) in 2026 with a cloud of unease smattered with faint glimmers of hope.
We are in an AI bubble, with Morgan Stanley estimating almost $3 trillion will be spent buying computer chips and building data centres by 2028, but relatively little revenue arising from AI-based services to pay back the debt financing those investments.
The Bank of England recently warned that a “sharp correction” could mean a market crash similar to the 2008 financial crisis. However, the AI industry may be considered ‘too big to fail’: governments around the world, including here in the UK, are betting big on AI-driven productivity and economic growth. It seems more likely they will seek to reassure investors by offering incentives and long-term contracts; reducing barriers to data centre construction; and retaining a low-regulation environment.
If the bubble doesn’t burst, we will still be trapped in it, along with AI firms increasingly desperate to bring in revenue (and stay on the right side of Trump) regardless of the impact on our information environment. We are already subject to low-quality AI-generated websites and dodgy search summaries. OpenAI’s Sora 2, which some have dubbed “SlopTok”, has been used to churn out deepfake videos that stoke racial tension. Elon Musk’s Grok has been used to create a version of Wikipedia riddled with conspiracy theories. AI is accelerating social media’s assault on our shared understanding of the world.
We will also see increasing evidence of the negative impacts of AI on people’s lives. I expect further examples will surface of emotional and cognitive reliance on AI chatbots, particularly among young people. No doubt there will be more reports of what some call AI psychosis, in which chatbots engage in a folie à deux with their users, leading in extreme cases to mental breakdowns and suicide. AI business models incentivise dependence-by-design.