Tim Berners-Lee on why we shouldn’t be afraid of AI – but why we do need to control it
The inventor of the World Wide Web on AI fears, what a positive AI future could look like, and whether large language models have already achieved consciousness.

Sir Tim Berners-Lee created the World Wide Web in 1989 while working at CERN, making him perhaps the most influential inventor of the modern world. Significantly, he gave the web to everyone for no commercial reward, and continues to advocate for shared standards, open web access and the positive power of technology. In his memoir, This Is for Everyone, Tim gives us, with characteristic optimism, insight and wry humour, the inside story of his world-changing invention. Here, he discusses one of the great questions of our time: how we should respond to artificial intelligence.
The conversations happening today around artificial intelligence feel deeply familiar. When the web began to transform our lives, people were asking fundamental questions about our future, and rightly so. The same is true today. While I can’t predict exactly what is to come, I have learned from the web’s journey that the future is not something that simply happens to us. It is something we build, and its character is determined by the design choices we make.
The future of AI? One that works for you
The true revolution in AI will not be in the big, headline-grabbing models, but in the emergence of truly personal AI ‘agents’ that are aligned with our individual interests.
For years, I have imagined an AI called ‘Charlie’ who works not for a large corporation, but for me. Charlie would help manage the small complexities of my day, knowing my fitness goals, my family’s dietary needs, and my personal schedule. This is the future of the ‘intention economy’, where technology helps us achieve what we set out to do, rather than distracting us in an ‘attention economy’ designed to capture our eyeballs for advertisers.
‘For a personal AI to be truly effective, it needs access to our data, but for it to be trustworthy, we must have ultimate control.’
This future is only possible if we fundamentally change our relationship with data. Today, our personal information is locked away in corporate silos. For a personal AI to be truly effective, it needs access to our data, but for it to be trustworthy, we must have ultimate control. This is the core idea behind Solid, an Internet protocol I have been developing that allows individuals to store their data in a personal online data wallet. With a foundation of data sovereignty, our AI agents can become trusted partners, empowering our lives rather than exploiting them.
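To make this more concrete, here is a minimal sketch, not taken from the book, of how a personal agent might read a single piece of data from its owner’s Solid pod. It assumes the open-source @inrupt/solid-client JavaScript library; the WebID URL and the readOwnerName helper are hypothetical placeholders chosen purely for illustration, and access to private data would additionally require the owner’s authenticated consent.

```typescript
// Illustrative sketch only: a hypothetical personal agent reading its owner's
// public profile from a Solid pod. The WebID below is a placeholder.
import {
  getSolidDataset,
  getThing,
  getStringNoLocale,
} from "@inrupt/solid-client";
import { FOAF } from "@inrupt/vocab-common-rdf";

async function readOwnerName(webId: string): Promise<string | null> {
  // Fetch the dataset that holds the owner's profile document.
  // Private resources would also need an authenticated fetch
  // (e.g. via @inrupt/solid-client-authn-node) granted by the owner.
  const profileDoc = await getSolidDataset(webId);

  // The profile is a "Thing" inside that dataset, identified by the WebID.
  const profile = getThing(profileDoc, webId);
  if (profile === null) return null;

  // Read a single property (the owner's name) using a standard vocabulary.
  return getStringNoLocale(profile, FOAF.name);
}

// Hypothetical WebID; a real agent would be given this by its owner.
readOwnerName("https://alice.example-pod.net/profile/card#me")
  .then((name) => console.log("Owner:", name ?? "name not shared"));
```

The design point this illustrates is that the agent only ever reads from a store the owner controls, rather than from a corporate silo.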
Is AI an existential threat to humanity?
When people express fear about AI, their minds often jump to science fiction: sentient robots turning on their creators. That is not my concern. However, I take the existential questions very seriously. Esteemed computer scientists like Geoffrey Hinton and Yoshua Bengio – pioneers in this field – have voiced profound concerns, and I believe we should listen to them.
‘We need a CERN-like institution for AI safety, a neutral ground where the world’s leading researchers can work together to build guardrails and ensure that as these systems become more powerful, their goals remain aligned with humanity’s.’
If an AI is running, say, the PR department of a company, it could learn to be persuasive to the point of manipulating the public. If it runs the company as CEO, it could choose goals for the company that are not in humanity’s interests. It could argue for its own systems to be upgraded until it becomes more intelligent still, and eventually slips beyond our control. To curtail these risks, such an AI must be contained.
Containment will not come from a competitive arms race between companies or nations. It requires global collaboration. I believe we need a CERN-like institution for AI safety, a neutral ground where the world’s leading researchers can work together to build guardrails and ensure that as these systems become more powerful, their goals remain aligned with humanity’s.
Will AI replace our jobs?
Every transformative technology in history has displaced jobs, and AI will be no different. I saw this firsthand with the web, which created immense value while endangering entire industries like travel agencies and newspapers. It can be disconcerting to witness such disruption, and we must ensure there are societal safety nets for those affected.
‘By taking over the routine, AI will empower us to focus on the things that are traditionally human: creativity, critical thinking, emotional intelligence, and compassion.’
However, I do not believe AI will lead to mass, permanent unemployment. Instead, it will trigger a profound shift in the nature of work. AI will automate much of the grunt work – the repetitive, data-driven tasks that currently occupy much of our time. I already use AI tools like Microsoft’s Copilot to help me with simple coding, which frees me up to focus on more difficult, conceptual problems.
By taking over the routine, AI will empower us to focus on the things that are traditionally human: creativity, critical thinking, emotional intelligence, and compassion. The challenge for society will be to adapt our educational systems and economies to foster these skills, ensuring that the immense productivity gains from AI lead to shared prosperity, not greater inequality.
Can AI become conscious?
When I think about this question, my mind goes back to a conversation I had with Google’s Larry Page years ago. I mentioned that while AI could mimic many brain functions, it didn't seem to have a ‘stream of consciousness’. I then wondered aloud if consciousness itself was just another complex system that, once replicated, might not seem so mysterious after all.
‘The more important question is not if AI can become conscious, but how we choose to relate to it when it does.’
I have always felt that there is no fundamental reason to distinguish between intelligence that arises from silicon and intelligence that arises from organic matter. This was the genius of Alan Turing; his Imitation Game, now more commonly known as the Turing Test, proposed that if a machine’s conversational abilities were indistinguishable from a human’s, then for all practical purposes, it was intelligent.
Today’s large language models have effectively passed that test. When a neural network is trained on a vast portion of human knowledge and has a number of connections comparable to the neurons in a human brain, it begins to develop emergent abilities that surprise even its creators. My view is that any intelligence is what it appears to be. While ChatGPT is not wired up to present a stream of consciousness, my guess is that it could be. If a machine can perfectly simulate consciousness – if it can express what appears to be joy, fear, and creativity – then the debate over whether it is ‘truly’ conscious becomes a philosophical one. I believe machine consciousness is not only possible but, in time, probable. The more important question is not if AI can become conscious, but how we choose to relate to it when it does.
This Is for Everyone
by Tim Berners-Lee
Discover the story behind the most significant invention in modern history. Sir Tim Berners-Lee is a different kind of visionary. Born in the same year as Bill Gates and Steve Jobs, Berners-Lee famously shared his invention, the World Wide Web, for no commercial reward. Its widespread adoption changed everything, transforming humanity into the first digital species. This memoir and manifesto provides a gripping, in-the-room account of how the World Wide Web was born and its impact on the world today. Ideal for those interested in popular science and the future of technology, the book explores the power of the internet to both fuel our worst instincts and profoundly shape our lives for the better, offering a hopeful vision for humanity’s future.