Speaking to CNN, Hassabis warned that the most dangerous outcome of artificial intelligence isn’t economic disruption—it’s existential. While headlines scream about the professions most at risk, from content writers to legal clerks, Hassabis is focused on something far more apocalyptic: the weaponisation of AI by malicious actors.
The Countdown to AGI—and the Countdown to Control
The AI pioneer’s fears are rooted in a rapidly approaching milestone: artificial general intelligence, or AGI—the moment when machines don’t just mimic isolated human abilities but match or surpass our collective intellect. Hassabis believes AGI could arrive within the decade.
And that’s where the danger intensifies. “A bad actor could repurpose the same technologies for a harmful end,” he stated. The concern is no longer speculative—it’s immediate. Hackers already use AI to create advanced phishing scams, write dangerous code, and produce nonconsensual deepfake content. What happens when this technology becomes exponentially more powerful?
The challenge, Hassabis stresses, lies in how we can “restrict access to these powerful systems” for bad actors while still enabling good actors to innovate and improve society. It’s a question with no easy answer—and with consequences too grim to ignore.
Warning Signs Are Already Here
This isn’t fear-mongering; it’s a reflection of patterns already visible. AI scams are defrauding people out of their life savings. Sophisticated black-market code is being generated by machine learning models. And sexually explicit deepfakes, often targeting women, are proliferating online. The future Hassabis is warning about isn’t science fiction—it’s a few viral lines of code away. And he’s not alone in his concern. Other major AI leaders, including those backed by tech giants like Jeff Bezos, have similarly voiced fears that AI development is outpacing human control. Once AGI becomes a reality, some experts fear, it could be too late to rein it in.
Can Governments Act Fast Enough?
Even as regulatory frameworks begin to take shape, Hassabis casts doubt on whether global governance can evolve quickly enough to meet the AI moment. “Obviously, it’s looking difficult at present day with the geopolitics as it is,” he admitted. Still, he hopes that as AI becomes more sophisticated—and its risks more obvious—there will be global consensus on the need for tighter control.
But that hope may come with a ticking clock. One former DeepMind insider has gone further, suggesting that if regulation and oversight don’t catch up, AI could become the very force that unravels human civilization.
As AI barrels forward at stunning speed, the world is caught between marvel and menace. While universal basic income and economic restructuring remain topics of debate, Hassabis is urging us to focus on the shadow looming behind the technology itself: not just what AI does, but who controls it.
Because in the hands of the wrong person, the smartest machine in the world could become humanity’s worst nightmare.