AI Godfather Geoffrey Hinton Urges Caution as Tech Giants Downplay AI Risks

Geoffrey Hinton voices concern over unchecked AI growth, urging more transparency and responsibility from major tech players, especially around safety.
Renowned artificial intelligence pioneer Geoffrey Hinton, often called the “Godfather of AI,” has voiced strong concerns over the rapid and unchecked evolution of artificial intelligence systems. In a recent episode of the One Decision podcast, Hinton criticized major tech companies for publicly downplaying the serious risks associated with advanced AI development.
"Many of the people in big companies, I think, are downplaying the risk publicly," Hinton stated during the conversation. He singled out Demis Hassabis, CEO of DeepMind, as one of the few leaders in the field who “really do understand the risks and really want to do something about it.”
Hinton’s remarks come at a time when AI capabilities are accelerating at an extraordinary pace. He warned that these systems are not only becoming increasingly intelligent but are also starting to learn in ways that humans can’t fully comprehend. “The rate at which they’ve started working now is way beyond what anybody expected,” he said.
In 2024, Hinton shared the Nobel Prize in Physics with John J. Hopfield for their groundbreaking work on artificial neural networks, which laid the foundation for today’s deep learning and generative AI technologies. Yet Hinton now finds himself reflecting on the unintended consequences of those advancements.
“I should have realized much sooner what the eventual dangers were going to be,” Hinton admitted. “I always thought the future was far off and I wish I had thought about safety sooner.”
Hinton spent over a decade at Google before stepping down in 2023. His resignation sparked widespread speculation that he was protesting against the company’s aggressive AI strategies. However, he set the record straight in the podcast, saying the media narrative around his departure was overblown.
“There’s a wonderful story that the media loves — this honest scientist who wanted to tell the truth so I had to leave Google. It’s a myth,” Hinton explained. “I left Google because I was 75 and I couldn’t program effectively anymore, but when I left, maybe I could talk about all these risks more freely.”
He also described the difficulty of remaining objective while employed by a tech giant. “You can’t take their money and then not be influenced by what’s in their own interest,” he said.
In the same discussion, Hinton praised Demis Hassabis, co-founder of DeepMind, which Google acquired in 2014, for his commitment to AI safety. Hassabis, who now leads Google DeepMind, has frequently raised alarms about the misuse of powerful AI tools.
In an earlier interview with CNN, Hassabis expressed his own worries about AI, centered not on job displacement but on potential misuse by malicious actors. “A bad actor could repurpose those same technologies for a harmful end,” Hassabis said. “And so one big thing is how do we restrict access to these systems, powerful systems, to bad actors but enable good actors to do many, many amazing things with it?”
As the AI landscape continues to evolve, Hinton’s message is clear: greater transparency, responsibility, and proactive safety measures are critical to ensuring AI serves humanity without unintended consequences.