Is AI getting too dangerous? The “Godfather of AI” definitely thinks so!

Godfather of AI Geoffrey Hinton quits Google, warns of the dangers of AI
Geoffrey Hinton, who is also known as the “Godfather of AI”, has recently quit his decade-long job at Google and has warned about the dangers of AI. In a recent interview with the New York Times, Hinton discussed his concern about the rapid pace of AI development.

Godfather of AI: Geoffrey Hinton

Geoffrey Hinton is among the pioneers in the field of artificial intelligence, neural networks, and deep learning. He and his students were the first to build the underlying technology that powers most state-of-the-art AI tools you see today.

Everything from autonomous vehicles to chatbots like ChatGPT uses neural networks to identify patterns in data. In 2012, Dr. Hinton and two of his students, Ilya Sutskever and Alex Krizhevsky, built a neural network that could process thousands of images and identify common objects like cats, windows, and cars. Although Hinton and his team did not invent the concept of neural nets, their network was a breakthrough in the field of computer science.

His contributions to the field earned him the prestigious Turing Award, shared with Yann LeCun and Yoshua Bengio, in 2018. In case you’re unaware, the Turing Award is akin to a Nobel Prize in computer science. Google acquired the company they founded, DNNresearch Inc., in 2013 for USD 44 million, and Hinton had been working there ever since.

Why did the Godfather of AI quit Google?

So after such a long commitment to this field, why has Dr. Hinton now chosen to step back? While he clarifies that “Google has acted very responsibly” in the field of AI, the recent competition from Microsoft and OpenAI has challenged Google’s core business in a way that could force the company to stray from its original AI principles.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain”, says Dr. Hinton. He also says that a part of him regrets his life’s work. He has always had some guilt about developing a potentially dangerous technology.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”

And when asked why he continued to work on something potentially dangerous, he used to paraphrase Robert Oppenheimer, one of the key figures behind the development of the atomic bomb:

“When you see something that is technically sweet, you go ahead and do it.”

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Dr. Hinton now says that he left Google so that he could openly voice his concerns about AI without any conflict of interest.

His concerns 

Dr. Hinton’s concerns with AI are diverse. Firstly, he is worried about bad actors misusing AI. With such easy access to Large Language Models (LLMs), spreading propaganda and misinformation can become child’s play.

And unlike nuclear research, there is no telling who is developing such technology in secret. This makes “global regulation” of such tools extremely unlikely. Instead, Dr. Hinton proposes “a global collaboration among leading scientists to control the technology”.

Deepfake of President Zelensky

He is also worried about the pace at which such AI tools are developing. Considering how far the technology has come in just the last five years, it’s scary to guess what’s coming next. The Godfather of AI shares his concerns about false images, videos, and text flooding the internet and being indistinguishable from the real thing for the average Joe.

People’s job security is another risk such AI tools pose. Personal assistants, social media marketers, paralegals, translators, and voice actors might all be partly—or completely—replaced by AI in the near future. Dr. Hinton noted that such tools only take away the drudge work today, but they might take away a lot more than that soon.

Another concern he raised was unintended impacts. When neural nets learn from a large data set, they often pick up unexpected behaviors with unforeseen consequences. As such, companies should be very careful about allowing such technology to not just write its own code but execute it as well. If an AI gets smart enough, it could exploit that ability to act autonomously, perhaps even operating weapons. Our inability to properly understand AI and predict its reasoning could be devastating.

So what’s next?

When someone honored as the “Godfather of AI” calls it quits at Google, you know things are getting serious. This could well be the point of no return in the world of generative AI. As mentioned before, Google had been approaching AI with proper caution, holding back powerful and potentially harmful technology from public release.

But with ChatGPT and Microsoft’s Bing Chat posing such a big threat to Google, the company began making risky decisions to avoid being left behind. Industry mammoths with massive resources and talent releasing untested technology could have big implications for the future of human civilization itself. And how elegantly (or not) this “AI race” plays out remains to be seen.

  • Meanwhile, check out our review of Nothing Ear (2)