Geoffrey Hinton, the prominent computer scientist often called the “godfather of AI,” has quit Google and now says he worries about what AI could mean for misinformation and people’s jobs. Like many other tech luminaries, Hinton is concerned about the implications of artificial intelligence, he said in an interview with The New York Times published Monday.
Hinton said he fears that average people won’t be able to tell the difference between real and AI-generated photos, videos and text and that AI might also kill jobs, upending not just rote work or number crunching but also more advanced careers.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Taking the current AI trajectory a step further, Hinton fears that AI could generate its own computer code, become autonomous and weaponize itself. And now that AI has been unleashed, he said, there’s no way to really control or regulate it. While companies may agree to a set of terms, countries may continue developing AI tech in secret, not wanting to cede any ground.
Hinton and two of his students built a neural network, a mathematical system that can learn new skills by analyzing an existing dataset, that taught itself to identify objects in photos. Google acquired their company in 2013 for $44 million. Hinton, along with Yoshua Bengio and Yann LeCun, won the Turing Award in 2019 for their work on neural networks.
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google,” Jeff Dean, chief scientist at Google, told CNET in an emailed statement. “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”
Dean went on to say that Google was one of the first companies to publish AI principles and that it’s “continually learning to understand emerging risks while also innovating boldly.”
In an interview with Fortune, Microsoft’s chief scientist and AI expert, Eric Horvitz, said a pause on AI development wouldn’t be feasible and that development should instead be accelerated.
AI chatbots like ChatGPT took the world by storm late last year by answering just about any question with human-like responses. From poems to resumes, generative AI can return unique and novel responses each time. It upends the internet search paradigm of typing in a query and filtering through a list of website links to find an answer. Generative AI does this by training on massive datasets and stringing together the words that are statistically most likely to come next. It’s been referred to as autocorrect on steroids. While generative AI tools can make research a less laborious chore, they’re also prone to making errors.
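For readers curious what “stringing together the words most likely to come next” means in practice, here is a deliberately tiny sketch of the statistical idea. Real chatbots use neural networks trained on billions of words, not simple word counts, and the corpus below is invented for illustration; but the core task, predicting the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the massive datasets real models train on.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows which (a bigram model): a drastically
# simplified stand-in for the next-word prediction behind chatbots.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat", the most frequent follower of "the" here
```

Chaining such predictions word after word produces fluent-sounding text, which is also why these systems can confidently produce errors: they optimize for plausible continuations, not verified facts.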
Since the launch of ChatGPT, many companies have integrated AI into their products. Microsoft revamped Bing to include the same tech powering ChatGPT. Apps like Photoshop, Grammarly and WhatsApp are also embracing AI. Google responded by releasing its own AI-powered chatbot named Bard, a launch that it fumbled. And when compared to Bing and ChatGPT, Bard hasn’t impressed, though Google is reportedly working on an AI-powered search engine. AI will likely be a key topic at this month’s Google I/O, where if the company doesn’t plant its flag firmly, it could be left behind.
Microsoft is also looking to ensure responsible use of AI. On Monday it published a blog post about embedding guidelines within the company and investing in a diverse talent pool to help future development.
Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.