TORONTO - Artificial intelligence pioneer Geoffrey Hinton has become synonymous with doomsday-like warnings that predict the technology could pose an existential threat, but on Wednesday, he put the spotlight on another risk he said is even more pressing for humanity.
"The existential threat is the one I've talked about the most, but that's not the most urgent part," he told a standing-room only audience at the Collision tech conference in Toronto.
"I think surveillance is something to worry about. AI is going to be very good at surveillance."
It's of particular concern to the British-ºÚÁϳԹÏÍø computer scientist because he believes the technology could help authoritarian regimes stay in power. In some instances, he thinks there will be few protections against these regimes, whose power even supreme courts sometimes cannot check.
AI's ability to spy on humanity's every move has long sat on Hinton's lengthy list of worries. He has rattled them off at appearances over the past year designed to get the world thinking more cautiously about AI and considering what guardrails the technology needs as it explodes into use at businesses and beyond.
The worries he named Wednesday include the rise of lethal autonomous weapons, which he said are coming soon, along with fake videos, corrupted elections, cybercrime and job losses that could increase the gap between the rich and poor.
There's also an "alignment problem" because humanity can't always agree on what is good and that could have repercussions when powerful technology is in our hands.
"Some people think it's good to drop 2,000-pound bombs on children and other people don't," he said.
"They've both got their reasons, but you can't align with both of them."
But the risk Hinton has bandied about that has generated the most attention is his sci-fi-like prediction of battle robots and of AI putting humanity's very existence in peril.
The prognostications have divided the tech community, with some saying an existential crisis is a far-off possibility and others arguing it won't materialize at all because humans will always be able to pull the plug on AI.
Hinton left his job at Google, which in 2013 bought a neural network business he co-founded with two of his students, just as worries about AI were swirling in his mind.
"I left Google because I was 75. I wanted a break and to watch a lot of Netflix," he quipped.
"But also as I left Google, I figured I could just warn ... that in the long run, these things could get smarter than us and might go rogue. That's not science fiction like Aidan Gomez thinks. That's real."
Gomez is the co-founder of Cohere, a Toronto-based enterprise AI company Hinton has backed. He told the Collision audience on Tuesday that he doesn't believe the technology will exceed human capabilities any time soon, and that if it does, he's skeptical any sci-fi-like scenarios will arise.
Asked how the world can counter AI's problems, Hinton conceded "for most things, I have no idea what they should do."
But on the matter of existential threat, he called on governments to conduct large safety experiments because they're "the only thing powerful enough to make the big companies invest significant amounts of money."
When it comes to fake videos and attempts at skewing elections, he also has an idea meant to "build up resistance" to AI-generated material spreading falsehoods. He said he recently shared the idea with billionaire philanthropists who solicited his advice.
"My suggestion was pay for a lot of advertisements where you have a very convincing fake video, and then right at the end of it say, 'This was a fake video,'" he said.
Hinton’s talk was the most anticipated at Collision, where organizers expected 37,832 people and a record number of female founders to take in talks over three days.
Hinton’s 20-minute appearance was slimmed down from the roughly hour-long interview he gave at the event the year before, but both talks were equally hyped.
His views carry considerable weight in the tech world because he won the A.M. Turing Award, known as the Nobel Prize of computing, in 2018 with Yoshua Bengio and Yann LeCun, who disagrees with Hinton's existential threat theory.
This report by ºÚÁϳԹÏÍø was first published June 19, 2024.