AI will get better than humans at cyber offence by 2030: Hinton Lectures speaker

Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and statistics at UC Berkeley in California, is seen speaking in Toronto at an event hosted by the Global Risk Institute in a Monday, Oct. 28, 2024, handout photo. THE CANADIAN PRESS/HO-GRI, Elaine Fancy, *MANDATORY CREDIT*

TORONTO - Artificial intelligence will be able to beat humans at cyber offence by the end of the decade, predicted the keynote speaker at a series of lectures hosted by computer science luminary Geoffrey Hinton this week.

Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and statistics at UC Berkeley in California, made that projection Tuesday, saying it was based around his belief that AI systems will eventually become "superhuman" when tasked with coding and finding exploits.

Exploits are weak points in software and hardware that people can abuse. Cyber criminals often covet these exploits because they can be used to gain unauthorized access to systems.

Once a criminal has access through an exploit, they can run a ransomware attack where they encrypt sensitive data or block administrators from getting into software, in hopes of extracting cash from victims.

To find exploits, Steinhardt said, a human would have to read all the code underpinning a system before they could carry out an attack.

"This is really boring," Steinhardt said. "Most people just don't have the patience to do it, but AI systems don't get bored."

Not only will AI undertake the drudgery associated with finding an exploit, but it will also be meticulous with the task, Steinhardt said.

Steinhardt's remarks come as cybercrime has been increasing.

A 2023 study from EY Canada of 60 ºÚÁϳԹÏÍø organizations found that four out of five had seen at least 25 cybersecurity incidents in the past year and experts say some companies face thousands of attempts every day.

Many have hailed AI as a potential solution because it can be used to quickly identify attackers and gather information on them, but Steinhardt said it is just as likely to be used by people with nefarious intentions.

Already, he said the world has seen instances where bad actors have harnessed the technology to create deep fakes -- digitally manipulated images, videos or audio clips depicting people saying or doing things they have not said or done.

In some instances, bad actors have used deep fakes to call people while posing as a loved one who urgently needs money.

Businesses have been victims, too.

Earlier this year, media reported that a worker at Arup, the British engineering company behind prominent buildings including the Sydney Opera House, had been duped into handing over US$25 million to fraudsters who used deep fake technology to pose as the company's chief financial officer.

"I've been trained to watch out for scams and phishing emails and I think I would have confirmed before sending $25 million over but I'm not sure," Steinhardt said, explaining how new this phenomenon is and how realistic the fakes seem.

"This is not something that we're used to and this isn't the only use of digital impersonation to create problems."

Steinhardt's talk concluded the Hinton Lectures, a two-evening series of talks put on by the Global Risk Institute at the John W. H. Bassett Theatre in Toronto.

The event's namesake, Geoffrey Hinton, who is widely known as the godfather of AI, introduced Steinhardt earlier in the evening, describing the professor as the most popular choice to debut the lecture series.

The evening before, Steinhardt had told the audience he sees himself as a "worried optimist" who believes there's a 10 per cent chance AI will lead to human extinction and a 50 per cent chance it will create immense economic value and "radical prosperity."

This report by ºÚÁϳԹÏÍø was first published Oct. 29, 2024.

ºÚÁϳԹÏÍø. All rights reserved.
