Robert Hannigan, the former head of GCHQ, has claimed that there is very little evidence of artificial intelligence (AI) being used in cyber crime or terrorism.
Hannigan was speaking at an event hosted by the London Office for Rapid Cybersecurity Advancement (LORCA), where he delivered a keynote on the so-called ‘myths’ and ‘buzzwords’ around AI in cyber security.
In his view, while AI has transformed many aspects of modern life, it has yet to prove all that useful to state-sponsored hackers. He suggested there were not enough benefits to outweigh the “issues” of investing in the technology for malicious purposes.
“The cyber industry is great at scare stories, and I’ve read lots and lots of scare stories about criminal groups and even terrorists using AI, and to be honest, I’ve seen almost no evidence for this at all, with a few exceptions,” Hannigan said. “I would say that I think it’s again a confusion with automation.”
He added that AI would likely form part of a hacker’s arsenal in the near future, but right now it simply presented too much “risk”. As an example, he cited the SolarWinds hack, which he said was sophisticated but also appeared to be “hand-curated”.
“You can understand why the attackers might have wanted to do that, in order to hide themselves,” Hannigan said. “And doing it at that scale, and going to the trouble of doing it through AI, would probably be high risk for them.”
From there the topic of AI in cyber security flipped, with Hannigan expressing concerns about the security of AI itself. He said the issue was “high on everyone’s list” because technologies such as driverless cars and automated medical diagnostics were fast becoming the norm.
“The data is a huge vulnerability, and there have been lots of studies on so-called data poisoning, adversarial models, which basically say, we can trick the machine into misdiagnosing, for example, an MIT study on chest X-rays,” he said.
“And if you have a malicious actor, or even an accidental actor, it is perfectly possible to see how data poisoning or incorrectly classified data can lead the machine to do something completely wrong with potentially very serious consequences.”
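To make the data-poisoning idea Hannigan describes concrete, the toy Python sketch below (not from his talk, and deliberately simplified) trains a nearest-centroid classifier on two clean clusters, then shows how an attacker who injects mislabelled points into the training set can drag a class centroid into the other class’s territory and wreck accuracy. All coordinates, sizes, and the classifier choice are illustrative assumptions:

```python
import random

random.seed(0)

def make_data(n):
    # Two well-separated 2-D clusters: class 0 near (0, 0), class 1 near (5, 5)
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        c = 5.0 * label
        data.append(((c + random.gauss(0, 1), c + random.gauss(0, 1)), label))
    return data

def fit_centroids(train):
    # Toy "model": the mean point of each class (nearest-centroid classifier)
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in train:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {c: (sx / k, sy / k) for c, (sx, sy, k) in sums.items()}

def accuracy(model, test):
    def predict(p):
        # Assign each point to the nearest class centroid
        return min(model, key=lambda c: (p[0] - model[c][0]) ** 2
                                        + (p[1] - model[c][1]) ** 2)
    return sum(predict(p) == label for p, label in test) / len(test)

train, test = make_data(400), make_data(200)
clean_acc = accuracy(fit_centroids(train), test)

# Poisoning: the attacker slips mislabelled points into the training data,
# dragging the class-0 centroid deep into class-1 territory
poisoned = train + [((10.0, 10.0), 0)] * 300
poison_acc = accuracy(fit_centroids(poisoned), test)

print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poison_acc:.2f}")
```

On clean data the two centroids sit near their clusters and accuracy is close to perfect; after poisoning, most class-0 test points end up closer to the wrong centroid, which is the kind of silent, training-time failure the quote warns about.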