McAfee shows how deepfakes can circumvent cybersecurity
You can no longer believe what you see. Deepfakes, which use artificial intelligence to make people appear to say and do things in videos that they haven’t said or done, have been growing more realistic at an alarming rate. And it’s only a matter of time before they’re used to try to circumvent cybersecurity.
Steve Grobman, chief technology officer at cybersecurity firm McAfee, and Celeste Fralick, chief data scientist, warned in a keynote speech at the RSA security conference in San Francisco that the tech has reached the point where you can barely tell with the naked eye whether a video is fake or real. They showed a video in which Fralick’s words came out of Grobman’s mouth, even though Grobman never said them.
“I used freely available, recorded public comments by you to create and train a machine learning model that let me develop a deepfake video with my words coming out of your mouth,” Fralick said. “It just shows one way that AI and machine learning can be used to create massive chaos. It makes me think of all sorts of other ways in the social engineering realm that AI could be used by attackers, things like social engineering and phishing, where adversaries can now create automated targeted content.”
That helps adversaries create targeted “spear phishing” attacks — which are personalized and more successful — with the scale of automated attacks.
“In fact, most people don’t realize how fragile AI and machine learning can really be,” Fralick said. “There’s an entire technical area that my team is involved in called adversarial machine learning. We study ways that adversaries can evade or poison machine learning classifiers.”
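Fralick didn’t detail her team’s techniques onstage, but one well-known form of poisoning is simple label flipping: an attacker corrupts a slice of the training data so the model learns the wrong decision boundary. The sketch below is purely illustrative and assumes scikit-learn and a synthetic dataset, neither of which the keynote referenced; it shows how flipping 20 percent of training labels drags down a classifier’s accuracy.

```python
# Illustrative sketch only -- not McAfee's research code. Shows
# "label flipping," a simple training-data poisoning attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on uncorrupted labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poison the training set: flip 20% of the labels at random.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

# Retrain on the poisoned labels and measure the damage.
dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

A real attacker would flip labels selectively rather than at random, but even this crude version illustrates the fragility Fralick describes: the model faithfully learns whatever its training data tells it.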
In an interview on Monday evening, Grobman said that he and Fralick were able to create the deepfake video in a weekend, without trying very hard to make it perfect.
“One of the points we want to make is how easy it is if you have the objective of creating a completely fabricated video,” he said.
One way to trick people and AI alike is to take a photo that is mostly real and change a very small part of it in a way that would be imperceptible to humans. Fralick showed an example where a photo of penguins could be interpreted by AI as a frying pan, thanks to a small manipulation.
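The keynote didn’t explain how the penguin image was manipulated, but the effect matches a classic evasion technique, the fast gradient sign method (FGSM): perturb every input feature by a tiny amount in the direction that most increases the model’s loss. Here is a minimal sketch on a toy linear classifier rather than a real image model; the “penguin” and “frying pan” labels and the 100-feature input are stand-ins, not McAfee’s demo.

```python
# Hedged sketch of an adversarial example in the spirit of the
# penguins-to-frying-pan demo: a small change to the input flips
# a confident prediction. Toy linear model, not a real image net.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)            # weights of a toy linear classifier
x = rng.normal(size=100)            # a legitimate input (the "penguin photo")
x += w * (2.0 - w @ x) / (w @ w)    # adjust x so the model is confident: w @ x == 2

def penguin_probability(v):
    """Sigmoid score; above 0.5 means the model says 'penguin'."""
    return 1 / (1 + np.exp(-(w @ v)))

print("original score: ", penguin_probability(x))      # ~0.88: "penguin"

# FGSM-style step: move each feature by a small epsilon in the
# direction that most increases the loss. For a linear model the
# gradient of the logit w.r.t. the input is w, so subtract eps * sign(w).
eps = 0.05                          # small relative to feature scale (std ~1)
x_adv = x - eps * np.sign(w)

print("perturbed score:", penguin_probability(x_adv))  # below 0.5: "frying pan"
print("largest per-feature change:", np.abs(x_adv - x).max())  # == eps
```

Because no feature moves by more than eps, the perturbed input looks essentially identical to a human, yet the model’s confident “penguin” score collapses. That is exactly the property Grobman and Fralick warn could be turned against models used in cyber defense.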
“One of the questions that I think is in the mind of the folks in the audience is: Could the same techniques that are being used to confuse an image classifier be used to confuse our cyber models?” Grobman asked. And Fralick answered in the affirmative.
“False positives can have catastrophic results,” Grobman said. “The quintessential example of this occurred during 23 tense minutes on September 26, 1983. It’s the height of the Cold War. Several international incidents have both the United States and the Soviet Union on heightened alert. Against this backdrop nearly four decades ago, the Soviets detected five U.S. missiles launched against them. But with sirens blaring and screens flashing, Lt. Col. Stanislav Petrov decides to report the incident as a malfunction.”
Petrov reasoned that the U.S. would not start a world war by launching just five missiles. He ignored his training manuals, which called for escalation, and his intuition was correct, preventing a nuclear war. It turned out the root cause was a rare alignment of sunlight on high-altitude clouds that produced a rocket-flare-like effect. Grobman said it’s important to recognize the power of AI to solve problems, but also the capabilities it gives to cybersecurity’s adversaries.
Grobman said he believes the technology itself is neutral, and it can be applied for good or ill. For instance, a crime map with data about where crimes happen and where arrests are made could be used to help understand crime, or it could be used by criminals to avoid arrest. In another example, airplanes were used to bomb civilians in World War II, resulting in 2 million casualties. Asked in 1948 if he regretted inventing the airplane, Orville Wright said he viewed it the same as fire, which can cause tremendous damage yet serves thousands of important uses.
“Orville’s insight is that technology doesn’t comprehend morality,” Grobman said. “Something that our industry, cybersecurity, constantly struggles with.”