Speaking at Black Hat Europe 2018 in London, Vijay Thaware, security response lead at Symantec, and Niranjan Agnihotri, associate threat analysis engineer at Symantec, explored the rise of a threat called ‘Deep Fakes.’
According to the speakers, a Deep Fake is the theft of a person’s face (a crucial marker of identity) for malicious gain in the form of fabricated video. Deep Fake technology uses AI-based human image synthesis to create such things as revenge porn, fake celebrity footage or even cyber-propaganda.
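The speakers did not detail the underlying models, but the approach most commonly associated with Deep Fake face swapping is an autoencoder with a shared encoder and one decoder per identity, so a face encoded from person A can be rendered as person B. The sketch below, in PyTorch, is purely illustrative; the layer sizes and names are assumptions, not taken from the talk:

```python
# Illustrative sketch of the shared-encoder / two-decoder autoencoder commonly
# used for Deep Fake face swapping (details are assumptions, not from the talk).
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # One encoder shared by both identities learns a common face representation.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )
        # One decoder per identity learns to reconstruct that person's face.
        def make_decoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(latent_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
                nn.Unflatten(1, (3, 64, 64)),
            )
        self.decoder_a = make_decoder()
        self.decoder_b = make_decoder()

    def forward(self, faces: torch.Tensor, identity: str) -> torch.Tensor:
        latent = self.encoder(faces)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(latent)

# Training reconstructs each identity with its own decoder; at swap time a face
# of identity A is passed through decoder_b to render it as identity B.
model = FaceSwapAutoencoder()
fake_b = model(torch.rand(1, 3, 64, 64), identity="b")
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```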
“This is happening now,” said Thaware. “As AI has progressed, if it has been trained well with sufficient sources, it can create seemingly real fake and deceptive videos.”
Deep Fakes can be created to target people of interest and importance, but also the general public, he added.
“The videos are created in such a manner that they can fool the human eye and a human can easily get tricked and believe what they see, or believe what the threat actor wants them to believe,” Thaware said.
The various hazards of Deep Fakes include:
- Cyber-propaganda
- Fake news = Deep Fake news
- Trust issues
- Disinformation campaigns
- Emotional stress
- The potential to become ubiquitous
- Morality vs legality issues
Preparing a Deep Fake video simply requires a laptop, an internet connection, some passion and patience and, of course, some images (often easily obtained online), Thaware explained.
With Deep Fakes so prevalent on social media, “it becomes very important to develop systems that monitor the quality of content on social media,” said Agnihotri.
“The presence of Deep Fake videos on the internet can skyrocket at any time, therefore we need to make technology that has the ability to scan the content that we upload on social media, and flag it or block it,” he added.
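Agnihotri did not describe a concrete implementation, but such scanning could sit as a moderation step in a platform’s upload pipeline. Below is a minimal sketch, assuming a hypothetical pre-trained Deep Fake classifier (score_video) and illustrative, platform-chosen thresholds:

```python
# Hypothetical sketch of an upload-time scan-and-flag step, in the spirit of
# Agnihotri's call for technology that scans uploaded content. The classifier
# itself (score_video) is assumed, not a real library: it returns the
# probability that a video is a Deep Fake.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationDecision:
    verdict: str   # "allow", "flag" or "block"
    score: float   # classifier's Deep Fake probability, 0.0-1.0

def moderate_upload(
    video_path: str,
    score_video: Callable[[str], float],   # assumed Deep Fake classifier
    flag_threshold: float = 0.6,           # illustrative, platform-chosen values
    block_threshold: float = 0.9,
) -> ModerationDecision:
    """Scan an uploaded video and decide whether to allow, flag or block it."""
    score = score_video(video_path)
    if score >= block_threshold:
        return ModerationDecision("block", score)
    if score >= flag_threshold:
        return ModerationDecision("flag", score)   # e.g. attach a warning label
    return ModerationDecision("allow", score)

# Example with a stand-in classifier (a real system would use a trained model):
if __name__ == "__main__":
    decision = moderate_upload("clip.mp4", score_video=lambda path: 0.72)
    print(decision)   # ModerationDecision(verdict='flag', score=0.72)
```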
To conclude, Thaware’s advice for curbing the rise of the Deep Fake threat was:
- The use of watermarking on video content (a simple illustration follows this list)
- Users should think before they forward videos
- Users should assess the credibility of a source
- Lawmakers must introduce robust laws against Deep Fakes
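On the watermarking point, no specific scheme was named in the talk. One simple way to illustrate the idea is a least-significant-bit watermark embedded into video frames; the sketch below is purely illustrative (function names and payload are assumptions) and would not survive re-encoding, which real provenance watermarks must:

```python
# Illustrative (not production-grade) least-significant-bit watermark for a
# single video frame, to show the general idea behind watermarking content.
import numpy as np

def embed_watermark(frame: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary watermark in the least significant bit of the blue channel."""
    assert frame.shape[:2] == mark.shape and mark.dtype == np.uint8
    marked = frame.copy()
    marked[..., 0] = (marked[..., 0] & 0xFE) | mark
    return marked

def extract_watermark(frame: np.ndarray) -> np.ndarray:
    """Recover the binary watermark from the blue channel's LSB."""
    return frame[..., 0] & 0x01

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
    recovered = extract_watermark(embed_watermark(frame, mark))
    print(bool(np.array_equal(recovered, mark)))   # True
```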
Both speakers also urged the security community to come forward and raise awareness that Deep Fakes exist, and to do all it can to put as many hurdles as possible in the way of malicious actors creating Deep Fake videos.