By Joey Ricard - March 9, 2021
Have you come across the public service announcement video in which Barack Obama appears to curse and call Donald Trump, now the ex-president of the United States, names? Did you fall for it?
Or how about Tom Cruise's presidential campaign? We bet you've definitely seen that!
Or did you see the video of Facebook founder Mark Zuckerberg where he reveals how the social networking platform ‘owns’ its users and uses the data to manipulate them? Was it not deeply convincing?
Well, none of these videos is real. They are all 'deepfake' videos, and they are deeply fake indeed.
Deepfake, or face-stitching, technology lets people digitally insert someone into scenes they were never really in. From casting Nicolas Cage as Superman's love interest Lois Lane, to making Obama say things he would never say, to inserting famous Hollywood actresses such as Scarlett Johansson and Gal Gadot into porn videos, AI-enabled deepfake videos have become a serious concern.
And now the very technology used to make deepfake videos is being tried as a solution to end the horrors of deepfakes and the catastrophic consequences they might lead to.
Image Source: Techspot
Deepfakes are simply fake or falsified videos. To be more specific they are synthetic media where a person in an existing video or image is replaced or swapped with someone else’s likeness.
The term 'deepfake' is a blend of 'deep learning' and 'fake'. Deepfakes are so named because they are created using deep learning, a subset of AI in which neural networks are trained on massive data sets to learn how to generate a convincing fake.
Savvy news consumers have become accustomed to seeing images manipulated with photo-editing software, and over the last few decades people have learned that not everything they see on the internet can be believed. But when it comes to deepfake videos, that's a completely different story!
Deepfakes aren't just low-quality fake videos in which one person's face is crudely pasted over another's and can be identified as fakes at a glance. With the huge technological advances brought by generative adversarial networks (GANs), the synthesis engine has learned to create forgeries that are genuinely hard to detect.
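The adversarial idea behind GANs can be pictured with a toy sketch: a tiny "generator" learns to mimic a one-dimensional "real data" distribution while a "discriminator" learns to tell real samples from fake ones, each improving by playing against the other. This is purely our own illustration with made-up numbers, not a deepfake pipeline; real GANs operate on images with deep networks.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(t):
    # Clip to keep np.exp well-behaved for extreme inputs
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

# "Real data": samples centred on 4.0
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: x = mu + sigma * z with learnable mu, sigma
mu, sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(a * x + b) with learnable a, b
a, b = 0.1, 0.0

lr, batch = 0.05, 128
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_real, x_fake = real_batch(batch), mu + sigma * z

    # Discriminator ascent on log D(real) + log(1 - D(fake))
    s_r = sigmoid(a * x_real + b)
    s_f = sigmoid(a * x_fake + b)
    a += lr * np.mean((1 - s_r) * x_real - s_f * x_fake)
    b += lr * np.mean((1 - s_r) - s_f)

    # Generator ascent on log D(fake): learn to fool the discriminator
    s_f = sigmoid(a * x_fake + b)
    mu += lr * np.mean((1 - s_f) * a)
    sigma += lr * np.mean((1 - s_f) * a * z)

print(f"generator mean after training: {mu:.2f} (real data mean: 4.0)")
```

After training, the generator's output distribution hovers around the real data's mean: neither player "wins", which is exactly the equilibrium that makes GAN forgeries so convincing.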
Image Source: Daily Sabah
You must have come across many memes or videos made with deepfake technology that are entertaining and have made you laugh. There are even instances where people applauded deepfake videos and found them amusing, such as the remastering of Princess Leia from Rogue One: A Star Wars Story or the deepfake video of Tom Cruise playing Marvel's Iron Man.
But unfortunately, the use of deepfakes didn’t stay limited to creating only fun and amusing content. They garnered worldwide attention for their negative use such as creating pornographic videos, fake news, revenge porn, financial fraud, and hoaxes.
Yes, just as deepfakes make it possible to create pornography featuring a person without their consent, they also make it possible to create political videos in which leaders are seen saying things they never really said. Take, for example, the deepfake video of then-US President Donald Trump insulting Belgium.
While men are mostly inserted into deepfakes as a joke (Nicolas Cage's face superimposed onto Donald Trump's, the 'Better Call Trump: Money Laundering' parody, and so on), deepfake videos of women tend to be pornographic.
Anita Sarkeesian, an American-Canadian media critic, was harassed online for her feminist critiques of video games and pop culture and eventually inserted into a deepfake porn video on Pornhub, an adult video site.
So there is no doubt that, unless they are restrained and the necessary detection and prevention measures are taken, deepfakes can be used to threaten reputations, cybersecurity, corporate finances, political elections, and individuals.
GIF Source: https://venturebeat.com/
Deepfakes are a product of AI. They leverage powerful AI and machine-learning techniques to generate and manipulate visual as well as audio content in order to deceive. And to fight this deceptive AI-based technology, it seems only AI can come to the rescue.
Wondering how AI can detect the deepfakes that the human eye cannot? Then check out the most recent attempts to combat deepfakes with AI.
Image Source: ResearchGate
A group of researchers from the University at Albany, SUNY, has found that unnatural eye movements and odd blinking speeds or patterns in a video's subject can be used, with the help of AI, to detect deepfake videos.
Because GANs use still images as source material, natural blinking speeds and patterns are not replicated well in the generated deepfake videos. It is this flaw that computer scientist Siwei Lyu and his team of researchers succeeded in identifying with AI networks.
However, only a few months after this research was released, new deepfake videos began to emerge in which this detection issue had been addressed. Analysis of blink patterns and blinking rates can still help detect deepfake videos, but its accuracy is decreasing.
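To make the blink idea concrete, here is a minimal sketch of our own (not the Albany team's actual system) that flags a clip whose subject blinks far less often than the human baseline of roughly 15-20 spontaneous blinks per minute. It assumes a face-landmark detector has already produced one eye-aspect-ratio (EAR) value per frame; the EAR drops sharply while the eye is closed. The threshold values are illustrative assumptions.

```python
def count_blinks(ear_per_frame, closed_thresh=0.21, min_closed_frames=2):
    """Count runs of at least `min_closed_frames` consecutive frames with
    EAR below `closed_thresh`; each such run is one blink."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    return blinks

def blink_rate_per_minute(ear_per_frame, fps):
    seconds = len(ear_per_frame) / fps
    return count_blinks(ear_per_frame) * 60.0 / seconds

def suspiciously_low_blink_rate(ear_per_frame, fps, min_rate=8.0):
    """Humans blink roughly 15-20 times a minute; far less is a red flag."""
    return blink_rate_per_minute(ear_per_frame, fps) < min_rate

# 10 seconds at 30 fps with three 3-frame blinks: about 18 blinks/minute
ear = [0.30] * 300
for start in (50, 150, 250):
    ear[start:start + 3] = [0.08] * 3
print(suspiciously_low_blink_rate(ear, fps=30))   # → False (looks natural)
print(suspiciously_low_blink_rate([0.30] * 300, fps=30))  # → True (never blinks)
```

As the article notes, this cue alone is no longer decisive: newer deepfakes synthesize plausible blinks, so a real detector would combine it with other signals.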
Image Source: Papers With Code
Because of present constraints on processing resources, the still images used to make a deepfake video must be processed at a common fixed resolution. However, as the camera zooms in and out or shifts, the size and resolution of a face in a video keep changing, and this reliance on fixed-resolution stills becomes a limitation of deepfake GANs.
It is by exploiting this weakness that the University at Albany researchers found another way to detect deepfake videos. They trained their AI networks to detect the artifacts left behind in warped facial features as the face's size, and therefore its resolution, changes. With this algorithm, the researchers achieved above 90% accuracy in detecting deepfake videos.
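The intuition can be sketched in a toy form (our own illustration, not the published detector): a face generated at a low fixed resolution and then warped up to fit the frame contains little genuine high-frequency detail, so shrinking it and blowing it back up loses almost nothing, while a pristine, detail-rich face loses a lot.

```python
import numpy as np

def box_downsample(img):
    """Average each 2x2 block (height and width must be even)."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def nearest_upsample(img):
    """Repeat every pixel into a 2x2 block."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def warp_residual(patch):
    """Energy lost by a round trip through half resolution. A patch that
    was itself upsampled from a lower resolution loses almost nothing."""
    round_trip = nearest_upsample(box_downsample(patch))
    return float(np.mean((patch - round_trip) ** 2))

rng = np.random.default_rng(0)
genuine = rng.random((32, 32))                    # rich high-frequency detail
warped = nearest_upsample(rng.random((16, 16)))   # upsampled, detail-poor
print(warp_residual(genuine) > warp_residual(warped))  # → True
```

A classifier trained on such residuals, computed patch by patch across a face, is one plausible reading of how resolution-warping artifacts become detectable.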
Another team, from the University of California, Riverside (UCR), led by Electrical and Computer Engineering professor Amit Roy-Chowdhury, has developed a neural network architecture that detects manipulated images with high precision at the pixel level. Pitting good AI against bad AI, i.e. deepfakes, the team showed that when a video is tampered with, unnatural smoothing or feathering appears in the pixels along the boundaries of the objects in it.
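A toy way to picture that boundary cue (again our own illustration, not UCR's architecture): take a one-dimensional strip of pixel intensities crossing an object boundary and measure the steepest jump between neighbouring pixels. A genuine hard edge jumps abruptly, while a spliced region that was feathered to hide the seam ramps up gradually.

```python
def max_gradient(strip):
    """Steepest intensity jump between adjacent pixels along a strip."""
    return max(abs(b - a) for a, b in zip(strip, strip[1:]))

hard_edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # genuine boundary
feathered = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # blended splice seam
print(max_gradient(hard_edge) > max_gradient(feathered))  # → True
```

A real pixel-level detector works in two dimensions on learned features rather than raw intensities, but the tell-tale signal is the same: boundaries that are smoother than the surrounding image statistics predict.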
But what if we could analyze the complete context of a deepfake video instead of just scanning its individual frames for pixel artifacts?
Well, early in 2019, joint researchers from USC and UC Berkeley released a study in which AI was trained to identify each speaker's characteristic patterns of posture and gesture and how they relate to the information being conveyed.
Since GANs depend on still images to produce deepfake videos, learning and reproducing these behaviors (posture, gesture, and tone) is virtually out of the question for them.
As a result, this AI technique was reported to detect deepfake videos with a 95% accuracy rate, expected to go as high as 99% soon.
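A drastically simplified sketch of the behavioural idea (illustrative only; the USC/Berkeley work uses far richer motion features than these invented ones): build a statistical profile of a speaker's gesture features from genuine footage, then score a new clip by how many standard deviations its features sit from that profile.

```python
import numpy as np

def fit_profile(genuine_features):
    """Per-feature mean and std from rows of genuine-footage features."""
    return genuine_features.mean(axis=0), genuine_features.std(axis=0) + 1e-8

def mannerism_score(clip_features, mean, std):
    """Average absolute z-score of a clip's features against the profile."""
    return float(np.mean(np.abs((clip_features - mean) / std)))

rng = np.random.default_rng(1)
# Hypothetical per-clip gesture features (e.g. head tilt, hand height)
genuine = rng.normal(0.0, 1.0, size=(500, 5))
mean, std = fit_profile(genuine)

consistent_clip = rng.normal(0.0, 1.0, size=5)   # matches the speaker
impostor_clip = rng.normal(6.0, 1.0, size=5)     # wildly off-profile
print(mannerism_score(impostor_clip, mean, std) >
      mannerism_score(consistent_clip, mean, std))  # → True
```

The strength of this approach is that the cue lives across whole clips rather than in any single frame, which is exactly what frame-by-frame forgers find hardest to fake.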
Image Source: Analytics Insight
It can't be denied that these deepfake detection techniques are impressive: the same AI that helped develop deepfakes is being turned against them. But when it comes to using AI to disrupt and destroy the very creation of deepfake videos, the results are still unknown. Once disinformation is out in the public domain in the form of deepfakes, it is difficult to combat and erase.
One by one, the entire world is waking up to the threats of deepfakes. But are we ready to fight this AI-enabled technology? The answer is still unknown.
Though scientists and technology experts are tirelessly trying to put a stop to this 'tech weapon', and several ways to detect deepfakes using AI have already been discovered, a final solution is yet to come. While deepfake creators leverage AI capabilities to produce ever harder-to-detect fake videos, defenders are trying to use the same technology to detect and disrupt deepfake activity.
Whether combating deepfake AI with AI will be successful, only the future can tell. With more and more research being done on using AI against deepfakes, we can only hope for the best.
Klizo Solutions was founded by Joseph Ricard, an American who has spent the past 10 years working in India, developing good teams and good processes. We have a team of over 40 people, and we develop high-level technology in multiple frameworks.