Deepfakes: The Artificial Intelligence Attack

Sahil Rikhy
3 min read · Aug 12, 2020

Remember when Mark Zuckerberg boasted about having total control of all his users’ stolen data, when Barack Obama cursed Donald Trump publicly, or when Elon Musk appeared in a recent video singing a Soviet space song? If so, you have definitely been targeted by a Deepfake.

These are simply AI-generated fake videos, photos, or audio clips that look and sound just like the actual person. This sophisticated AI technology produces fabricated images and sound to convince people that something is real when it is not. It can flawlessly stitch a person’s face and voice into an event they never even participated in. And because the results are so realistic, they can scramble the truth in many ways.

One striking example comes from researchers at the University of Washington, who showed how Deepfakes really are becoming a threat to society. They used artificial intelligence to precisely model how Obama moves his mouth when he speaks, a technique that lets them put any words into their synthetic Obama’s mouth. Even though this project was only a demonstration, the same technology is mostly used to manipulate and scam us into believing scenarios that never took place, and it plays a huge role in eroding trust in all news media.

How are Deepfakes created?

Machine learning produces “persuasive counterfeit media by studying photographs and videos of a target person from multiple angles and then mimicking their behavior and speech patterns,” CNBC reported. The artificial intelligence is given many hours of real video footage of the person so that it gains a realistic “understanding” of what they look like from various angles and under different lighting.

The process then exploits GANs (generative adversarial networks), in which two machine learning models compete against each other to make the Deepfake as flawless as possible. One model, the generator, trains on the data to create the lookalike, while the other, the discriminator, tries to detect any mistakes and faults. The generator keeps producing fakes until the discriminator can no longer find flaws in them, and this back-and-forth repeats until the Deepfake is polished enough to fool the discriminator. A minimal sketch of this loop is shown below.
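
To make the generator-versus-discriminator idea concrete, here is a minimal sketch of that training loop in PyTorch, using toy one-dimensional data rather than real video. The network sizes, learning rates, and step counts are illustrative assumptions, not the settings used by actual Deepfake tools.

```python
# Minimal GAN training loop sketch (assumed toy setup, not a real deepfake model).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce fakes the discriminator labels "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

As the two losses push against each other, the generator’s output distribution drifts toward the real data, which is exactly the arms race described above, just in miniature.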

The Biggest Threat

Right now, Deepfakes pose the biggest threat to the 2020 US elections. They can easily be used to tarnish the reputation of a political candidate by making him or her appear to do or say things that never actually occurred. A fake audio or video recording of a candidate could be dropped days before voting begins, manipulating the media and shifting votes.

New laws are trying to stop people from making and distributing Deepfakes, and Facebook and Twitter have banned such videos from their networks in an attempt to remove misleading manipulated media. Detection, however, is central to that effort, and many AI researchers have stated that it is not yet technically possible to catch Deepfakes before they spread virally on social media.

How to stop them?

Only a few countries and companies are leading the efforts to detect Deepfakes and punish their creators, with the United States and Facebook at the front of the pack. In 2019, President Trump signed the nation’s first federal law related to Deepfakes as part of the National Defense Authorization Act. Facebook, for its part, has recruited researchers from Oxford, Berkeley, and other top institutions in an attempt to build a Deepfake detector using artificial intelligence itself.

Some research labs have used watermarks and even blockchain technology to identify manipulated videos; the fingerprinting idea behind these approaches is sketched below. However, Deepfake detection techniques will never be perfect. As a result, in the Deepfakes arms race, even the best detection methods will often lag behind the most advanced creation methods, says Villasenor. With Deepfake technology getting more advanced and the so-called “tells” becoming ever harder to spot, identifying them is a growing challenge.
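
Here is a minimal sketch of that fingerprinting idea: hash a video when it is published, then re-hash any copy encountered later to see whether it has been altered. The file names are hypothetical, and this illustrates the general integrity check rather than any specific lab’s watermarking or blockchain system.

```python
# Sketch of content fingerprinting for tamper detection (hypothetical file names).
import hashlib

def file_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At publication time, the original fingerprint would be stored somewhere
# tamper-evident (for example, an append-only ledger); here it is just a variable.
original = file_fingerprint("speech_original.mp4")    # hypothetical file
later = file_fingerprint("speech_downloaded.mp4")     # hypothetical file

print("unmodified" if original == later else "file has been altered")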

To make things worse, not many countries or organizations are taking this issue seriously; the European Union, for instance, does not see it as important compared with other kinds of misinformation happening online. Even where some action is being taken, there is little evidence that the laws and legislation being put forward are enforceable or even have the correct emphasis.

Originally published at https://www.linkedin.com.

