Let’s talk about Deepfakes.

Did he really say that?

Abhi Avasthi
4 min read · May 23, 2022

If you’ve watched the above video, it wouldn’t surprise you that Kim Jong-Un doesn’t rate democracy at all; in fact, you would expect it. The only thing that might surprise you is that he speaks English, and you would be right to be suspicious: he does speak it, but not fluently, and certainly not with this much poise. The video is doctored, and it is called a deepfake. Wikipedia defines deepfakes as:

Deepfakes (a portmanteau of “deep learning” and “fake”) are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.

This definition is not exhaustive, but it captures the essence of the technology with the word ‘likeness’. Where it falls short is that deepfakes can also be audio files or doctored voices over phone calls, and those can have devastating consequences. In one case, a group of fraudsters combined forged emails with deepfake audio to convince an employee of a United Arab Emirates company that a director had requested $35 million as part of an acquisition of another organization.
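The "likeness" in that definition is usually produced by a well-known trick: a single shared encoder learns a common face representation, and a separate decoder per identity reconstructs faces from it; swapping a face means encoding person A and decoding with person B's decoder. Below is a toy sketch of that structure only, with random linear maps standing in for trained networks; all names and dimensions are illustrative, and real systems use deep convolutional networks trained on many frames.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8   # size of the shared latent representation
PIXELS = 64  # flattened "image" size for this toy example

# Randomly initialised linear maps stand in for trained networks.
encoder   = rng.standard_normal((LATENT, PIXELS)) * 0.1  # shared by both identities
decoder_a = rng.standard_normal((PIXELS, LATENT)) * 0.1  # reconstructs person A
decoder_b = rng.standard_normal((PIXELS, LATENT)) * 0.1  # reconstructs person B

def encode(face):
    # Map a flattened frame into the shared latent space.
    return encoder @ face

def swap_face(face_of_a):
    """Encode a frame of person A, then decode it with B's decoder."""
    latent = encode(face_of_a)
    return decoder_b @ latent

frame = rng.standard_normal(PIXELS)  # a stand-in video frame of person A
fake = swap_face(frame)
print(fake.shape)  # (64,) — the fake has the same shape as the input frame
```

Because both decoders are trained against the same latent space, person A's pose and expression survive the round trip while the rendered identity is person B's; that is what makes the mannerisms so convincing.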

Now that we’re aware of the power of this technology, we can delve into why it is so effective. Audio and visual communication are deeply personal; they have been our primary media of communication for a very long time, and until now it has been almost impossible to imitate a person so thoroughly that all of their mannerisms are captured. Unless you’re a keen observer of expressions, you can be fooled easily. A deepfake can almost be thought of as an evil twin, except the twin exists only virtually. The major problem it can cause is deep distrust in communication across all media: as deepfakes become harder to detect, it will be very difficult to tell propaganda from truth, as seen in the video above.

However, it’s not all doom and gloom; the technology has several helpful use cases as well.

Reviving past heroes : We could experience “conversations” with heroes of the past. A recreation of the great painter Salvador Dalí has been set up at a museum in Florida, aptly named Dalí Lives in reference to his quote about not believing in the “death of Dalí”. This could have many other applications: figures of history could be revived in classrooms to tell their own stories, and listening to a person describe their own life in their own words would make for a far richer learning experience than reading it from a textbook. It could also find great applications in audiobooks, with the authors reading them to you; it would be quite an experience to go through Mein Kampf, wouldn’t it?

Freedom of Expression : This could be a big one. It could be a major win for freedom of expression as well as freedom of the press: journalists could use synthetically generated faces (a visual pseudonym, if you will) to report information as it is without facing threats from dictatorial regimes, and it would give citizens the power to voice their opinions without fearing for their safety. However, this could easily spiral into a propaganda war, negating the credibility of such videos altogether.

Personalised Journalism : This could be another way to tailor news to the individual, reducing propaganda and keeping users engaged. But again, it could easily turn into a race for clicks and monetisation. Every advantage of this technology comes with drawbacks.

Let’s look at how this could go wrong:

Pornography : Fake celebrity clips have become a genre in themselves; according to one unconfirmed statistic, 96% of all deepfakes are pornographic in nature. It gets truly disturbing when deepfakes are used to create sexual content involving children. These videos raise serious questions of consent, and even if that debate is settled, there is no surefire way to prevent such videos from existing.

Political Propaganda : There was a video of Donald Trump that caused massive controversy, garnering widespread criticism and public outcry. It was later claimed to be a deepfake, but neither its authenticity nor its fakeness could be confirmed. This is precisely where the problem lies: in heavily politically charged environments, a deepfake does not have to be real to increase polarisation. People are already jumpy, and it gives them one more reason to think the way they already do. Even if the video turns out to be fake, that correction will never reach as many people as the original misreporting, and with social media algorithms amplifying fake news, transparent and unbiased reporting will take a nosedive. This has been summed up really well by Chesney and Citron:

“As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deepfakes,” they wrote. “Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence.”

As the public learns more about the threats posed by deepfakes, efforts to debunk lies can instead seem to legitimize disinformation, as a portion of the audience believes there must be some truth to the fraudulent claim. That is the so-called “liar’s dividend”: the payout the liar collects from a doubting public.

Technology is a great amplifier; it amplifies everything. It amplifies innovation, discussion, and transparency. On the other hand, it also amplifies hatred, divisiveness, propaganda, and deceit. The better a technology, the greater its amplifying power, and the greater the polarisation it can cause. The biggest advantage of technology is that it is democratic, and that is sometimes its biggest shortcoming.


I write about things that fascinate me, and make me think.