IAS Gyan

Daily News Analysis

Countering deepfakes, the most serious AI threat

30th October, 2020 Editorial

Context: Disinformation and hoaxes have evolved from mere annoyance into high-stakes information warfare that creates social discord, increases polarisation, and, in some cases, influences election outcomes.


  • Deepfakes are digital media (video, audio, and images) manipulated using Artificial Intelligence; such synthetic media content is referred to as deepfakes.
  • They are a new tool to spread computational propaganda and disinformation at scale and with speed.
  • Deepfakes, hyper-realistic digital falsification, can inflict damage to individuals, institutions, businesses and democracy.
  • They make it possible to fabricate media (swapping faces, syncing lips, and puppeteering bodies), mostly without consent, posing threats to psychological well-being, security, political stability, and business continuity.
  • Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes, with easy and unprecedented reach and scale.

A cyber Frankenstein

  • Synthetic media can create possibilities and opportunities for all people, regardless of who they are, where they are, and how they listen, speak, or communicate.
  • It can give people a voice, purpose, and ability to make an impact at scale and with speed.
  • But as with any new innovative technology, it can be weaponised to inflict harm.

Targeting women

  • The first malicious use of deepfakes was seen in pornography, inflicting emotional and reputational harm and, in some cases, violence on the individual.
  • Pornographic deepfakes can threaten, intimidate, and inflict psychological harm and reduce women to sexual objects.
  • Deepfake pornography almost exclusively targets women.
  • Deepfakes can depict a person indulging in antisocial behaviours and saying vile things.
  • These can have severe implications for their reputation, sabotaging their professional and personal lives.
  • Even if the victim can debunk the fake through an alibi or otherwise, the correction may come too late to remedy the initial harm.
  • Malicious actors can take advantage of unwitting individuals to defraud them for financial gains using audio and video deepfakes.
  • Deepfakes can be deployed to extract money, confidential information, or exact favours from individuals.
  • Deepfakes can cause short- and long-term social harm and accelerate the already declining trust in news media.
  • Such an erosion can contribute to a culture of factual relativism, fraying the increasingly strained civil society fabric.
  • The distrust in social institutions is perpetuated by the democratising nature of information dissemination and social media platforms’ financial incentives.
  • Falsehood is profitable and spreads faster than truth on social platforms.
  • Combined with distrust, the existing biases and political disagreement can help create echo chambers and filter bubbles, creating discord in society.
  • Deepfakes can be used to incite riots and, along with property damage, may also cause loss of life and livelihood.
  • A deepfake could act as a powerful tool for a nation-state to undermine public safety and create uncertainty and chaos in a target country.
  • It can be used by insurgent groups and terrorist organisations to portray their adversaries as making inflammatory speeches or engaging in provocative actions, stirring up anti-state sentiment among people.

Undermining democracy

  • A deepfake can also alter democratic discourse, undermine trust in institutions, and impair diplomacy.
  • False information about institutions, public policy, and politicians, powered by deepfakes, can be exploited to spin narratives and manipulate beliefs.
  • A deepfake of a political candidate can sabotage their image and reputation.
  • A well-executed deepfake, released a few days before polling, showing a candidate spewing racial epithets or indulging in an unethical act, can damage their campaign.
  • There may not be enough time to recover even after effective debunking. Voters can be confused and elections can be disrupted.
  • A high-quality deepfake can inject compelling false information that can cast a shadow of illegitimacy over the voting process and election results.
  • Deepfakes contribute to factual relativism and enable authoritarian leaders to thrive.
  • For authoritarian regimes, it is a tool that can be used to justify oppression and disenfranchise citizens.
  • Leaders can also use them to increase populism and consolidate power.
  • Deepfakes can become a very effective tool to sow the seeds of polarisation, amplifying division in society, and suppressing dissent.
  • Another concern is the liar’s dividend: an undesirable truth can be dismissed as a deepfake or fake news.
  • Leaders may weaponise deepfakes and use fake news and an alternative-facts narrative to replace an actual piece of media and truth.

Major solutions

  • To defend the truth and secure freedom of expression, we need a multi-stakeholder and multi-modal approach.
  • Collaborative actions and collective techniques across legislative regulations, platform policies, technology intervention, and media literacy can provide effective and ethical countermeasures to mitigate the threat of malicious deepfakes.
  • Media literacy for consumers and journalists is the most effective tool to combat disinformation and deepfakes.
  • Media literacy efforts must be enhanced to cultivate a discerning public, as improving media literacy is a precursor to addressing the challenges presented by deepfakes.
  • Meaningful regulations, developed through collaborative discussion among the technology industry, civil society, and policymakers, can help disincentivise the creation and distribution of malicious deepfakes.
  • There is also a need for easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources.
  • Deepfakes can create possibilities for all people irrespective of their limitations by augmenting their agency.
  • However, as access to synthetic media technology increases, so does the risk of exploitation.
  • Deepfakes can be used to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions.

Counter the menace

  • To counter the menace of deepfakes, everyone must take responsibility: be a critical consumer of media on the Internet, pause and think before sharing on social media, and be part of the solution to this infodemic.
  • It is crucial to enhance media literacy, enact meaningful regulations and platform policies, and amplify authoritative sources.
  • Access to commodity cloud computing, algorithms, and abundant data has created a perfect storm for the democratisation of media creation and manipulation.