As deepfakes of Indian actors such as Alia Bhatt, Kajol, Rashmika Mandanna, and Katrina Kaif spread on social media platforms, Prime Minister Narendra Modi recently highlighted the challenges posed by the technology at a public rally, calling it a “new crisis.”
Deepfakes, as the name implies, use a type of artificial intelligence (AI) known as deep learning to generate images of fictitious events or to convincingly replace one person’s likeness with that of another. They are now being used for pornographic content, election propaganda, and spreading false information about current events such as the Israel-Hamas conflict. Their increasing sophistication makes detection difficult, allowing for the rapid spread of disinformation.
A recent surge of deepfakes in India has sparked public debate. Minister of Electronics and Information Technology Ashwini Vaishnaw recently emphasized the need for legislation to combat deepfakes, while Minister of State Rajeev Chandrasekhar stated that existing laws are adequate.
“All platforms and intermediaries have agreed that the current laws and rules, even as we discuss new laws and regulations, enable them to deal with deepfakes conclusively (sic),” Chandrasekhar said at the Digital India Dialogue session on November 24, 2023.
Logically Facts contacted experts to investigate the significance of the threat posed by deepfakes and the effectiveness of India’s current legislation in dealing with the issue.
‘Anticipatory not reactive approach’
In November of last year, Chandrasekhar had said, “For those who find themselves impacted by deepfakes, I strongly encourage you to file First Information Reports (FIRs) at your nearest police station and avail the remedies provided under the Information Technology (IT) rules, 2021.” He added that platforms have a legal obligation to remove such content within 36 hours of receiving a report from a user or the government.
However, experts contend that the current approach is reactive. They propose a cohesive anticipatory measure to protect against the misuse of AI.
Apar Gupta, Lawyer and Executive Director of the Internet Freedom Foundation, told Logically Facts that filing complaints at a police station or through the online cybercrime coordination portal is a reactive application of the law for a specific type of deepfake that is obscene. “What if it isn’t obscene? What if it’s just a deepfake doing a minor thing? It is also shifting the entire burden of enforcement onto an individual victim.”
Gupta emphasizes the risks associated with deepfakes and suggests looking at what other countries are considering.
For example, the European Union’s code of practice on disinformation, backed by the Digital Services Act, has been signed by dozens of big tech companies, including Google, Meta, and TikTok, with the aim of improving transparency. Under the Act, social media companies that fail to comply face fines of up to 6% of their global turnover.
Raman Chima, Asia Policy Director at Access Now, suggested a similar measure for India, saying, “India’s not engaging in conversations, which means that we’re actually starting from a very problematic approach where we’re trying to create new legislation of our own without understanding the problem or even trying to make sure our laws are consistent with other democracies whom we depend on to have network effect on these platforms.”
He claims that the basic criminal tools for prosecuting complaints exist, but what is needed is something that “matches with the legal frameworks being adopted in other countries.”
According to Chima, the platforms should not bear the sole responsibility for addressing the issue. “It’s extremely problematic because why should Google and other platforms, which aren’t necessarily creating this content, automatically remove it without a legal framework? For example, I’d be concerned if someone made a parody video of a public figure and then platforms automatically removed it without explanation before the general election,” he says.
On October 30, the White House issued an executive order establishing rules to reduce the risks associated with AI in the United States, including creating guidelines for content authentication and clearly watermarking AI-generated content. However, experts believe that bad-faith actors can get around this.
“Things like image markers are a soft commitment from companies, but we must also recognize that only good users will use that signal. Bad actors would always say, what’s the point… they don’t want it to be attributed to them,” says Ashish Jaiman, director of product management at Microsoft.
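To make Jaiman’s point concrete, here is a minimal sketch (assuming Python with the Pillow imaging library and a hypothetical “ai_generated” metadata key, not any particular company’s scheme) of how a metadata-based provenance marker can work and why it is only a soft commitment: a cooperating generator can embed the label, but simply re-encoding the file discards it.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A cooperating generator embeds a provenance tag in the PNG metadata.
synthetic = Image.new("RGB", (256, 256), "gray")  # stand-in for AI-generated output
marker = PngInfo()
marker.add_text("ai_generated", "true")           # hypothetical marker key
synthetic.save("marked.png", pnginfo=marker)

# A good-faith reader that checks for the tag will find it.
print(Image.open("marked.png").text.get("ai_generated"))    # prints: true

# A bad actor only has to re-save the image to drop the marker entirely.
Image.open("marked.png").save("stripped.png")
print(Image.open("stripped.png").text.get("ai_generated"))  # prints: None

More robust watermarking schemes embed the signal in the pixels themselves rather than in metadata, but as Jaiman notes, they still rely on generators choosing to apply the signal in the first place.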
Emerging Threats
Sensity AI, a company that tracks deepfake videos online, has found that since December 2018, approximately 90% of deepfake videos have been non-consensual pornography, primarily targeting women.
In fact, a quick Google search using relevant keywords reveals that the top results include pornographic websites with the word ‘deepfake’ in their URLs, where such content is prominently displayed.
This is despite the Indian government’s advisory to all social media and internet intermediaries to take strict action against deepfake images and videos, including those featuring Indian female celebrities.
“When the Mandanna incident happened, I looked online, and the top three hits on Google showed porn websites dedicated to deepfake technology,” Gupta said in a statement.
A report by the United States Department of Homeland Security reiterates the prevalence of deepfakes in non-consensual pornography and includes examples from Russia and China to demonstrate how synthetic personas are used to gain credibility and promote regional issues.
“It’s going to get more tricky to identify them as the technology gets more sophisticated,” Gupta said. “And, when you think about it more broadly, such technologies can be used for sectarian purposes to portray minorities in a negative light. It will be used for community activities. It will be used in politics to depict politicians performing or saying things they never did. It’s already a problem and will have a significant impact on human trust,” he said.
The U.S. Department of Homeland Security’s September 2023 report highlights how the scope of media manipulation has grown dramatically over time. And, as the distinction between what is real and what is synthetic blurs, Jaiman warns that good actors should be wary of the liar’s dividend, which occurs when an actual bad act or event is discounted as a deepfake.
Jaiman cited the example of former U.S. President Donald Trump calling a tape in which he denigrated women “fake,” saying that “bad actors can use this moniker to hide behind their bad acts.”
Even if tech platforms implement stricter measures to combat the spread of synthetic content, closed platforms such as Telegram host multiple channels and bots dedicated to paid-for deepfake services, he noted, adding that “technology enables you to not only create something harmful, but you can actually make it very personalized.”
The Way Forward
Chima believes that instead of outsourcing the problem to the private sector and holding platforms accountable, the Indian government should first engage with developers of such tools to understand how they can affect everyday people.
“You need to find out what ordinary people or people in vulnerable communities are experiencing, which is not what is happening. Instead, we’re seeing a rush to pass legislation because well-known actors or politicians are being targeted,” he said.
Gupta proposes that the government collaborate with domain experts from academia, civil society, and human rights organizations to assess the social impact of artificial intelligence.
“I believe such a body is necessary because the impact of AI-based technologies on various sectors in India will be massive. What we need is heuristic policy-making, not the blunt force of censorship that results in content takedowns,” he says.
However, not all deepfakes serve malicious purposes. They are also widely used in satire, art, and branding, making it difficult to determine intent.
According to Gupta, judging intent is a secondary consideration, but the consumption of increasingly convincing synthetic content that depicts individuals in a certain light or shows them saying things they never said can cause significant harm. “We are all taught in media literacy to look behind the curtain and question our own biases, but this is often a heavy cognitive burden to impose on people. It affects social trust, as people are naturally inclined to believe what they see. A very clear, crisp 4K video of me saying something I never did will essentially harm a person’s ability to trust what they see,” he said.