Burning Questions About Deepfakes Answered by Tech Law Expert, Akash Karmakar

In this interview with Akash Karmakar, we dig into how regulation and emerging technical solutions can curb deepfakes. We also discuss who should be held responsible for the spread of synthetic content and the challenges of source tracing. Karmakar is a partner with the Law Offices of Panag & Babu in Delhi, India. He leads the technology, media and telecommunications practice and specializes in technology transactions, data privacy, cybersecurity incident response, and digital economy regulation.


1. Ashima: Are there any rules or guidelines for using AI to create synthetic media—like deepfakes, or similar content?

Akash: Currently, there is no legislation that specifically prohibits the use of generative AI to create ‘deepfakes’, i.e., media, including photos, audio, or videos, that is digitally altered or even wholly generated, often with the use of artificial intelligence.

However, there are provisions under the Information Technology Act that thematically prohibit the application of AI for illicit end-uses. While the Information Technology Act 2000 and the Indian Penal Code 1860 did not contemplate every possible end use, they did set out overarching prohibitions on the indecent representation of women, identity theft and impersonation, and the publication of defamatory, misleading, or illicit content.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 also require platforms to take down content that is patently false or misleading or that impersonates another person. The challenge today is applying these overlapping regulations to increasingly unusual and unpredictable end-uses of technology.


2. Ashima: I think that raises a broader question—do we need laws to stop misuse of technology? What are the key challenges for regulators and legislators?

Akash: As with any technology, as it evolves, the avenues for creative misuse multiply. What we’ve learnt from precedent is that you cannot reasonably expect legislation to be passed reactively to specifically prohibit certain end uses of generative artificial intelligence, or of any technology for that matter.

Preempting and prescribing measures to prevent an ever-increasing list of avenues for the misuse of technology is the underlying challenge, and it cannot be resolved with legislation alone. In the absence of specific legislation, the challenge courts face today is adapting and stretching well-settled definitions and concepts from archaic laws to fit the contours of the many avenues of misuse of evolving technology.

There is a delicate balance between guiding enforcement with overarching guideline-based legislation and being overly prescriptive.


3. Ashima: While laws alone may not suffice, to what extent can specific legislation aid in combating deepfakes?

Akash: Addressing deepfakes with legislation necessitates a two-pronged approach. First, legislation should aim to tackle the spread of deepfakes, which multiplies misinformation or deceives recipients. Second, it should be effective in tracing and punishing the source of the deepfake. 

Source tracing is already used to trace fake news, and it helps identify the perpetrator rather than the people who spread misinformation believing the content to be true. Proving the mala fide intent of an individual disseminating deepfakes is a far higher standard of proof from an evidentiary standpoint.

Just last December, the Ministry of Electronics and Information Technology issued an advisory to all intermediaries to adopt due diligence obligations to ensure compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

However, the emerging trend in enforcement has been to shoot the messenger rather than punish the perpetrator of the crime. Legislative intervention here may help redirect the focus of prosecution to the perpetrator by mandating traceability and eliminating anonymity for content creators, since misinformation campaigns are typically run through strawmen.


4. Ashima: Why have deepfakes become so ubiquitous now, and isn’t video, image or audio manipulation an issue that’s been simmering for a while? 

Akash: Till a couple of years ago, deepfakes were limited to a photo or video being morphed frame by frame. Being able to do this required a certain degree of technical expertise and meticulousness, and even so, images or videos morphed frame by frame were fairly crude and easy to detect. This meant an adjudicating authority could review the original video or photo against the morphed one and from an evidentiary standpoint, the unaltered video or photo established a baseline to compare morphed derivatives. 


5. Ashima: How have developments in AI changed how deepfake content is admitted as evidence?

Akash: AI has advanced to the extent that synthetic media can be fully generated using artificial intelligence. This poses a range of entirely new challenges, since the earlier yardstick of proving media to be fake by comparing it against the original no longer applies. Ironically, to detect deepfakes we have turned to other AI-based solutions, which are able to detect content manipulated by AI.


6. Ashima: AI detection by AI! Could you elaborate on the concept of AI-powered detection of deepfakes?

Akash: Developments in technology have increased the capability of generative AI to create entirely synthetic media in ways that could not have been envisaged a decade ago. This presents an emerging and evolving threat, not just in the form of misinformation, but also as a destabilizing influence in democracies, used to mislead public opinion and meddle with the outcome of elections.
 
As a consequence, the sheer volume of deepfake content available online today necessitates automated screening and detection methods, as manual detection would be overwhelmed by the quantum of such content.
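
As an illustration only, the screening loop such platforms would need might look like the minimal sketch below; the detector itself (`score_media`) and the review threshold are hypothetical placeholders, not a reference to any particular tool.

```python
# Illustrative sketch of automated deepfake screening: media items are scored
# by a detection model and anything above a threshold is routed to human review.
# `score_media` is a hypothetical placeholder for a real AI-based detector.
from dataclasses import dataclass
from pathlib import Path

REVIEW_THRESHOLD = 0.8  # assumed cut-off; a real platform would tune this


@dataclass
class ScreeningResult:
    path: Path
    score: float        # estimated probability that the media is synthetic
    needs_review: bool  # True if the item should be escalated to a human


def score_media(path: Path) -> float:
    """Placeholder for an actual deepfake-detection model."""
    raise NotImplementedError("plug in a trained detector here")


def screen_queue(paths: list[Path]) -> list[ScreeningResult]:
    results = []
    for p in paths:
        score = score_media(p)
        results.append(ScreeningResult(p, score, score >= REVIEW_THRESHOLD))
    return results
```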


7. Ashima: Who is held liable in cases of deepfakes used for defamation, harassment, or other harmful purposes: the creator, the distributor, or the platform hosting the content?

Akash: Quite simply, the liability for misuse of generative AI should first lie with the creator. In instances where there is a concerted effort to distribute and amplify deepfake content, these ‘persons acting in concert’, to borrow a definition from our insider trading laws, should be held liable, as there is a common intention to spread misinformation.

As for platforms hosting deepfakes, their non-cooperation, negligence, or refusal to take down fabricated content should certainly result in liability being imposed on the platform. Unfortunately, platforms that merely host the impugned content are often held liable as if they were actively distributing fabricated content.

We’ve seen an unfortunate trend whereby platforms face amorphously defined compliance thresholds coupled with vague threats of losing immunity from liability for hosting third-party content. A platform, unlike a publisher of deepfake content, should become complicit in a crime through its passiveness only where it has crossed a bright-line threshold of complicity or negligence. This bright-line threshold needs to be defined by the MEITY to carve out where mere passiveness ends and where active collusion to spread content begins. It is incumbent upon the MEITY to define this clear distinction if it intends to make intermediary safe harbour conditional. I don’t think it would be absurd to expect the MEITY to define service levels, mandated via the App Store or Play Store, to be met by intermediaries after being notified by government authorities or affected private parties of fabricated content.

As for crimes committed using AI, the many possible end-uses of technology cannot be pre-empted in a way that allows laws to be prescriptive about restrictions on them, so it would be unfair to expect the law to proverbially play catch-up with technology in this particular instance. Indian criminal laws already have well-defined offences, and the commission of these offences using AI as the means would not necessitate redefining them, be it the commission of fraud, impersonation, or defamation.


8. Ashima: How can we effectively clamp down on the rapid spread of misinformation?

Akash: Eliminating safe harbour protections (i.e. the exemption of a platform from liability for hosting third-party content) for social media and messaging apps that are slow to remove fake videos could push them to act faster, helping to stop false information from spreading. However, this often diverts regulatory attention and police machinery from pursuing the perpetrator to suppressing the dissemination of fakes. There is also a question about whether enforcement efforts in India are aimed mainly at stopping the spread of fakes online because it is hard to trace where they come from.

Trust and safety on online platforms have been a growing concern owing to the need to balance keeping users anonymous against being able to trace where content comes from. Perhaps prescriptive statutory obligations mandating user verification would help the traceability of content. But it is important to remember that deepfake content shared offline also poses a serious risk, even though it is easier to control and does not spread as quickly.


9. Ashima: How do intellectual property laws apply to deepfakes, especially when they involve celebrities or copyrighted characters? Can only those who have protected their personality rights like Anil Kapoor have a right to sue for misuse?

Akash: The use of generative AI to create content that incorporates licensed characters from Disney, SEGA, or Nintendo, for example, is very similar to an illustrator creating such content. The use of a copyrighted character, or of a person’s ‘personality rights’, in a commercial context would need consent or licensing. The use of such characters, or of likenesses of them, is bound by the same principles of intellectual property law as content created in traditional media formats. The challenge with generative AI is that it could generate a voice, character, or image that is similar enough to a well-known celebrity or character to bear a resemblance, but not so close that it would allow for an action for misappropriation of intellectual property, moral or personal rights, trademarks, etc.

It's not just celebrities who can protect their personality rights or sue for misuse. The use of any person’s likeness, i.e. their face, voice, and personal characteristics, to create content would need their consent, unless it is intended as parody. Any unauthorised use of a person’s personality rights would be reasonable grounds for a claim of misappropriation, defamation, and/or passing off. The distinction here is that generative AI is able to recreate and mimic the voice of a famous personality merely from the data available on the internet, a feat impossible a couple of years ago. There are a lot of grey areas in the realm of intellectual property rights and personal privacy laws, which need to be filled in to prevent generative AI from appropriating a personality’s likeness, voice, etc.

This risk of appropriation is exacerbated for celebrities and public personalities, since they typically have more photos, videos, and clips of their voice in the public domain.

The ready availability of their videos, photos, and voice clips in the public domain allows AI models to learn intonation, facial expressions, and the like from such 'training data’, with the output being refined as the quantum of data the model is fed grows. This allows the creation of voice clips, and even entire videos, generated synthetically, without any similar baseline video.


10. Ashima: What legal recourse can an individual seek from the court in case of a violation of personality rights?

Akash: Unless a person consents to their face and voice being used in a specific context, every individual has the ability to approach a court to obtain an injunction to stop the dissemination of content in which they, i.e. their likeness, feature without their consent. Enforcement patterns have focussed on preventing the dissemination of such content via social media or messaging platforms to limit the harm caused by the video; this highlights why messaging platforms need to be approachable by their users and cooperative in taking down deepfake videos. However, punishment of the offender, as I mentioned earlier, has seen scarce enforcement.

For public personalities, any content published other than from their official or personal verified accounts, reputable third-party news outlets, and government sources should be treated with professional scepticism. It is likely that, from a trust and safety standpoint, all user accounts would need to be verified. Paradoxically, generative AI is threatening the veracity of remote user identity-verification by being able to defeat traditional video and voice verification techniques.


11. Ashima: From an evidentiary standpoint, how difficult is it to differentiate between deepfakes and the original content? Will deepfakes pose issues with the presentation of evidence by creating challenges that are not solvable by traditional courts? 

Akash: Yes. Initially, deepfake images were discernible with the human eye. Today, detecting content entirely generated by AI (without reliance on any other video, photo, or audio) poses new challenges, since the pixels in an image or video are generated fresh every time and cannot be mapped against other content to identify a baseline video that was manipulated. Any approach relying on human intervention for each analysis is likely to be overwhelmed by the volume of deepfakes.

Earlier, the existence of an unaltered baseline video was proof that all other variants were altered. Courts may need to turn to trusted deepfake detection tools or use external forensic experts, at least during the nascency of detection tools, as an amicus to assess whether a video is synthetic and generated by AI. 

Deepfakes will certainly pose challenges through the generation and manipulation of images, videos, or audio that could be unwittingly admitted as evidence owing to the difficulty of distinguishing between altered and original media. A protocol will likely need to be devised for admitting digital evidence: either using a third-party validation tool, or devising an indigenous deepfake detection tool for adoption by the judiciary.
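
One piece of such a protocol could be the baseline comparison described above, expressed here as a minimal sketch: if a trusted registry (an assumption for illustration) holds a cryptographic hash of the original file, a submitted copy can be checked for bit-level alteration. A matching digest shows the copy is identical to the registered baseline; a mismatch only shows the file differs, not how, and fully synthetic media with no baseline would still need detection tools.

```python
# Minimal sketch: compare a submitted media file against a registered baseline
# digest. This only proves whether the file is bit-identical to the baseline;
# it says nothing about fully synthetic media for which no baseline exists.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()


def matches_baseline(submitted: Path, registered_digest: str) -> bool:
    return sha256_of(submitted) == registered_digest
```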


12. Ashima: Given that deepfakes represent just the tip of the iceberg, how can the ever-evolving technology sector potentially be regulated?

Akash: I believe legislative efforts should not be expended on regulating technology at all. All technology has the capability for misuse, so expecting the Indian legislature to pre-empt each end-use would be absurd. However, the criminal law enforcement machinery in India would welcome procedural guidance, whether by way of rules or legislation, that lays down protocols for tracing the source, intercepting distribution, and prosecuting individuals responsible for generating illicit deepfake content. One could argue that this would still rely on existing principles of law to decide what is illicit, but I believe the black letter of the law would help bridge interpretative ambiguity and provide valuable procedural guidance on how to tackle malicious end uses of generative AI.

Trying to regulate technology rather than a specific end-use, as with blockchain technology and cryptocurrencies, can often only be achieved by blockading access to certain technology or limiting the ability of a few to access it. Access to the internet makes the implementation of any access controls on technology virtually impossible. Trying to limit or regulate technology also stifles innovation and acts as a general prohibition on creative end-uses that may not be malignant. I think a reasonable balance may be achieved by implementing guideline-based regulation: a good way of setting out broad principles that would be used by the prosecution in determining which end uses were intended to be permitted or prohibited.

The nimbleness required to issue responsive regulations is unfortunately something that cannot be saddled upon Parliament, so it would probably need to be delegated to a non-partisan regulatory authority for emerging technologies. Unlike the Indian Computer Emergency Response Team, which has been overwhelmed, any authority regulating emerging technologies would need to be both technically equipped to address emerging technology and legislatively nimble enough to issue responsive regulations.


13. Ashima: Globally, government agencies appear to be taking stringent measures to respond to deepfakes. With rising geopolitical tensions in many parts of the world and deepfakes getting more advanced by the day, what is a long-term solution to preventing deepfakes from disrupting law and order?

Akash: The long-term solution is to hold all publishers of news liable for the content they broadcast and to mandate that such content be verified and validated as true. The biggest threat to democracy and public order emanates from reliance on manipulated content relating to key decision makers, world leaders, and politicians.

Inevitably, the traceability of the origin of manipulated content is obfuscated by anonymity. It appears that, both from an age-gating standpoint, to protect children from evolving threats in the online domain, and for the identification of proxy users, end-user verification may need to be mandated for any platform where a user is allowed to upload content. This could be coupled with video, voice, and image fingerprinting methods that allow the credentials of the person uploading the media to be validated and the media to be verified as unaltered.
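
As a rough sketch of what such a fingerprint could look like, assuming a platform-held secret key and a verified user ID (both assumptions for illustration, not a described standard): the platform binds the uploader's identity to the exact bytes of the file, so any later copy can be checked against the recorded fingerprint.

```python
# Illustrative upload fingerprint: an HMAC over the verified uploader's ID and
# the file's bytes. The secret key, user IDs, and flow are assumptions made for
# this sketch; real deployments might instead use signatures or provenance
# standards, and a simple HMAC will not survive re-encoding of the media.
import hashlib
import hmac
from pathlib import Path

PLATFORM_SECRET = b"replace-with-a-securely-stored-key"  # assumed platform secret


def fingerprint_upload(user_id: str, media: Path) -> str:
    payload = user_id.encode("utf-8") + b"\x00" + media.read_bytes()
    return hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).hexdigest()


def verify_copy(user_id: str, media: Path, recorded: str) -> bool:
    return hmac.compare_digest(fingerprint_upload(user_id, media), recorded)
```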

The last solution, and this is an imperfect one, lies in using deepfake-image detectors to identify false or misleading AI-generated content. The vast volumes of data uploaded to social media platforms every day necessitate automated content moderation mechanisms.

A challenge with this is that algorithmic bias that trickles into the training models for such tools could result in discriminatory or arbitrary takedowns. This model is predicated on a belief in the self-regulating platform’s independence, presuming the neutrality of content review and moderation mechanisms, which could in fact be compromised and used discriminatorily.

Perhaps, until deepfake-image detectors are perfected, the balance between protecting free speech and preserving public order lies in grading deepfake content by its risk of harm. Content with the potential to cause irreversible damage to the outcome of an election, the decision of a court, or an individual’s reputation, or with the ability to instigate violence, could be taken down until it is verified, in the interest of limiting the risk to democratic elections. Perhaps, in some limited instances, we must not let perfection be the enemy of the good.
