Law on the heels of illegal deepfakes

There are laws to manage deepfakes, but one that augments an individual’s right to compensation is also needed

Akshaya Suresh

Aravindini Uma Magesh

Wondering why you received a call from a deceased politician endorsing a political candidate, or why Virat Kohli was marketing a betting app? Welcome to the world of deepfakes, where Generative AI can produce voice calls, videos and images duplicating the features of dead or living individuals. The proliferation of deepfakes has jolted governments worldwide into addressing the threat, either by introducing new legislation or by amending existing laws. Given their ability to generate realistic synthetic media, deepfakes harm individuals through breach of privacy, identity theft, fraud, defamation, misinformation and more. A report notes that 99 per cent of deepfakes target women and more than 98 per cent are pornographic.

Current Indian laws

India’s existing legal regime provides various laws to manage deepfakes. The Indian Penal Code (IPC) has provisions such as Sections 509 (acts intended to insult the modesty of a woman), 499 and 500 (criminal defamation) and 153A (spreading hatred on communal lines) to prosecute deepfake crimes.

Similarly, the Information Technology Act punishes publishing or transmitting sexually explicit or obscene material involving adults or children vide Sections 66E, 67, 67A and 67B. Sections 66C and 66D penalise identity theft and cheating by personation using a computer resource, respectively. These provisions attract imprisonment and/or a fine.

To address deepfakes, MeitY issued an advisory asking intermediaries to remove deepfake content within 36 hours of reporting. Specifically, Rule 3(2)(b) of the IT Rules mandates intermediaries to take down any graphic, profane or impersonating material (including morphed images) within 24 hours of reporting. Another recent MeitY advisory mandates intermediaries and platforms to label under-trial/unreliable AI models, identify AI-generated or modified content and inform users of the AI model’s unreliability before subjecting them to the AI’s output.

The upcoming Digital Personal Data Protection Act (DPDPA) aims to protect personal data stored in digital form. Where an individual’s facial images, voice or other personal data are used without consent, the person can approach the Data Protection Board of India, to be established under the DPDPA. Unfortunately, the DPDPA excludes publicly available data from its ambit and contains no provision for compensating individuals for harm caused by the misuse of their data.

Therefore, while an individual can lodge a complaint resulting in civil/criminal penalties against perpetrators, provisions for compensation are limited under Indian law.

Celebrities and deepfakes

The judiciary is proactively addressing the unauthorised commercialisation of a celebrity’s persona (name, image and likeness) via deepfakes. In Anil Kapoor’s case, the Delhi High Court recognised the growing concern around deepfakes and their interplay with the right of publicity. In May 2024, the Delhi High Court passed an interim order restraining a chatbot that responded like Jackie Shroff, to protect the economic value of his persona.

Positions in other countries

The US Congress recently introduced two Bills (the NO FAKES Act and the NO AI FRAUD Act) that recognise every individual’s right to protect their persona. The US is currently facing an influx of deepfake-related suits; a recent plaint alleged misappropriation of a deceased stand-up comedian’s voice. The EU has imposed transparency obligations on AI providers through the AI Act, and the GDPR regulates unauthorised usage of personal data.

Deepfakes are not problematic in their entirety, since the technology has varied commercial applications, for example in training, marketing, sales and support services. Start-ups offering deepfake-creation applications attracted investments of $187.7 billion in 2022, a figure expected to grow tremendously.

Where deepfakes are employed illegally, the individual must be empowered to remedy the harm. Besides providing civil and criminal penalties for deepfakes, the law should unambiguously augment the individual’s right to publicity and to compensation. Until then, if you are signing an agreement to “lend” your image or voice to generate synthetic media, ensure you spell out the dos and don’ts of how your persona may be portrayed.

Suresh is Partner, and Magesh is Associate, JSA Advocates & Solicitors