The emergence of Generative Artificial Intelligence has made it easier to alter or re-create an individual’s persona in the digital space with remarkable accuracy, raising questions about the ethical implications of the evolving technology. Indeed, AI assists in the creation of deepfake images and videos, which in turn affects the intellectual property rights that individuals may have over their persona in the digital space. In this article, we discuss the intersection of AI, deepfakes, and intellectual property rights.


Deepfakes are a type of synthetic media created using AI technologies, particularly generative adversarial networks (GANs). GANs involve two neural networks: a generator that creates synthetic content like images, videos, or audio, and a discriminator that checks these outputs against a dataset of real data to determine their authenticity. Deepfakes can take various forms, such as face swaps, attribute edits, or face re-enactments, allowing for significant manipulation in digital media. Although the term "deepfake" commonly refers to videos, it can also include altered images and audio. This technology's unique feature is its high degree of realism, often making it difficult to distinguish deepfakes from genuine content. The potential consequences of deepfakes are serious, ranging from privacy violations and reputational harm to political misinformation and other forms of deception.
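The generator–discriminator interplay described above can be sketched with a toy, one-dimensional example. This is illustrative only: real deepfake systems use deep convolutional networks and large media datasets, and every value and name below is our own invention. Here the "real data" is just a cluster of numbers, the generator is a linear map from noise, and the discriminator is a logistic classifier; the two are trained adversarially, exactly as in the GAN scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: scalar samples centred at 4.0 (stand-in for genuine media).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

# Generator: linear map from noise z to a "fake" sample x = w_g*z + b_g.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression giving P(sample is real).
w_d, b_d = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    real = sample_real(batch)
    fake = w_g * z + b_g

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0
    # (manual gradients of the standard binary cross-entropy loss).
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * np.mean((p_real - 1.0) * real + p_fake * fake)
    b_d -= lr * np.mean((p_real - 1.0) + p_fake)

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    p_fake = sigmoid(w_d * fake + b_d)
    dL_dx = (p_fake - 1.0) * w_d   # gradient of -log D(G(z)) w.r.t. fake
    w_g -= lr * np.mean(dL_dx * z)
    b_g -= lr * np.mean(dL_dx)

fake_mean = np.mean(w_g * rng.normal(0.0, 1.0, 1000) + b_g)
print(f"generated mean after training: {fake_mean:.2f} (real mean = 4.0)")
```

After training, the generator's output distribution drifts toward the real data, even though the generator never sees the real samples directly; it learns only through the discriminator's feedback. This adversarial feedback loop is what lets full-scale GANs produce the highly realistic synthetic faces and voices discussed above.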


Recently, the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE) issued cautionary notices after deepfake videos falsely depicting their CEOs giving stock/investment recommendations circulated online. In politics, deepfake videos caused confusion during the Lok Sabha elections, with a fake video of Bollywood actor Ranveer Singh criticizing a political party going viral, and a deepfake of Aamir Khan appearing to support a specific party. These incidents highlight the technology's potential to spread misinformation, influence public opinion, and impact democratic processes.

Personality rights of celebrities and high-profile individuals are protected under IP laws. In Indian jurisprudence, publicity rights were first recognized in the Auto Shankar case [1], where the Supreme Court recognized a person’s right to control the commercial use of their identity. Personality rights attach to those who have attained the status of a celebrity [2], and infringement of the right of publicity requires no proof of falsity, confusion, or deception, especially when the celebrity is identifiable [3]. While the voice, face, and other personal attributes fall within the ambit of personality rights, they are not distinctly recognized in Indian jurisprudence; rather, they attract the application of copyright and trademark laws. This inference is supported by Anil Kapoor v. Simply Life India & Others, in which the court examined the unauthorized use of actor Anil Kapoor's persona, including deepfake videos and merchandise featuring his image without consent, and ruled that this unauthorized use violated his personality and publicity rights, emphasizing that celebrities' livelihoods often depend on endorsements and public image. Justice Prathiba M. Singh's decision reinforced that celebrities are entitled to legal protection against such unfair practices, which can infringe their rights and affect their earnings.


AI-created deepfakes may (i) infringe the moral rights of the celebrity; and/or (ii) amount to “passing off” under the existing laws. Passing off, in this context, means the unauthorized use of a person’s name, image, or other personal attributes to create a false association with any matter or material, with an intent to deceive the public. However, a passing-off claim may hold water only if goodwill is demonstrated, which in turn requires that the celebrity’s or individual’s goodwill have commercial implications in the specific jurisdiction.

Also, in India, the IP rights of performing artists are commonly assigned to production houses. Artists should therefore ensure that their contracts specify the rights granted, including whether the production houses may regenerate or reuse their performances using any GenAI solutions.


While the above-mentioned measures may address some of the potential challenges, in cases where deepfakes are created by unknown miscreants, affected individuals may have to consider other legal remedies under the following laws:


  1. The Constitution of India: Right to Privacy: Deepfakes infringe upon an individual's right to privacy by manipulating their identity, face, or features. Article 21 of the Constitution of India guarantees the right to life and personal liberty, which has been interpreted to include the right to privacy. Landmark judgements, such as Justice K.S. Puttaswamy (Retd.) v. Union of India, have cemented this right as fundamental.
  2. The Indian Penal Code, 1860: Several sections within the Indian Penal Code (IPC) may be used to address deepfakes, focusing on issues like defamation, forgery, sedition, and criminal intimidation.
  3. The Information Technology Act, 2000: The Information Technology (IT) Act provides additional avenues to address deepfakes, especially those involving computer-related offences and privacy violations. Section 66C of the IT Act penalizes identity theft, while Section 66D penalizes cheating by personation using a computer resource. Deepfakes used to cheat or impersonate fall within these provisions.
  4. The Digital Personal Data Protection Act, 2023: The Digital Personal Data Protection Act (DPDPA) exempts personal data that a "Data Principal" voluntarily makes publicly available, which creates a challenge when it comes to protecting against deepfakes. Under Section 3(c)(ii) of the DPDPA, such publicly shared data falls outside the scope of the Act's protective mechanisms. This poses a particular risk to celebrities, public figures, and anyone with a significant social media presence, whose publicly shared images or videos could be exploited to create deepfakes. As social media users increasingly share personal content, this exemption complicates efforts to enforce privacy rights and prevent the unauthorized manipulation of digital media. Closing this loophole through additional legal measures is needed to protect against the misuse of publicly available data.
  5. Intermediaries' Liability: The liability of intermediaries, such as social media platforms, is addressed under Section 79 of the IT Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Intermediaries must take down infringing content upon receiving notice or a court order. The rules require certain platforms to appoint personnel responsible for monitoring and identifying inappropriate content.


It is important that the use of AI to generate content, especially deepfakes, does not remain unfettered. Until the complexities posed by the intersection of AI and IPR are resolved by the legislature, it is reasonable to apply existing IPR principles and laws to address these novel issues in the courts of law.