This note, the third of a multi-part series on investing in the Indian artificial intelligence (“AI”) sector, discusses a set of advisories (“AI Advisories”) issued by India’s Ministry of Electronics and Information Technology (“MeitY”) with respect to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the “Intermediary Guidelines”).

In the previous note of this series, we outlined certain key regulatory developments in AI in India, including in respect of intermediary liability, digital competition and telecommunication law. In this note, we focus on the AI Advisories related to intermediary liability.

Background

The Intermediary Guidelines were issued under the Information Technology Act, 2000 (the “IT Act”).

Intermediaries

The IT Act defines an ‘intermediary’ as any entity that:

  • receives, stores, or transmits electronic records, messages, data, or other content (together, “Content”) on behalf of another entity; or
  • provides any service with respect to such Content.

The definition of ‘intermediary’ under the IT Act is broad enough to include telecommunications service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online auction sites, online marketplaces, social media platforms, e-commerce platforms, online gaming platforms, and similar entities.

Safe harbor

The safe harbor principle is contained in Section 79(1) of the IT Act. Because intermediaries act only as passive transmitters of user Content, they have been granted immunity from liability under the IT Act with respect to unlawful Content hosted on their platforms.

However, this safe harbor in respect of unlawful Content generated, uploaded and/or shared by users on intermediary platforms is available only to intermediaries that satisfy certain conditions. For instance, an intermediary should not initiate the transmission, select its receiver, or select or modify the Content being transmitted. Broadly, safe harbor benefits are contingent upon an intermediary exercising due diligence in complying with prescribed obligations in respect of hosting third-party information.

In terms of potential liability for AI-generated Content hosted on an intermediary’s platforms, it may be possible for such intermediary to argue that it is immune under the IT Act’s safe harbor principle as long as it can demonstrate compliance with the Intermediary Guidelines. However, the AI Advisories make several recommendations in respect of such compliance, some of which, although not legally binding, may impose additional obligations on intermediaries.

The Intermediary Guidelines

In 2011, the Information Technology (Intermediaries Guidelines) Rules, 2011 (the “2011 Rules”) were notified to provide clear due diligence requirements for intermediaries under Section 79 of the IT Act. The 2011 Rules also prohibited content of a specified nature on the internet and required intermediaries, such as website hosts, to block such prohibited content.

Subsequently, in 2021, the Intermediary Guidelines were notified, replacing the 2011 Rules. Key changes under the Intermediary Guidelines included additional due diligence requirements for certain types of intermediaries – including social media intermediaries and ‘significant social media intermediaries’. In addition, the Intermediary Guidelines introduced a framework for regulating the content of online publishers with respect to news, current affairs, and curated audio-visual content. The Intermediary Guidelines were further amended in 2022, including to establish grievance appellate committees and strengthen intermediaries’ due diligence obligations.

Thereafter, through a notification dated April 6, 2023, the MeitY notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (the “2023 Amendment Rules”). The 2023 Amendment Rules amended the Intermediary Guidelines, including in relation to (a) online gaming, (b) due diligence by online gaming intermediaries, and (c) grievance redressal in this regard. However, the validity of the 2023 Amendment Rules, which also empowered the government to identify ‘fake’ information through a ‘Fact Check Unit’ (“FCU”), was challenged before the courts. In a judgement delivered on September 20, 2024, by a third judge of the Bombay High Court (acting as a reference court following a split verdict), Rule 3(1)(b)(v) of the Intermediary Guidelines, as amended by the 2023 Amendment Rules, which empowered the central government to establish an FCU, was struck down.

The proposed Digital India Act

The IT Act is expected to be replaced soon by the proposed “Digital India Act”. Since the Intermediary Guidelines were framed and issued under the IT Act, the proposed Digital India Act and its corresponding rules may replace the Intermediary Guidelines as well. Reports suggest that, until the rollout of the Digital India Act, the MeitY may further amend the Intermediary Guidelines in connection with AI.

Meanwhile, the Indian government has been actively considering the advisability of issuing a dedicated regulation with respect to AI. Such regulation may potentially be introduced through a separate chapter (or via specific provisions) of the Digital India Act. According to pre-election statements issued by the MeitY, a draft regulatory framework on AI may be released over the next few months. For an analysis of key themes which could be included in the Digital India Act, see our note here.

Going forward, the newly elected central government is likely to continue the ‘Digital India’ drive, including by funding and supporting various initiatives related to AI and emerging technologies.

Revised intermediary framework and new technologies

Media reports from 2023 suggested that the Digital India Act would distinguish between types of intermediaries and impose varied responsibilities on them based on their business models. Accordingly, it is possible that AI tools or AI-enabled platforms will be treated as an independent category and regulated differentially under the Digital India Act itself.

In that regard, the Digital India Act may become the country’s default law for technology-related issues, including with respect to online, digital, and social media platforms, as well as devices and internet-based applications which rely on new technologies. The development and deployment of such new technologies may be subject to rigorous requirements, including by subjecting high-risk AI systems to quality-testing frameworks, algorithmic accountability, threat and vulnerability assessments, as well as content moderation.

Statements on deepfakes

Concerns about ‘deepfakes’ (i.e., synthetically generated or manipulated audio-visual content impersonating real persons) affecting the 2024 parliamentary elections prompted the MeitY to address AI-related issues through advisories issued under the Intermediary Guidelines.

Starting in November 2023, the MeitY issued multiple statements and proposals on regulating deepfakes, along with several advisories, and engaged in dialogues with social media and other internet platforms in respect of intermediary liability. When questioned in Parliament about the transmission of deepfake images on social media platforms, the MeitY maintained that the current legal regime under the IT Act is adequate to address existing issues related to deepfakes.

AI-related Advisories

In December 2023, the MeitY issued an advisory (the “December 2023 Advisory”) advising intermediaries to adhere to due diligence obligations under the Intermediary Guidelines, including with respect to:

  1. communicating rules, regulations, privacy policies and user agreements in a user’s preferred language;
  2. making reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating or sharing any information related to listed user harms or prohibited content; and
  3. identifying and promptly removing misinformation, false or misleading content, as well as any material that impersonates others – such as deepfakes.

On March 1, 2024, building on the December 2023 Advisory, the MeitY issued another advisory (the “March 1 Advisory”) with specific reference to due diligence requirements under the Intermediary Guidelines. Among other things, the March 1 Advisory advised all intermediaries and platforms to (i) label any under-trial/unreliable AI models, and (ii) secure explicit prior approval from the government before deploying such models in India.

In social media posts made shortly after the issuance of the March 1 Advisory, the then Minister of State at the MeitY clarified that (i) such advisory was aimed only at ‘significant’ and ‘large’ platforms and would not apply to ‘start-ups’; and (ii) legal consequences under existing laws for platforms that enable or directly produce ‘unlawful content’ – as specified under the Intermediary Guidelines – would continue to apply, subject to the safe harbor principle of the IT Act and the rules thereunder (such clarification, the “First MeitY Clarification”).

Meanwhile, the Union Minister heading the MeitY further clarified that (i) the March 1 Advisory applied to AI models made available on social media platforms, and not to AI models deployed in sectors such as agriculture or healthcare; and (ii) the March 1 Advisory did not have binding effect (such clarification, the “Second MeitY Clarification”).

In general, the March 1 Advisory remained consistent with the December 2023 Advisory, which focused on misinformation under the Intermediary Guidelines, as well as on concerns related to deepfakes, grievance redressal and prohibited content. However, given its perceived ambiguities and related compliance difficulties, the March 1 Advisory was superseded by a follow-up advisory issued on March 15, 2024 (the “March 15 Advisory”). The March 15 Advisory replaced the March 1 Advisory without modifying the December 2023 Advisory.

Key recommendations under the March 15 Advisory

  • Intermediaries have been advised to ensure that the use of AI models, large language models (“LLMs”), generative AI (“Gen AI”) technology, software or algorithms (collectively, “Restricted AI”), on or through their platforms and computer resources, does not permit:
      - users to host, display, upload, modify, publish, transmit, store, update or share any content in violation of the Intermediary Guidelines, the provisions of the IT Act, or any other law in force; and
      - any bias or discrimination, or any threat to the integrity of the electoral process.
  • Restricted AI, including foundation models, that is under-tested, ‘unreliable’ or under development should be made available to Indian users only after labeling the possible inherent fallibility or unreliability of the generated output, e.g., through a ‘consent popup’ or equivalent mechanism which explicitly informs users about the fallibility or unreliability of such AI-generated output.
  • Any content generated through synthetic creation/modification of text or audio-visual information using an intermediary’s resources (potentially facilitating misinformation or deepfake content) should be labeled or embedded with permanent unique metadata or identifiers, so as to enable identification of the fact that such information was created using the computer resources of that intermediary. Further, if any changes are made by a user, the metadata should be configured so as to enable identification of the user (or computer resource) that effected such change(s) – see the illustrative sketch after this list.
  • Non-compliance with the provisions of the IT Act and/or the Intermediary Guidelines could result in consequences, including prosecution under the IT Act and other criminal laws for intermediaries, platforms and their users.

March 15 Advisory vs. March 1 Advisory

While retaining certain elements from the March 1 Advisory, the March 15 Advisory removed key requirements contained in the former, such as those related to prior government approval and submission of an action taken-cum-status report. This removal is expected to ease the obligations of those intermediaries and platforms which make AI models available to users in India. However, the scope of prohibited content has been expanded to include all content considered ‘unlawful’ under any law in force – as opposed to content deemed unlawful only under the IT Act and/or the Intermediary Guidelines.

Further, the March 15 Advisory appears to have extended the scope of due diligence requirements to all intermediaries and platforms – as opposed to only ‘significant’ and ‘large’ platforms, as informally clarified through the First MeitY Clarification. In addition, while the requirement of being able to identify the creator or first originator of misinformation/deepfake content has now been removed, the March 15 Advisory nonetheless stipulates that, if changes are made by a user, the metadata should be configured to enable the identification of such user or the computer resource responsible for such modification.

It thus appears that labeling requirements with respect to misinformation and deepfakes, as contained in the March 1 Advisory, have now been extended to include the identification of the user or computer resource that causes changes to such data.

Final comments and key takeaways

Despite the changes made to the March 1 Advisory following criticism from industry experts, questions about the implementation of the March 15 Advisory persist. These include the meaning of certain terms, such as “under-tested” or “unreliable” AI, “bias”, “discrimination” and “inherent fallibility”, as well as the manner in which adherence to the advisory is to be achieved and monitored – such as the acceptable forms of labeling which intermediaries are expected to follow.

Further, the term ‘platform’ is not defined under the IT Act or the Intermediary Guidelines. Similarly, the terms ‘significant and large platforms’ and ‘start-ups’, as mentioned in the First MeitY Clarification, are not defined under the IT Act or the Intermediary Guidelines.

By contrast, ‘significant social media intermediaries’ are defined under the Intermediary Guidelines (i.e., social media intermediaries with a registered user base above a government-specified threshold – currently five million users). In that regard, the Second MeitY Clarification, as well as subsequent statements issued by MeitY officials, suggest that the AI Advisories apply only to entities that satisfy such definition. However, this understanding has not been officially incorporated into the March 15 Advisory or subsequently clarified.

While the MeitY sent the AI Advisories to a few specific significant social media intermediaries, it remains unclear whether, and to what extent, the March 15 Advisory is legally binding – the Second MeitY Clarification, which indicated that the advisory was not legally binding, was issued in the context of the March 1 Advisory only.

Furthermore, the language of the March 15 Advisory is broad enough to cover all kinds of AI tools and AI-generated content, as may be used/generated by different users for various purposes. Such broad-based application could have significant consequences for all types of players in the AI space since the standards of due diligence that relevant intermediaries need to abide by with respect to AI technologies appear to be high.


This insight has been authored by Rachael Israel and Dr. Deborshi Barat from S&R Associates. They can be reached at [email protected] and [email protected], respectively, for any questions. This insight is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.