According to a report by the International Data Corporation, India’s artificial intelligence (“AI”) market is expected to reach USD 7.8 billion by 2025, growing at a compound annual growth rate of 20.2% from 2020. There is growing investor interest in technology companies in India, including start-ups, that are building products and services using AI and machine learning (“ML”).
In terms of M&A and PE activity in this space, potential target companies may include those which:
- build AI systems or develop associated technologies;
- are involved in research and development (“R&D”) related to AI;
- generate revenue from a product or service driven mainly by AI/ML-based algorithms or models;
- primarily have AI-enabled business models; or
- perform discrete activities within, and/or otherwise operate across, the AI value chain.
For a discussion on AI-related opportunities in India, especially in the context of creating ‘synthetic’ (i.e., artificially generated) content through the use of AI/ML, see our note here.
While investments in the AI sector present significant opportunities, they also present a unique set of risks. Investing in an AI company in India requires careful attention to an evolving legal and regulatory landscape, coupled with several industry-specific concerns. In this note, the first of a multi-part series on AI, we discuss some of the legal and practical issues that need to be considered by prospective investors and acquirers while contemplating investments into Indian AI companies.
The next few notes in this series will outline some of the key regulatory developments in AI in India, including in respect of intermediary liability, digital competition and telecommunications law. We will also discuss the interface between India’s new personal data protection regime and AI.
Intellectual Property Protection
An AI company’s ability to safeguard intellectual property (“IP”) should be a critical consideration while evaluating investment opportunities. AI systems rely on algorithms, data and software, all of which can be protected through various IP rights.
Patents
Patents can be obtained to protect the novel and inventive aspects of an AI system’s algorithms and underlying technology. As part of due diligence, an investor should review the target company’s existing patents and pending applications in India. Specifically, the investor should confirm that key AI innovations are protected by patents under Indian law (e.g., training algorithms; optimization methods that make existing models better or cheaper; niche use-cases or specialized AI applications).
However, the patentability of AI-generated inventions is a complex and evolving area of law, including in India. The Indian Patent Office has not yet provided clear guidance on whether an AI system can be named as an inventor. This is an important consideration, as it may impact the ownership and enforceability of any patent related to AI technology. For a discussion on AI-generated inventions, see our note here.
Copyrights and trademarks
Copyrights can protect the source code and other creative elements of the AI software. Trademarks may be used to protect the branding and identity of the AI company and its products. Accordingly, an investor should ensure that the target company’s software and related content are protected by copyright. An assessment of the extent to which the AI system’s algorithms rely on open source software would also be useful. Further, the strength of the company’s trademarks, as well as the status of registration, should be examined.
Trade secrets
Trade secrets can safeguard the confidential information and know-how underlying the AI system. In that regard, a prospective investor should evaluate the measures taken by the target company for the purpose of protecting its trade secrets, including through non-disclosure agreements and internal security protocols.
IP portfolio assessment
Investors should carefully review the AI company’s IP portfolio and ensure that relevant rights are properly secured, registered, and enforceable in India. Any gaps or vulnerabilities in IP protection should be addressed prior to the investment or acquisition. Further, an investor should understand the practices and timelines of the Indian Patent Office, which may differ from those of other jurisdictions. Clear ownership of the target’s IP portfolio is important: the investor should ensure that all IP created by employees and contractors has been properly assigned to the company, including through waivers of moral rights. As part of its IP due diligence, an investor should also confirm general matters such as any actual or alleged infringement of the target company’s IP by third parties, any allegations that the target company has infringed third-party IP, and any use of the target company’s IP by related parties.
Data Rights and Compliance
AI systems rely heavily on data – including personal information – and the data used to train AI models is a critical asset for AI companies. Investors should conduct due diligence to ensure that the target has the necessary rights to collect and use such data. This includes verifying that the target company has (i) obtained valid consents from relevant individuals and entities; (ii) complied with personal data protection laws, including purpose limitation and data minimization principles; and (iii) established appropriate data governance and security measures to prevent unauthorized access to data and data theft.
At present, certain provisions of the Information Technology Act, 2000 (the “IT Act”), including Section 43A and the rules framed thereunder (collectively, the “IT Rules”), are relevant for data-related compliances in India.
Such provisions of the IT Rules will soon be replaced by the Digital Personal Data Protection Act, 2023 (the “DPDP Act”). The DPDP Act was passed by the Indian Parliament and published in the official gazette in August 2023. Further, the IT Act is expected to be replaced by the proposed “Digital India Act” in the coming months.
Investors in AI should assess the target company’s compliance with these regulations, including with respect to potential liabilities for non-compliance. In addition, the use of data for AI development may be subject to sector-specific regulations, such as those governing the healthcare, financial, or telecommunications industries. The target company should have the necessary licenses, approvals, and consents to collect, store, process and use data in compliance with these regulations.
For an overview of the DPDP Act, including on issues related to data governance and consent management, see our notes here.
Regulatory Landscape and Compliance
Industry-specific regulations
AI applications in India span various industries, each of which is likely to have its own regulatory framework. For instance, an AI model that offers personalized medicine and predictive diagnostics may need to comply with applicable healthcare regulations – such as the Clinical Establishments (Registration and Regulation) Act, 2010, which requires a patient’s electronic medical records to be maintained, or the draft Digital Information Security in Healthcare Act, 2018, which seeks to govern data security in healthcare services and protect the confidentiality of digital health data.
FDI
India’s foreign direct investment (FDI) policy contemplates prior government approval for investments (including FDI above specified thresholds) in sectors that involve national security, strategic interests or regulatory concerns. India’s Press Note 3 of 2020 (see here and here) seeks to protect domestic sectors (including defense, telecom and IT) by scrutinizing inbound investments from certain neighboring countries (for a discussion, see our notes here and here). AI-related FDI may be subject to government review based on the sectoral application.
AI regulations
India currently lacks a dedicated regulatory framework for AI. For a discussion on the challenges and considerations with respect to AI regulation in India, see our note here. For a discussion on India’s past initiatives on regulating AI, see our note here.
However, the Indian government is actively considering specific regulation of AI, potentially through the proposed Digital India Act. The Ministry of Electronics and Information Technology (“MeitY”) has recently confirmed that the Indian government may regulate high-risk AI to protect online users from harm. Until such a law is rolled out, the MeitY may further amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the “Intermediary Guidelines”) with respect to AI – including for the purpose of regulating ‘deepfakes’ and other unlawful content. For analyses on deepfakes and synthetic content, see here, here and here.
India may adopt a light-touch approach to AI regulation to allow room for innovation, including through exemptions for start-ups. However, the EU’s AI Act is likely to have significant consequences for companies in non-EU countries (including India) on account of the law’s extraterritorial application. For a broad overview of the global implications of the EU’s AI Act, including on Indian entities in the AI global supply chain, see our note here.
Further, the MeitY, the Telecom Regulatory Authority of India, NITI Aayog (the government’s policy think tank) and other agencies of the Indian government issue guidelines and recommendations from time to time with respect to the development and deployment of AI. Other government departments, such as those dealing with consumer affairs, may also formulate their own policies on AI use in the future.
Importantly, between December 2023 and March 2024, the MeitY issued a set of advisories on AI under the Intermediary Guidelines. Investors should review the target company’s compliance with respect to such advisories, which cover areas such as:
- Ensuring that AI systems do not enable unlawful content, bias, or discrimination
- Labeling and identifying AI-generated content, such as deepfakes
- Implementing appropriate security and privacy measures
- Complying with sector-specific regulations (e.g., healthcare, finance, telecommunications)
Operational and Contractual Considerations
Investors should also review the AI company’s operational and contractual arrangements.
Consents and licenses
The AI company should have obtained all necessary consents and licenses to operate its business, including for the use of third-party software, data and other IP. Investors should verify the validity and enforceability of these consents and licenses.
Insurance
The AI company should have appropriate insurance coverage for risks associated with its business, such as errors, omissions, security and privacy breaches, as well as regulatory issues. Investors should review the adequacy of such insurance policies.
Vendor and customer contracts
The AI company’s contracts with vendors, suppliers and customers should be carefully reviewed to ensure that such contracts adequately address issues such as data rights, liability, and indemnification. Any contractual limitations or risks should be identified and mitigated.
Talent and employment
The AI company’s ability to attract and retain talent in a rapidly evolving field is crucial. Investors should review the company’s employment contracts, compensation structures, and employee retention strategies.
Technical Due Diligence
Technical due diligence is crucial for understanding the viability and potential of the target company’s technological capabilities.
Technology assessment
Investors should evaluate the target company’s technology, including its scalability, robustness, and competitive advantage.
Development roadmap
Further, investors need to review the target company’s development roadmap to ensure that it aligns with market needs and investor expectations.
R&D practices
Investors must assess the target company’s R&D practices, including its innovation pipeline and collaboration with research institutions.
Potential Liabilities and Risks
AI companies face unique liability risks, including those related to the underlying technology. Investors should carefully consider such risks before investing in or acquiring an AI company in India.
Product and professional liability
As AI systems become more autonomous and make decisions with real-world consequences, the issue of liability for such decisions becomes increasingly complex. Investors should understand the potential legal risks and ensure that the AI company has appropriate risk management practices and insurance coverage in place to mitigate product and professional service liability risks, including through rigorous testing and validation of AI systems.
Algorithmic bias and discrimination
AI systems can perpetuate or amplify biases present in the training data or the algorithms themselves, leading to discriminatory outcomes. For instance, algorithmic biases could impact the lending decisions of fintech companies. As these could have legal and reputational implications, investors should assess the AI company’s measures to identify and mitigate such biases, including through diverse datasets and regular bias testing. They should also evaluate whether the company adheres to an ethical AI framework that promotes fairness and accountability.
Cybersecurity and data breaches
AI companies need to safeguard the data which they collect and process. Further, AI systems can be vulnerable to cyberattacks, which can lead to data breaches and other security incidents. Accordingly, investors should evaluate the AI company’s data security protocols (including encryption and access controls), cybersecurity measures and incident response plans.
Market risks
The AI market in India is dynamic and competitive. Investors need to analyze this competitive landscape to understand the company’s market position and potential threats. They should also monitor regulatory changes that could impact the company’s operations and market opportunities.
Reputational risks
The use of AI technology, especially in sensitive domains like healthcare or finance, can carry significant reputational risks if the technology fails, or produces unintended consequences. Investors should assess the AI company’s risk management and public relations strategies.
Transparency and explainability
AI models should be transparent and explainable to build trust and ensure accountability. Investors should check if the target company’s AI models are designed to be explainable, particularly with respect to regulated industries. They should also assess the company’s practices for maintaining transparency, including clear documentation related to AI development processes and decision-making criteria.
Other considerations and conclusion
As the new frontier of business innovation across the world, AI is poised to shape the future of human work. The risks associated with investments in this sector will vary based on the scope of the AI solution, as well as the sector in which it is deployed; however, certain common risks can be addressed by undertaking a comprehensive due diligence exercise.
Investors may be more inclined to fund AI companies that are able to demonstrate a clear and well-planned policy for protecting their IP rights. Protected IP can provide a clear competitive advantage in the AI market, where exclusive rights can help companies to (i) maintain their market position, (ii) charge premium rates, and (iii) establish themselves as market leaders in niche areas. Further, AI companies can monetize IP assets through licensing agreements, allowing other companies to use their AI technologies for a fee. This can be a promising revenue stream and further expand their market reach.
Investors may also draw comfort from AI companies that have prepared a litigation strategy for defending themselves in potential court proceedings. Unresolved legal issues in AI include the limits of copyright protection and fair use; whether AI-generated inventions can be patented; who owns the rights to AI-generated works; and how to attribute authorship or inventorship when an AI system autonomously creates or invents something new. AI companies should work closely with IP experts who specialize in AI-related issues.
Investors should ask for comprehensive representations and warranties from the target company regarding its AI technology, IP and regulatory compliance, including in respect of data processing and security safeguards. Investors should review all IP licensing agreements to ensure that the company has all necessary rights to use third-party technology and data, and verify that the target’s data licensing agreements: (i) comply with India’s personal data protection regime, and (ii) protect the company from liability. While they would be well-advised to independently keep track of regulatory developments in the sector, investors should also seek ongoing covenants to ensure that the target company is contractually bound to comply with applicable AI regulations as may be notified or updated from time to time.
Please look out for our second note of this series, where we will outline the key regulatory developments in AI in India.
This insight has been authored by Rachael Israel and Dr. Deborshi Barat from S&R Associates. They can be reached at [email protected] and [email protected], respectively, for any questions. This insight is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.