Over the last decade, and more so in recent years, there has been rapid advancement in the capabilities of Artificial Intelligence (“AI”), with the potential to transform society and its functioning. However, this has also highlighted the need to regulate AI’s development, deployment, and governance to mitigate the associated risks, which can perpetuate or exacerbate inequitable outcomes. As a result, legislative action on AI is increasing globally. The European Union has proposed a draft AI Act (accessible here), which takes a risk-based approach, calibrating obligations to the risk level of an AI system. By contrast, the United States’ Blueprint for an AI Bill of Rights (accessible here) takes a voluntary (for now), principle-based approach to regulating AI, grounded in democratic values. Even China’s internet watchdog has solicited comments on its draft measures for regulating generative AI (accessible here), which require training data sets to align with socialist values and mandate a security assessment.


In India, NITI Aayog has released a series of papers on #ResponsibleAI, recommending that India adopt overarching AI ethics principles to guide the design, development, and deployment of AI in the country (accessible here). NITI Aayog has also elaborated on the principles of safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and reinforcement of positive human values (accessible here). Notably, in a recent response to a starred question in the Indian Parliament (Lok Sabha), the Minister of Electronics and Information Technology stated that the Government is not presently considering regulating AI in India (accessible here); however, India may introduce AI regulation through the upcoming Digital India Act, following an approach similar to that taken by the EU.


Guardrails foster beneficial innovation while addressing the legitimate concerns about harm that arise from unrestricted and exponential technological progress. Drawing from this global literature and the ‘golden triangle’ of Articles 14, 19, and 21 of the Indian Constitution, certain elemental principles are set out below that stakeholders in India should adopt when developing, deploying, or governing AI, so that the resulting AI products and applications are ‘trustworthy’ and ‘responsible’ and can be released in domestic and global markets. Adopting these principles should help Indian businesses and entrepreneurs venturing into this domain remain resilient to changes in the international or domestic regulatory environment.


I. Fairness

 

AI systems must be designed and deployed with fairness in mind, including concerns for equality and equity in the context of issues such as bias and discrimination. Discriminatory practices can occur when AI technology negatively differentiates treatment based on characteristics such as caste, religion, colour, ethnicity, sex (including pregnancy, childbirth, medical conditions, gender identity, and sexual orientation), age, national origin, disability, genetic information, or any other protected classification. Such practices may amplify harmful historical and social treatment.

 

The concept and standards of fairness can be complex and challenging to define, as perceptions of fairness vary significantly across cultural perspectives and can be further influenced by the specific application of the technology. When developing and deploying AI, it is essential to ensure that any data used constitutes a correct and relevant data set with respect to the AI system’s intended deployment. However, it is important to recognize that simply mitigating biases does not necessarily equate to fairness in AI. An AI system that produces balanced predictions across demographics may still exclude individuals with disabilities or be impacted by the digital divide.

 

Protection against algorithmic discrimination should include a proactive equity assessment, the use of representative data, and safeguards against reliance on proxy attributes that are highly correlated with protected demographic features. An AI system must be tested before it is sold or used to ensure that it is free from algorithmic discrimination, and it must treat all individuals in the same circumstances equally.
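
By way of illustration, one narrow, quantitative check that could form part of such pre-deployment testing is a demographic parity measurement. The sketch below is minimal and hypothetical (the `group`/`outcome` record format and the credit-decision example are assumptions for illustration); a genuine equity assessment is considerably broader.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Compute the largest gap in favourable-outcome rates across groups.

    `records` is a list of (group, outcome) pairs, where outcome is 1
    for a favourable decision (e.g. a loan approval) and 0 otherwise.
    A large gap is a red flag for algorithmic discrimination, although
    a small gap alone does not establish fairness.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical test-set decisions from a credit model.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)                      # per-group approval rates
print(f"parity gap = {gap:.2f}")  # flag for review above a chosen threshold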


II. Transparency and Accountability

 

Several measures should be taken to ensure transparency in the use of AI. First, individuals must be informed when they are interacting with an AI system or when their emotions or characteristics are being recognized through automated means. Second, if AI is used to generate or manipulate image, audio, or video content that resembles authentic content, there should be an obligation to disclose that the content is AI-generated.


Logging capabilities should be incorporated into the AI system to ensure traceability of its functioning throughout its lifecycle, especially in critical sectors such as healthcare, power, defence, finance, and public infrastructure. ‘Explainability’ and ‘interpretability’ are important to understanding the mechanisms underlying an algorithm’s operation and the meaning of the AI system’s output. Explainable systems can be more easily debugged and monitored, and lend themselves to more thorough documentation, audit, and governance. Risks to interpretability can often be addressed by describing why an AI system made a particular prediction or recommendation. If an AI directly enables harm that was reasonably foreseeable, those responsible for the AI can potentially be held liable; claiming ignorance of, or inability to determine, how the harmful output was produced cannot be a tenable defence.
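
As a simple illustration of the logging point, a deployer might wrap every model call so that the input, output, and model version are recorded for later audit. The sketch below assumes a generic `predict` interface and JSON-serializable inputs and outputs; it is not tied to any particular framework.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

class AuditedModel:
    """Wraps a model so every prediction leaves a traceable audit record."""

    def __init__(self, model, model_version):
        self.model = model
        self.model_version = model_version

    def predict(self, features):
        prediction = self.model.predict(features)
        # Record enough context to reconstruct how an output was produced.
        logging.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input": features,
            "output": prediction,
        }))
        return prediction

# Usage with a hypothetical stand-in model.
class DummyModel:
    def predict(self, features):
        return {"score": 0.87}

audited = AuditedModel(DummyModel(), model_version="v1.2.0")
audited.predict({"age_years": 41, "loan_amount": 500000})
```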


III. Safety and Governance


To ensure the responsible development and use of AI, it is essential to establish clear governance structures and procedures. Responsibility for risk mitigation, incident response, and potential rollback should rest at a senior management level within the organization, so that prompt decisions can be made.


AI systems should be designed so that natural persons can oversee their functioning, with appropriate measures identified by the developer or deployer before the AI is placed on the market or put into service. These measures should ensure that the AI is subject to operational constraints that it cannot override and that remain responsive to a natural person. Additionally, the individuals assigned to oversee the system should have the necessary competence, training, and authority to perform their roles. Human oversight aims to prevent or minimize the risks to health, safety, or fundamental rights that may arise when the AI system is used as intended or under reasonably foreseeable conditions of misuse.
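
One minimal way such an operational constraint might be expressed in code is a gate that refers low-confidence decisions to a human reviewer and is enforced outside the model, so the AI cannot bypass it. The threshold and the `request_human_review` callback below are hypothetical, offered only as a sketch of the pattern.

```python
CONFIDENCE_THRESHOLD = 0.90  # hypothetical; set per domain-specific standards

def decide(model_output, confidence, request_human_review):
    """Return an automated decision only when confidence is high enough;
    otherwise defer to a natural person with authority to override."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return model_output
    # The deferral path lives outside the model, so it cannot be
    # overridden by the AI system itself.
    return request_human_review(model_output, confidence)

# Example: a 0.72-confidence output is routed to a reviewer queue.
decision = decide("approve", 0.72, lambda out, conf: "pending_human_review")
print(decision)
```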

 

Furthermore, AI systems should be developed in consultation with diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts. Pre-deployment testing, risk identification and mitigation, and ongoing monitoring should be conducted to ensure the system is safe and effective for its intended use, and to address potential risks and concerns that may arise beyond the intended use, with adherence to domain-specific standards.

 

Finally, to ensure that the AI system is fair, honest, and impartial, its design and functioning should be documented and made available for external scrutiny and audit to the extent possible. It may be appropriate to conduct an independent ethics review before deployment. Under no circumstances should an AI cause physical or psychological harm, or lead to a state in which human life, health, property, or the environment is endangered, consistent with the notion of safety described in ISO/IEC TS 5723:2022.

 

IV. Data Protection


AI systems should prioritize privacy protection through the principles of “privacy by design” and “privacy by default”, and should seek consent from individuals for the collection, use, access, transfer, and deletion of their data. Consent requests should be clear, concise, and presented in plain language, including in the languages listed in the Eighth Schedule of the Constitution of India.

 

Data protection principles such as built-in privacy protections, data minimization, use and collection limitations, and transparency should be integrated into AI systems. This is especially important where AI systems are used to make decisions that can affect people’s lives, such as granting loans or employment, since inaccurate or faulty data collection risks producing adverse decisions.
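
Data minimization can be made concrete at the point of collection by retaining only the fields a stated purpose requires. The field names and the loan-eligibility purpose below are illustrative assumptions, not a prescribed schema.

```python
# Fields actually needed for a hypothetical loan-eligibility purpose.
ALLOWED_FIELDS = {"income", "employment_years", "existing_debt"}

def minimise(raw_record):
    """Keep only the fields required for the stated purpose,
    discarding everything else at the point of collection."""
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

applicant = {
    "income": 950000,
    "employment_years": 6,
    "existing_debt": 120000,
    "religion": "...",   # irrelevant to the purpose; dropped
    "caste": "...",      # irrelevant to the purpose; dropped
}
print(minimise(applicant))  # only the three allowed fields remain
```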

 

AI systems that collect, use, share, process, or store sensitive personal data or information should undergo a thorough ethical review and ongoing monitoring. This includes data relating to health, disability, biometrics, behaviour, interactions with the criminal justice system, family information, and minors, as well as any data that could expose individuals to significant harm, such as loss of privacy or financial harm. Such data, even in derived form, should not be sold, shared, or made public as part of data brokerage agreements.

 

In the ever-evolving world of artificial intelligence, one-size-fits-all regulation is simply not feasible. Given AI’s versatility and its myriad forms, each with a unique risk profile, the path to effective regulation is a winding one. However, businesses in India would be remiss to let regulatory uncertainty leave them in the dust of the AI race. To remain sustainable and competitive in the long run, they must navigate the complex landscape of AI governance with diligence and foresight.


Authored by: Mr. Nakul Batra, Partner and Ms. Aankhi Anwesha, Associate – DSK Legal