Introduction
On August 13, 2025, the Reserve Bank of India (“RBI”) released the ‘Free-AI Committee Report’ (“FREE AI Report”), which deals with the use and development of artificial intelligence (“AI”) models by entities regulated by the RBI (such entities, “REs”).
Earlier, in December 2024, the RBI had constituted a committee to (i) analyze the use of AI in the Indian financial sector, and (ii) recommend a sector-specific framework for the responsible and ethical enablement of AI models (such framework, “FREE AI”). Following the release of the FREE AI Report, preliminary assessments suggest that the proposed framework may require REs to undertake significant investments and operational changes, including with respect to new governance structures and capacity-building measures.
Banks – a key RE category – have been integrating AI models in their business operations for some time. However, such models are largely unregulated due to the absence of dedicated AI regulation in the country. In that regard, the FREE AI Report discusses the basis for regulating AI use in the Indian financial sector in the future, pursuant to principles of transparency, responsibility, and ethics.
Background
With respect to regulating AI, India’s stance has fluctuated between non-intervention (to facilitate growth) and caution (to protect AI users from harm). At present, it appears that India has decided to maintain a pro-innovation approach to promote the development and deployment of safe, responsible, and trustworthy AI models for the purpose of enhancing the quality of life for all. For a broad overview of Indian regulatory developments on AI, see our note here. For a discussion on the challenges associated with AI regulation, including for the purpose of balancing promotion with protection, see our notes here and here.
In March 2024, the Government of India launched the IndiaAI mission with an outlay of INR 103.72 billion. This was followed by an advisory issued by the Ministry of Electronics and Information Technology (“MeitY”), which required intermediaries to comply with due diligence obligations to ensure user safety (for a detailed discussion on the MeitY advisory, see our note here). Separately, in October 2023, an expert group constituted by the MeitY released the first edition of its India AI Report, which laid the groundwork for developing an AI ecosystem in India while ensuring proper regulation based on high standards of governance, protection of intellectual property rights, and deployment of responsible AI systems (for an overview of key AI-related risks and considerations, see our note here).
AI in the Financial Sector
Financial sector firms, both globally and in India, have adopted AI more quickly than firms in other industries. Such adoption has extended to back-end and front-facing operations, including those related to sales, customer and employee experiences, fraud and risk management, and technology development.
There are several benefits which are expected to arise from AI integration in the Indian financial sector. Such integration is likely to enhance credit access, especially for people situated in remote areas. In addition, AI-powered chatbots can reduce the cost of operations for financial sector entities and ensure seamless and reliable communications. Further, AI can be employed by financial institutions for cybersecurity – especially relevant in light of India’s robust digital payments ecosystem. Lastly, AI can assist REs to fulfil their regulatory obligations, including those related to Know-Your-Customer (KYC).
However, an excessive reliance on AI without adequate supervision could pose threats, including those stemming from the potentially exclusionary nature of AI models on account of skewed training data, leading to issues of accessibility and discrimination. The FREE AI Report notes that AI models are ‘western-centric’ and largely based on the English language. However, the customers of REs hail from diverse regions of the country and may not be proficient in English. Accordingly, bespoke AI models need to be developed for customers who speak other languages. While there exist strong reasons to build ‘Indic’ AI models for critical sectors such as national security and defence, such models have not gained much traction in India (including on account of data scarcity in local languages), despite several Government-led initiatives. Nevertheless, such efforts continue, and recent achievements in this regard may be replicated in the future, including for the financial sector.
Further, AI models change constantly and rapidly, evolving with the data on which they are trained and updated. In the financial sector in particular, if the back-end team does not update a model with the latest financial and economic information, the model may produce outdated or incorrect output.
In addition, the use of AI would require financial institutions to partner with third-party entities for the provision of technology-related services to operate and maintain AI systems. Such partnerships and/or contractual arrangements are likely to expose personal data to additional privacy risks, potential breaches, and/or instances of unauthorized processing, including with respect to certain sensitive information relating to customers. In this regard, for an overview of India’s new data protection regime comprising the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and its draft rules, see our notes here and here. For a discussion on contractual arrangements with third-party entities for data processing, see our note here. Further, cybersecurity concerns may be exacerbated through manipulations of training datasets, leading to erroneous decision-making.
Finally, there are ethical and economic concerns involved in unrestrained AI dependence. AI models in finance might be biased towards or against a specific group or type of customers, including those with a certain profile, background, other demographic features, and/or income range. In such situations, decisions made pursuant to ill-fitted assumptions might lead to unreasonable discrimination and even financial loss.
Regulating AI in the Financial Sector
In the FREE AI Report, the RBI has proposed a two-pronged approach to regulate AI in the financial sector: amendments to existing regulations and new AI-specific rules. In Annexure IV of the FREE AI Report, the RBI has provided suggestions to expand upon seven existing master directions to include AI regulation within their scope. Such master directions deal with, among other things, cybersecurity, digital lending, customer service, fraud detection, information technology (IT) governance, and outsourcing of IT services by REs. According to the RBI, such amended master directions may prove useful in curbing potential problems arising from the use of AI in the financial sector by guiding REs on how existing regulations would apply to AI.
Further, the RBI has proposed the creation of a principles-based framework for developing new regulations to govern AI in the financial sector. Such principles, when adopted and applied in an interconnected manner, are expected to guide the development, deployment, and governance of AI.
Broadly, the seven principles recommended in the FREE AI Report include the following:
- Public trust in AI systems should be built and protected.
- Final decision-making should vest with humans and not AI models. Further, human safety, awareness, and interests should be protected during AI-customer interactions.
- The responsible innovation of AI should be prioritized over cautionary restraint.
- AI models should act in an unbiased and non-discriminatory manner.
- Entities deploying AI should be held accountable for the actions of such AI models, irrespective of the extent of autonomy permitted to such models.
- AI models should be transparent in relation to their disclosures, and the entities deploying them should understand how such models work (i.e., explainability, where AI-generated decisions must be traceable to comprehensible, human logic).
- AI models should be sustainable and resilient to physical, infrastructural, and cyber risks.
Further, the FREE AI Report addresses AI governance along two primary dimensions – (i) innovation support and (ii) risk mitigation. Innovation support involves leveraging the benefits of AI by enhancing opportunities for AI development, increasing AI adoption, and promoting responsible AI implementation; this requires policies to develop adequate infrastructure, human skills, and institutional capacity. Risk mitigation involves AI regulation focused on protecting customers from potential threats arising from the implementation of AI in the financial sector. Such protection may be achieved through the allocation of governance responsibilities among REs, the building of strong user safeguards, and constant oversight over AI systems; this requires amendments to existing regulations, including tailored changes to account for AI-specific risks, and the formulation of new policies to govern the ever-changing AI landscape.
With respect to liability, the FREE AI Report proposes a mechanism to impose liability in a manner that balances consumer interests with the needs of REs, which may seek to experiment with, and perfect, their AI models. While REs will continue to be held accountable for losses suffered by their customers due to AI-related errors, including through the payment of compensation, the supervisory regulation of REs using AI models should be graded. The FREE AI Report suggests that full-scale regulatory action against REs should be avoided if an error occurs despite their compliance with prescribed safeguards. In that respect, REs should be given the benefit of the doubt, especially if they undertake immediate corrective and risk mitigation measures. However, such flexibility should be limited to one-off instances: accordingly, if an RE fails to address persistent issues, severe regulatory action may be taken. A graded approach can ensure that REs have the proper incentive to undertake innovation and AI integration with due responsibility.
The Way Ahead
The FREE AI Report provides a broad overview of potential compliance obligations which might be imposed on REs through future AI-related regulation. At present, given the absence of clear, comprehensive AI laws, REs could consider self-regulation, including through voluntarily designed risk mitigation strategies pursuant to FREE AI principles. Further, general principles of industry self-regulation are likely to provide appropriate guidance in AI governance.
It is possible that self-regulation could enhance customer trust, help build reliable and secure AI models in the future, contribute to the formulation of industry-wide norms through consensus among relevant stakeholders, and ultimately lead to market and regulatory predictability. For instance, the FREE AI Report discusses important governance strategies that REs could implement and gradually integrate as a self-regulatory exercise. If undertaken at the sectoral level, such initiatives might enable REs to benefit from shared learning, access to common expertise, and collaborative synergies. As an example of industry participants addressing emerging concerns in a complex domain, the Advertising Standards Council of India (“ASCI”), a voluntary, industry-run body, collaborates with government agencies to curb misleading advertising. Since the ASCI provides a flexible mechanism to address deceptive practices, it is able to respond quickly, thereby mitigating the need for heavier legal action. However, self-regulation involves certain risks: self-set standards may be manipulated, as seen in misleading and/or false environmental, social and governance (ESG) claims made for ‘greenwashing’ purposes (for an overview of ‘greenwashing’, see our note here). While ethical concerns with respect to ‘AI washing’ in finance already exist, AI washing is particularly dangerous because it may undermine explainability and ongoing efforts to make AI systems more transparent and trustworthy. If REs exaggerate or conceal how they use AI, it may become difficult for stakeholders to assess the real value or actual risks of AI tools.
Ultimately, self-regulation in the financial sector should lead to the development and enforcement of binding regulation by the RBI. Until then, self-regulatory initiatives might enable the RBI to identify gaps and pitfalls, which it can address later when it frames appropriate rules. Such binding RBI regulations may be necessary to ensure oversight and accountability.
The first step towards self-regulation requires the formulation of an AI-related policy delineating what constitutes responsible and ethical AI use for the organization. Such policy should provide a clear classification matrix of potential risks arising from the use of AI models and identify specific individuals and roles as nodes of accountability for possible harms. The policy should be applicable to both internally developed and off-the-shelf AI models adopted by the RE concerned. REs should also establish dedicated risk management committees for identifying, assessing, and mitigating AI-related risks and periodically updating internal technical, technological, and organizational measures. Such committee should comprise senior management officials who are well-versed with business nuances of the financial sector, as well as sectoral and technical experts in AI who are equipped to guide the ongoing development and refinement of AI models employed by the RE.
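Purely by way of illustration, the following Python sketch shows one way such a risk classification matrix and its nodes of accountability might be recorded internally; the use cases, risk tiers, and role names are hypothetical assumptions and are not prescribed by the FREE AI Report.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal document summarization
    MEDIUM = "medium"  # e.g., customer-facing chatbots
    HIGH = "high"      # e.g., credit underwriting or fraud decisions

@dataclass
class AIUseCase:
    name: str
    risk_tier: RiskTier
    accountable_role: str        # named node of accountability for harms
    internally_developed: bool   # the policy covers off-the-shelf models too

# Hypothetical register of AI use cases maintained under the policy.
AI_REGISTER = [
    AIUseCase("kyc-document-parser", RiskTier.MEDIUM, "Head of Compliance", True),
    AIUseCase("credit-scoring-model", RiskTier.HIGH, "Chief Risk Officer", False),
    AIUseCase("support-chatbot", RiskTier.MEDIUM, "Head of Customer Service", False),
]

def accountable_for(use_case_name: str) -> str:
    """Return the role accountable for harms arising from a given use case."""
    for uc in AI_REGISTER:
        if uc.name == use_case_name:
            return uc.accountable_role
    raise KeyError(f"unregistered AI use case: {use_case_name}")

print(accountable_for("credit-scoring-model"))  # Chief Risk Officer
```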
In addition, self-regulation should involve the setting up of internal data governance frameworks dealing with data collection, use, access, retention, storage, sharing, processing, modification, and deletion. The DPDP Act and its rules, including the Business Requirement Document (“BRD”) for consent management released by the MeitY on June 6, 2025, provide a good starting point for REs to formulate such data governance frameworks. The BRD provides a technical blueprint for organizations to design and implement a consent management system in compliance with the DPDP Act and its rules. For a detailed guide to the BRD, see our note here. For general discussions on consent management under the DPDP Act, see our notes here and here.
An RE’s data governance framework should provide for checks to ensure that the data fed into AI models in use has been ethically sourced, valid consents have been obtained from relevant individuals, and their rights under the DPDP Act are not infringed. Since financial institutions often deal with sensitive customer data, stringent controls and safeguards should be put in place to ensure its protection, in addition to, and consistent with, obligations under the DPDP Act. Further, the RBI has issued several circulars and guidelines addressing data protection and localization, particularly in the context of digital payments, cybersecurity, and financial data governance.
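As a minimal sketch of such a check, the snippet below gates records before they enter an AI training pipeline based on purpose-matched, non-withdrawn consent, reflecting the purpose-limitation idea under the DPDP Act; the field names and structure are hypothetical and do not reproduce the actual BRD schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    data_principal_id: str   # the individual whose data is processed
    purpose: str             # the purpose for which consent was granted
    granted_at: datetime
    withdrawn: bool = False

def may_use_for_training(record: ConsentRecord, purpose: str) -> bool:
    """Purpose-limited gate before a record enters an AI training pipeline."""
    return (not record.withdrawn) and record.purpose == purpose

consents = {
    "cust-001": ConsentRecord("cust-001", "credit-model-training",
                              datetime(2025, 7, 1, tzinfo=timezone.utc)),
    "cust-002": ConsentRecord("cust-002", "credit-model-training",
                              datetime(2025, 7, 2, tzinfo=timezone.utc),
                              withdrawn=True),
}

eligible = [cid for cid, rec in consents.items()
            if may_use_for_training(rec, "credit-model-training")]
print(eligible)  # ['cust-001'] -- the withdrawn consent excludes cust-002
```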
Further, REs should develop mechanisms for the regular testing of their AI models. Such tests should focus on detecting biases, model degradation, and unexplained behavior (especially in the case of autonomous AI models). These tests could be ‘adversarial’ in nature for the purpose of revealing underlying vulnerabilities in AI models. The faults identified through such tests should be remedied, and detailed records of the remedial steps should be maintained. The test reports and records of rectification might help REs defend themselves against potential regulatory scrutiny. Further, the testing policies should also provide for rigorous and clear protocols and standard operating procedures before adopting any internally or externally developed AI model.
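As an illustration of one such bias test, the sketch below computes a simple approval-rate gap between customer groups as a basic fairness signal; the group labels, decisions, and tolerance threshold are hypothetical, and a real testing suite would be considerably more involved.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Difference in approval rates across groups; a basic bias signal."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        total[group] += 1
        approved[group] += int(decision)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions produced by the model under test on held-out data.
decisions = [("urban", True), ("urban", True), ("urban", False),
             ("rural", True), ("rural", False), ("rural", False)]

THRESHOLD = 0.2  # hypothetical tolerance set by the RE's testing policy
gap = demographic_parity_gap(decisions)
if gap > THRESHOLD:
    print(f"bias alert: approval-rate gap {gap:.2f} exceeds {THRESHOLD}")
```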
REs should also develop specific policies for customer protection. Accordingly, the establishment of grievance redressal mechanisms may be necessary to facilitate complaints about, and accountability for, customer interactions with AI models or AI decision-making more broadly. These complaints should be addressed by human representatives to ensure customer confidence in final resolutions. Further, customers should be clearly informed whenever they are interacting with an AI model, and individuals providing feedback, asking questions, seeking help, or raising grievances should have the option of shifting to human representatives instead. These customer protection policies should ensure that individuals get adequate time and opportunity to adapt to AI models while availing financial services.
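A minimal sketch of such disclosure and escalation logic might look as follows; the keyword-based routing and the placeholder functions are illustrative assumptions, not a mechanism prescribed by the FREE AI Report.

```python
HANDOFF_KEYWORDS = {"human", "agent", "representative", "complaint"}

def ai_model_reply(message: str) -> str:
    # Placeholder for the RE's deployed model; returns a canned answer here.
    return "Your query has been noted and will be processed shortly."

def route_to_human(message: str) -> str:
    # Placeholder for queueing the conversation to a human representative.
    return "Connecting you to a human representative..."

def respond(customer_message: str) -> str:
    """Disclose AI involvement and offer a human-handoff path on every turn."""
    if any(word in customer_message.lower() for word in HANDOFF_KEYWORDS):
        return route_to_human(customer_message)
    reply = ai_model_reply(customer_message)
    return f"[Automated assistant] {reply} (Type 'human' to reach a representative.)"

print(respond("What is my account balance?"))
print(respond("I want to speak to a human about my complaint."))
```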
To minimize the impact of potential operational threats and bottlenecks, REs should update their business continuity plans, including responses and strategies to address such concerns. AI deployment-related threats may be generic and traditional (like server failures or cyber incidents) or AI-specific (where the AI model might appear to be functional but produces unreliable outputs). Such business continuity plans should provide for human-guided fall-back options so that the RE can assume manual control over operations while the faults are remedied, ensuring minimum loss or damage.
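The following sketch illustrates one way a human-guided fall-back might be wired in, using a hypothetical confidence floor below which the AI path yields to manual review; the threshold and the model stub are assumptions for illustration only.

```python
import random

class ManualFallback(Exception):
    """Signal that processing must revert to human-guided handling."""

CONFIDENCE_FLOOR = 0.6  # hypothetical threshold from the continuity plan

def model_score(features: dict) -> tuple[float, float]:
    # Placeholder for the RE's model: returns (score, confidence diagnostic).
    return 0.72, random.uniform(0.3, 0.9)

def score_application(features: dict) -> float:
    score, confidence = model_score(features)
    if confidence < CONFIDENCE_FLOOR:
        # The model runs but its output is not reliable enough; hand
        # control to a human reviewer while the fault is investigated.
        raise ManualFallback("low confidence; route to manual review")
    return score

try:
    print(score_application({"income": 50_000}))
except ManualFallback as exc:
    print(f"manual queue: {exc}")
```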
In addition, REs should establish and implement AI-related audit mechanisms. Regular audits of AI models are crucial since such models are typically adaptive and opaque, thereby making determinations of bias and deterioration difficult. The AI audit mechanism should ascertain whether (i) the data used as input for training the AI model is accurate and unbiased; (ii) the model architecture and design align with the intended purpose; and (iii) the output produced is compliant with applicable guidelines and policies. Further, these audit mechanisms should be graded, i.e., more stringent when the risk level is higher, based on the activities and processing tasks performed by different AI models. External third-party auditors may be employed to undertake annual audits of critical AI models.
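By way of illustration, a graded audit schedule along these lines might be encoded as follows; the tiers, frequencies, and auditor assignments are hypothetical and would need to be calibrated to each RE's actual risk assessment.

```python
# Hypothetical graded schedule: stricter and more frequent audits for
# higher-risk models, with external auditors for the most critical ones.
AUDIT_SCHEDULE = {
    "high":   {"frequency_months": 3,  "external_auditor": True},
    "medium": {"frequency_months": 6,  "external_auditor": False},
    "low":    {"frequency_months": 12, "external_auditor": False},
}

def audit_plan(model_name: str, risk_tier: str) -> str:
    plan = AUDIT_SCHEDULE[risk_tier]
    auditor = ("external third-party auditor" if plan["external_auditor"]
               else "internal audit team")
    return (f"{model_name}: audit every {plan['frequency_months']} months by "
            f"{auditor}, covering (i) training-data quality, (ii) fit of model "
            "architecture to intended purpose, and (iii) output compliance")

print(audit_plan("credit-scoring-model", "high"))
```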
Conclusion
While AI has been, and continues to be, actively used in the Indian financial sector, the RBI’s FREE AI Report is significant, including for the purpose of guiding future AI regulation and governance in such sector. The indicative guidelines and principles in the FREE AI Report also provide an opportunity for REs to develop a constructive system of self-regulation, by devising robust measures and strategies over time to ensure that AI models are safe, reliable, and trustworthy. By enhancing customer confidence and trust, self-regulation may lead to greater integration of AI models in finance.
This insight has been authored by Rajat Sethi, Aparna Ravi, Dr. Deborshi Barat and Divyanshu Sharma from S&R Associates. They can be reached at [email protected], [email protected], [email protected] and [email protected], respectively, for any questions. This insight is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.