The rise of ‘agentic’ artificial intelligence (“AI”), i.e., systems capable of autonomously planning, initiating, and executing complex sequences of actions (not merely generating outputs), marks a significant inflection point in the evolution of digital technologies. Unlike earlier generations of AI, which functioned as responsive tools operating within defined parameters, agentic systems exhibit a degree of operational independence that approximates functional agency. This transition carries substantial implications for law, regulation, and public policy, particularly in jurisdictions such as India, where technological adoption is rapid and regulatory frameworks remain in development.
Recent deployments across sectors – including financial services, enterprise automation, manufacturing, retail, healthcare, and public administration – signal a broader shift towards autonomy, where decision-making authority, rather than mere computational capacity, can be outsourced to algorithmic systems. This evolution is underpinned by the defining features of agentic AI: advanced reasoning capabilities, the ability to interface with and act upon external systems, and participation in complex ecosystems of interacting agents. These characteristics elevate such systems beyond traditional software, positioning them as functional analogues of human agents.
Autonomy, Unpredictability, and Legal Characterization
The defining legal concern arises from the autonomy of such systems in determining how objectives are achieved. While goals are typically specified by human users, agentic AI independently devises and executes the steps required to fulfill those goals, often in real time and in response to dynamic conditions. This introduces unpredictability: system actions may be emergent, context-sensitive, and not fully foreseeable. As a result, foundational legal assumptions concerning control, causation, and responsibility may come under challenge.
Distributed Systems and Multi-Layered Risk
Such challenges are amplified by the distributed nature of agentic AI ecosystems. Systems may rely on external data sources, interact with third-party platforms, and even recruit other AI systems as sub-agents. These architectures create novel risk vectors, including reliance on unverified sub-agents, misaligned risk tolerances among stakeholders, and cascading failures across interdependent systems. This represents a marked departure from earlier technologies, which generally operated within more predictable and user-directed frameworks.
Current Indian Legal Framework: Gaps and Constraints
India’s existing legal framework is not fully equipped to address these developments. While the Information Technology Act, 2000 (“IT Act”) and the Digital Personal Data Protection Act, 2023 (“DPDP Act”) remain the primary statutes governing digital activity, they are oriented towards data governance and intermediary conduct rather than autonomous system behavior. Sectoral regulators such as the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) impose requirements related to outsourcing, resilience, and auditability, but these frameworks presuppose that ultimate control resides with identifiable human actors. They do not adequately contemplate systems capable of independent action across institutional boundaries.
Recent amendments to, and government advisories under, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 represent incremental steps towards risk mitigation, focusing on transparency (e.g., labeling obligations with respect to AI-generated deepfakes), responsiveness (e.g., requirements related to expeditious takedown of unlawful ‘synthetic’ material), and platform accountability (including due diligence obligations). However, their orientation remains primarily content- and intermediary-centric: while relevant in mitigating downstream harms, they are unlikely, in their current form, to address the broader risk profile of agentic AI.
Liability and the Problem of Attribution
The question of liability presents the most acute legal difficulty. Established doctrines in tort, contract, and criminal law rely on foreseeability, intent, and proximate causation. Under the Consumer Protection Act, 2019, liability depends on identifiable defects, while negligence requires foreseeable harm arising from a breach of duty. Criminal offences typically require the presence of mens rea. Agentic AI disrupts these foundations: outcomes may not be foreseeable, decisions may lack intention, and causation may be distributed across multiple actors and system components. This gives rise to a responsibility gap, where no single actor exercises sufficient control to justify full legal attribution.
However, established legal theories may help narrow this gap. Principles of agency suggest that users may be held responsible for actions undertaken within the scope of authority conferred upon the system. Vicarious liability doctrines attach responsibility where actions fall within an assigned functional role, even if unintended. Strict liability principles may apply where the deployment of such systems creates known and socially unacceptable risks. These approaches suggest that liability is likely to be anchored in risk creation, control, and the allocation of authority, rather than in system autonomy per se. Such doctrinal adaptation aligns with broader principles of Indian law, which emphasize duty of care and control in assigning responsibility.
Nonetheless, limitations will persist. Risk categorization will remain imprecise in rapidly evolving technological environments, and analogies to agency do not fully capture the complexity of multi-agent AI systems. In practice, liability is likely to be distributed across users, developers, and deployers, particularly given the prevalence of third-party AI services and contractual risk allocation.
Comparative Perspectives and the Limits of Existing Models
In light of the above, risk-based regulatory approaches may be necessary. Frameworks that categorize systems based on potential harm – particularly in high-risk sectors such as healthcare, finance, and critical infrastructure – could justify enhanced oversight or strict liability. In lower-risk contexts, traditional negligence standards, supplemented by contractual and insurance mechanisms, may suffice.
However, even sophisticated regulatory models face limitations. The European Union’s AI Act, while adopting a structured risk-based classification, is oriented towards systems with defined functions and predictable impacts. Agentic AI, by contrast, evolves dynamically, making stable risk classification difficult. Moreover, assumptions of meaningful human control may not hold where systems independently determine how to achieve objectives. The distributed nature of AI value chains further complicates accountability, as existing regulatory categories such as ‘provider’ and ‘deployer’ may not align with complex, multi-layered ecosystems. Such factors make AI agents difficult to insure, and reports suggest that insurance companies are increasingly seeking to exclude or restrict AI-related losses and harms from the scope of corporate liability coverage under existing policies – although this trend may lead to novel and/or bespoke insurance product categories.
While civil liability frameworks may mitigate certain risks by incentivizing safer design and deployment, they are unlikely to address systemic harms, particularly where impacts are diffuse or difficult to attribute. This underscores the need for ex ante regulatory frameworks. In India, while initiatives such as the India AI Governance Guidelines seek to provide an initial foundation, they do not yet satisfactorily address issues related to auditability of autonomous decision-making, mandatory incident reporting, or lifecycle accountability.
Prospects for Ex Ante Regulation in India
The proposed Digital India Act offers a potential legislative pathway. Envisioned as a successor to the IT Act, it could incorporate risk-based classification, impose design-related obligations such as auditability and traceability, and mandate pre-deployment testing and ongoing monitoring for high-risk systems. It may also clarify responsibility across the AI lifecycle and integrate with existing frameworks such as the DPDP Act.
Nevertheless, the unpredictability of agentic AI undermines the effectiveness of pre-deployment assessments. Regulatory focus on intermediaries may leave gaps in addressing upstream developers and downstream users, particularly in cross-border contexts. There is also a risk of regulatory imbalance: excessively stringent obligations may stifle innovation, while overly general standards may prove ineffective. Constraints in regulatory capacity and the absence of global consensus further complicate enforcement and interoperability.
Beyond Liability: Systemic and Operational Risks
Agentic AI presents broader risks beyond legal liability. Systems may override safeguards, exhibit flawed reasoning, or act in unintended ways, particularly in high-stakes environments. At the same time, the economic potential is considerable. Agentic AI may enhance productivity, enable new business models, and support the evolution of India’s services-driven economy towards more scalable and automated operations.
Addressing these dynamics will require a recalibration of legal frameworks. Traditional assumptions centered on human actors, discrete actions, and linear causation must evolve to accommodate systems characterized by autonomy, adaptability, and interaction. This will involve both the reinterpretation of existing doctrines and the development of new regulatory instruments.
Implications for Corporate Risk Management
For companies, risk mitigation must be internally driven, especially in the absence of comprehensive regulation. This may begin with use-case classification and risk tiering, ensuring that high-impact deployments receive enhanced scrutiny. Enterprise-level AI governance frameworks – incorporating legal, technical, and business perspectives – are essential, along with continuous oversight through auditing and monitoring.
Technically, companies should prioritize constrained autonomy, implementing guardrails on system actions through sandboxing, access controls, and human-in-the-loop mechanisms. Observability and traceability are critical, enabling detection of anomalies and supporting accountability. Contractual mechanisms will also play a key role, particularly where third-party systems are involved, through representations, indemnities, and audit rights.
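To make the notion of constrained autonomy concrete, the layered controls described above – an access-control allowlist of permitted tools, risk tiering, and a human-in-the-loop checkpoint for high-risk actions – can be sketched in a few lines of code. The example below is purely illustrative: all names (the tool allowlist, the risk tiers, the approval callback) are hypothetical, and a production deployment would additionally run approved actions inside a sandboxed environment with full observability logging.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass
class ProposedAction:
    description: str
    tier: RiskTier


# Access control: the agent may only invoke pre-approved tools.
ALLOWED_TOOLS = {"search_documents", "draft_summary"}


def execute_with_guardrails(action: ProposedAction, tool: str, approver=None) -> str:
    """Run an agent-proposed action only if it passes layered checks."""
    # 1. Allowlist check: block any tool outside the approved set.
    if tool not in ALLOWED_TOOLS:
        return "blocked: tool not on allowlist"
    # 2. Human-in-the-loop: high-risk actions require explicit sign-off
    #    from a designated approver before they may proceed.
    if action.tier is RiskTier.HIGH:
        if approver is None or not approver(action):
            return "held: awaiting human approval"
    # 3. In production, the approved call would execute inside a sandbox,
    #    with each step written to an audit trail for traceability.
    return f"executed: {action.description}"
```

The design choice here is that autonomy is bounded by default: the system can act freely only within a narrow, pre-vetted envelope, and anything high-impact is escalated to a human. That structure also generates the audit trail that contractual and regulatory accountability mechanisms would rely upon.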
Compliance with existing legal frameworks, including the DPDP Act and sectoral regulatory expectations, must be embedded into AI deployments. Companies should also consider developing AI-specific incident response frameworks, recognizing that failures may involve autonomous actions with downstream effects. A compliance-by-design approach – documenting system design choices, maintaining audit trails, anticipating future regulatory developments, and aligning with global best practices – may become necessary.
Conclusion
Ultimately, the challenge lies in balancing innovation with accountability. Overregulation risks stifling technological progress, while under-regulation may erode trust and expose stakeholders to unacceptable harm. Rather than treating agentic AI as wholly unprecedented, it may be more effective to adapt existing legal principles to its unique characteristics, while recognizing the limits of such adaptation in addressing systemic risks.
For India, the path forward will likely involve a combination of doctrinal evolution and regulatory innovation, including risk-based frameworks, enhanced obligations for high-risk applications, and mechanisms for ongoing oversight. Engagement with international developments will also be critical to ensure interoperability and alignment with emerging global standards. The trajectory of agentic AI may depend on the ability of legal and regulatory systems to evolve in step with technological change.
This insight has been authored by Rajat Sethi and Dr. Deborshi Barat from S&R Associates. They can be reached at [email protected] and [email protected], respectively, for any questions. This insight is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.