Introduction
Recently, the OpenAI CEO, Sam Altman, posted on X inviting applications for the position of ‘Head of Preparedness’. In the post, he acknowledged the rising ethical and security concerns around AI systems. Coming from the CEO of a leading AI company like OpenAI, the statement sparked a conversation around the urgent need to frame regulations that address and mitigate AI-related harms.[1]
Risks of AI flagged by the OpenAI CEO[2]
- Mental Health and Psychological Harm: AI systems may inadvertently manipulate or destabilize users’ emotional and cognitive states at scale. (To read more on AI-related mental health and psychological harms, refer – https://ssrana.in/articles/openais-report-triggers-ethical-ai-concerns-1-2-million-users-seek-self-harm-related-advice-from-chatgpt/ )
- Cybersecurity Exploitation and Dual-Use Capabilities: The same AI capabilities that strengthen cyber defense can be rapidly repurposed to enable more effective cyberattacks. (To read more on cybersecurity in AI systems, refer – https://ssrana.in/articles/issue-of-prompt-injections-open-ais-new-atlas-browser-faces-critical-cybersecurity-threats/ )
- Capability Misuse and Abuse Pathways: AI capabilities may be misused in ways that are not detected by traditional safety evaluations.
- Risks of Self-Improving and Autonomous Systems: Systems that can improve themselves may exceed human control or understanding before risks are identified.
- Governance and Precedent Gaps: There is no proven framework for reliably governing increasingly powerful AI systems under real-world conditions.
Bill in the Lok Sabha to address AI-related harms
In an effort to regulate risks related to AI, on December 5, 2025, the Artificial Intelligence (Ethics and Accountability) Bill, 2025 (hereinafter referred to as ‘the AI Bill’ / ‘the Bill’) was introduced before the Lok Sabha by Smt. Bharti Pardhi.[3] The AI Bill seeks to establish an ethics and accountability framework to prevent misuse of AI and prescribes a penalty of INR 5 crore for such misuse. It also seeks to ensure fairness, transparency, and accountability in the use of AI systems.
Risks of AI addressed by the Bill
The Bill seeks to address the following risks associated with the use of AI:
- Algorithmic Bias
- Misuse of Surveillance Capabilities
- Lack of Transparency
- Lack of Accountability in decision-making AI systems
While the Bill addresses some important concerns and risks related to AI technologies, it leaves out one of the key concerns flagged by the OpenAI CEO: the risks posed by autonomous and self-improving AI systems.
The Need to Address Concerns around Autonomous and Self-Improving AI Systems
- The Race to Developing Superintelligence
- On July 30, 2025, Meta announced the development of self-improving autonomous AI systems through its Meta Superintelligence Labs. Presently, AI training and development requires human intervention. However, as per the announcement, the development of ‘personal superintelligence’ is now in sight. The technology will penetrate beyond productivity tools and workplace automation to impact the individual lives of people.[4]
- What is Superintelligence?
- Superintelligence refers to AI systems that surpass human intelligence across most domains, such as reasoning, creativity, problem solving, and discovery. Personal superintelligence is a sub-classification of this technology: a deeply personalized AI companion that understands the user’s goals, values, and context, and helps the user achieve their aspirations. It will be delivered through personal devices such as smart glasses that see what the user sees, hear what the user hears, and interact with the user throughout the day.[5]
- How can AI systems become truly autonomous and self-improving?
- Presently, AI training requires human intervention through feedback mechanisms wherein humans score the outputs of AI models, facilitating ‘Reinforcement Learning’ to ensure that AI systems behave in line with human standards and preferences. However, obtaining human feedback is a slow and expensive process. Therefore, Large Language Models (hereinafter referred to as ‘LLMs’) are being used to generate synthetic data for training purposes, mitigating both data scarcity and the slow, costly feedback process. This essentially makes the AI system autonomous and self-improving (a simplified sketch of such a loop follows this list).
- Need for Ethical and Safety Standards
- In November 2025, OpenAI raised concerns around the risks of superintelligence and called for a set of shared standards to ensure the ethical and safe development of the technology. It recommends measures such as public oversight and accountability proportionate to capabilities, so that the race to develop superintelligent systems does not derail.[6]
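To make the self-improvement mechanism described above concrete, the following is a minimal Python sketch of a synthetic-data feedback loop. Every function name and the scoring logic are placeholders invented for illustration; no real model, vendor API, or training library is invoked.

```python
# A minimal, self-contained sketch of a synthetic-data self-improvement
# loop. All functions are illustrative placeholders, not any vendor's API.

def generate_candidates(model_version: int, prompt: str, n: int) -> list[str]:
    """Placeholder for an LLM producing n candidate outputs."""
    return [f"{prompt} -> candidate {i} (model v{model_version})" for i in range(n)]

def llm_judge_score(candidate: str) -> float:
    """Placeholder for 'LLM-as-judge' feedback replacing human raters."""
    return (len(candidate) % 10) / 10.0  # dummy score in [0.0, 0.9]

def fine_tune(model_version: int, accepted: list[str]) -> int:
    """Placeholder: pretend each round of fine-tuning yields a new version."""
    return model_version + 1

model = 0  # stand-in for model weights / version number
for round_no in range(3):  # note: no human appears anywhere in this loop
    candidates = generate_candidates(model, "Summarise this contract", 5)
    accepted = [c for c in candidates if llm_judge_score(c) >= 0.5]
    model = fine_tune(model, accepted)
    print(f"round {round_no}: kept {len(accepted)} examples, model is now v{model}")
```

The point of the sketch is the absence of a human at any step: the model generates its own training data, an automated judge scores it, and fine-tuning proceeds without external review.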
How are AI-related Risks Presently Governed in India?
Presently, AI regulation around the world is at a fairly nascent stage. Most countries, including the USA, UK, and India, have adopted a pro-innovation approach and propose voluntary, self-regulating frameworks to ensure the safe, secure, and ethical development and deployment of AI technologies. In this context, superintelligence, autonomous systems, and self-improving AI systems are essentially left unregulated. In line with this pro-innovation approach, India is relying on its existing laws and regulations to govern AI systems.
- Reliance on Existing Laws and Sectoral Regulations
- As per the India AI Governance Guidelines (hereinafter referred to as ‘Guidelines’) released by the Ministry of Electronics and Information Technology (hereinafter referred to as ‘MeitY’),[7] India has adopted an approach wherein the Government intends to amend existing laws and regulations to extend their scope and application to AI systems and technologies.[8] Sectoral regulators such as the Reserve Bank of India, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority of India will formulate sector-specific regulations governing AI usage in their respective sectors. (To read more on sectoral AI regulations, refer – https://ssrana.in/articles/the-free-ai-framework-regulating-ai-in-financial-sector/ )
- The Guidelines state that existing laws such as the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, and the Digital Personal Data Protection Act, 2023 will, after slight modifications, be adequately equipped to handle and regulate AI misuse.
- (To read more on the application of existing laws on AI systems, refer – https://ssrana.in/articles/effect-of-digital-personal-data-protection-rules-2025-on-ai-regulation/)
- (To read more on the India AI Governance Guidelines, refer – https://ssrana.in/articles/meity-unveils-indias-approach-towards-regulating-artificial-intelligence/ )
- AI-related Harms and the Constitution of India
- The provisions of the Constitution of India, especially Articles 14, 19, and 21, can be invoked to seek a remedy against injury or harm caused by the use of AI systems or technologies. For example, harm caused by algorithmic bias can be dealt with as a violation of the Right to Equality and the Right to Life and Liberty under Articles 14 and 21 respectively. Additionally, harm caused by the use of AI systems to generate deepfake content can be dealt with as a violation of the Right to Privacy under Article 21, as was seen in the Shilpa Shetty case, wherein the Bombay High Court held that the creation and circulation of deepfake content infringes an individual’s right to privacy.[9]
Way Forward
Going by the statements and announcements of the CEOs of leading AI companies, we are closing in on an era of superintelligent systems capable of operating autonomously and surpassing human capabilities in almost every domain. Globally, countries have focused on regulating AI through a pro-innovation approach built on voluntary regulations, which are static and linear in nature and therefore have limited scope and application to dynamically evolving AI systems. This approach has been adopted by the USA, the UK, India, and numerous other countries, but it poses a significant risk of ineffective regulation. The European Union’s risk-based approach, for its part, presupposes that risks can be categorized in advance, in a landscape where new risks emerge almost every other day.
AI technologies must be treated as complex adaptive systems with unpredictable emergent behaviors, where small changes in code or mechanisms can compound into disproportionate impacts. Therefore, it is more feasible to implement real-time guardrails than to attempt to predict outcomes in advance.[10] Such guardrails, one combination of which is illustrated in the sketch after this list, may include:
- Multi-factor authentication to ensure robust checks and balances for high-risk actions.
- Implementation of clear technical thresholds through regulatory sandboxes.
- Manual overrides to preserve human agency and allow human intervention when AI behaves unpredictably.
- Regular mandated audits to enforce explainability.
- A ‘Skin in the Game’ principle to ensure accountability of those developing AI systems in case of unintended consequences.
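As a concrete illustration of how some of these guardrails (high-risk gating, manual override, and an auditable record) could fit together, the following is a minimal Python sketch of a human-in-the-loop approval gate. The action names, the risk list, and the approval flow are assumptions made purely for illustration, not a reference implementation.

```python
# Illustrative human-in-the-loop guardrail: high-risk actions proposed by an
# AI agent are blocked until a named human approves them, and every decision
# is written to an audit trail. All names here are hypothetical.

from datetime import datetime, timezone
from typing import Optional

HIGH_RISK_ACTIONS = {"transfer_funds", "delete_records", "deploy_model"}
AUDIT_LOG: list[dict] = []  # a real system would use tamper-evident storage

def execute_with_guardrail(action: str, approved_by: Optional[str] = None) -> str:
    """Run an action only if it is low-risk or has explicit human approval."""
    entry = {"action": action, "time": datetime.now(timezone.utc).isoformat()}
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        entry["outcome"] = "blocked: awaiting human approval"
    else:
        entry["outcome"] = f"executed (approved_by={approved_by})"
    AUDIT_LOG.append(entry)  # mandated audits need a record of every decision
    return entry["outcome"]

print(execute_with_guardrail("transfer_funds"))                        # blocked
print(execute_with_guardrail("transfer_funds", "compliance_officer"))  # runs
print(execute_with_guardrail("summarise_document"))                    # low-risk
```

The design choice worth noting is that an audit entry is written whether the action runs or is blocked, so a mandated audit can reconstruct every decision point after the fact.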
[2] https://x.com/sama/status/2004939524216910323
[5] https://www.meta.com/superintelligence/
[6] https://www.rediff.com/news/report/openai-warns-of-superintelligence-risks/20251110.htm
[7] https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/nov/doc2025115685601.pdf
[8] https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/nov/doc2025115685601.pdf
[10] https://eacpm.gov.in/wp-content/uploads/2024/01/EACPM_AI_WP-1.pdf
Our Coverage on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7414981826768891904
For more information, please visit our site at https://ssrana.in/ or write to us at [email protected]