Google recently acknowledged in a 2024 environmental report that its 2030 net-zero goals were at risk because of developments related to artificial intelligence (“AI”). Google’s carbon emissions have increased by almost 50% over the past five years. Its report states: “As we further integrate AI into our products, reducing emissions may be challenging.”

Google’s situation appears to be representative of the wider industry. Microsoft’s 2024 Environmental Sustainability Report similarly states that its emissions have grown by almost 30% since 2020, driven by the construction of more data centers that are “designed and optimized to support AI workloads.”

However, with a few exceptions such as Google and Microsoft, most AI developers do not disclose, and are not required to disclose, their AI-related emissions. Meanwhile, a May 2024 Goldman Sachs report estimates that processing an average ChatGPT query requires about 10 times as much electricity as a traditional Google search.

The training process for a single large language model (“LLM”) like ChatGPT can consume thousands of megawatt-hours of electricity and emit as much carbon as the annual emissions of hundreds of households. Further, chip manufacturing and AI supply chains produce a significant environmental impact, while AI model training can consume substantial volumes of freshwater through the evaporative cooling of data centers.
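
For illustration only, the rough conversion below shows how a training run’s electricity use translates into a carbon figure and a household equivalent. Every input is an assumed, indicative value (not disclosed data for any specific model), and the result varies considerably with the grid’s carbon intensity and the household baseline chosen.

    # Illustrative back-of-the-envelope estimate; every figure below is an
    # assumption for demonstration, not disclosed data for any specific model.
    training_energy_mwh = 2_000        # assumed training energy for a large LLM (MWh)
    grid_intensity_t_per_mwh = 0.4     # assumed grid carbon intensity (tCO2e per MWh)
    household_t_per_year = 4.0         # assumed annual household emissions (tCO2e)

    training_emissions_t = training_energy_mwh * grid_intensity_t_per_mwh
    household_equivalents = training_emissions_t / household_t_per_year

    print(f"Estimated training emissions: {training_emissions_t:.0f} tCO2e")
    print(f"Roughly equivalent to {household_equivalents:.0f} households' annual emissions")
    # With these assumptions: ~800 tCO2e, i.e., on the order of a few hundred households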

The Policy Dilemma

On the one hand, the power of AI is likely to prove revolutionary in combating climate change. In this regard, AI can play a crucial role in developing sustainable technologies and optimizing resource use.

On the other hand, the training of large AI models, while fundamentally transforming several sectors, remains energy- and resource-intensive, resulting in significant emissions and waste.

Accordingly, balancing AI ambitions with existing decarbonization goals is not just a long-term necessity but an urgent mandate for sustainable growth involving both companies and governments. As India accelerates its journey towards becoming a global economic powerhouse, it may want to address the environmental implications of increased AI deployment.

Global Developments

GPAI

A 2021 report by the Global Partnership on Artificial Intelligence (“GPAI”) on climate change and AI issued certain recommendations for government action (such report, the “GPAI Report”). India is a member of the GPAI and its Lead Chair for 2024. The GPAI is an international, multi-stakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation and economic growth.

Identifying climate change and digital transformation as two of the most powerful trends of the 21st century, the GPAI Report suggests that the way in which we manage these trends, as well as their increasing interaction, will play a significant role in humanity’s future. Given that every application of AI affects the climate, governments can reduce the negative impacts of AI by incorporating climate considerations into AI regulation, strategies, funding mechanisms and procurement programs.

Further, governments can support the development of relevant institutional capabilities to responsibly implement, evaluate and govern AI in the context of climate change, including through impact assessments via data collection on AI emissions, as well as by establishing standard measurement and reporting frameworks. Meaningful action on such initiatives will require collaborations among multiple branches of government, such as agencies focused on AI and digitalization, agencies focused on climate change and climate-related sectors, standard-setting and regulatory bodies, as well as local governments.

OECD

The Organization for Economic Co-operation and Development (“OECD”) and the GPAI formed a new integrated partnership for the purpose of advancing international efforts towards implementing human-centric, safe, secure and trustworthy AI – as embodied in the principles of the OECD Recommendation on AI (the “OECD Recommendation”). Adopted in 2019 as the first intergovernmental standard on AI, the OECD Recommendation was updated in 2024, including to introduce an explicit reference to environmental sustainability. Pursuant to such revision, the OECD Recommendation calls for inclusive growth, including beneficial outcomes such as the protection of natural environments and the pursuit of sustainable development.

Since AI systems can use enormous computational resources, a November 2022 OECD report aims to improve global understanding and measurement of AI’s environmental impact, as well as to decrease AI’s negative effects and increase its planetary benefits. The report distinguishes between: (i) the direct environmental impacts of developing, using and disposing of AI systems and related equipment; and (ii) the indirect costs and benefits of using AI applications. It ultimately recommends establishing measurement standards, expanding data collection, identifying AI-specific impacts, and improving transparency to help policymakers leverage AI for the purpose of addressing sustainability challenges.

National and Regional Developments

The US

Earlier this year, a bill for a proposed Artificial Intelligence Environmental Impacts Act of 2024 (the “US Bill”) was introduced in the US Senate, including for the purpose of requiring: (i) the Administrator of the Environmental Protection Agency to carry out a study on the environmental impacts of AI; (ii) the Director of the National Institute of Standards and Technology (“NIST”) to convene a consortium on such environmental impacts; and (iii) the NIST Director to develop a voluntary reporting system on the environmental impacts of AI.

The US Bill stemmed from certain findings, such as the following:

  • The amount of computational power used for AI applications has increased rapidly over the last decade.
  • Accelerating use of AI has the potential to significantly increase energy consumption.
  • Rapid growth in data center infrastructure, including cooling systems and backup power equipment, contributes to pollution, water consumption and land-use changes.
  • Resource- and energy-intensive manufacturing processes are required for the hardware that runs AI, leading to significant environmental impacts.
  • Growing volumes of electronic waste pose increasing environmental and health risks.
  • Many AI applications can have positive environmental impacts, such as optimizing systems for energy efficiency, developing renewable energy, and monitoring environmental changes. However, AI applications may also have negative environmental impacts, including rebound effects, behavioral impacts and accelerating high-pollution activities.
  • Estimates of the current and future environmental impacts of AI remain uncertain.
  • Options to reduce the negative environmental impacts of AI include the use of more efficient models, hardware and data centers; harnessing renewable energy; as well as examining the impact of all AI applications.
  • Promoting transparency and protection measures may help mitigate the negative environmental impacts related to the rapid growth of AI use.

While some countries have published rules and guidelines for regulating AI systems, only a few address environmental sustainability as an explicit goal. Some jurisdictions, such as the European Union, aim to protect the environment from high-risk AI; however, their frameworks focus mainly on consumer and business interests, fundamental rights, democracy and human safety.

The EU

The EU’s “AI Act” defines a “serious incident” as an incident or malfunctioning of an AI system that directly or indirectly leads to specified negative outcomes, and lists “serious harm to property or the environment” as one such outcome. The AI Act contains detailed requirements with respect to the sharing of information on, and the reporting of, such serious incidents.

The AI Act also contemplates codes of conduct related to the voluntary application of certain requirements on the basis of key performance indicators, including for the purpose of measuring objectives that involve assessing and minimizing the impact of AI systems on environmental sustainability – including with regard to energy-efficient programming and techniques.

Further, by August 2028 and every three years thereafter, the European Commission (the “Commission”) will evaluate the impact and effectiveness of such voluntary codes of conduct, including with regard to environmental sustainability. In the interim, the Commission has the power to issue standardization requests asking for deliverables on: (i) reporting and documentation processes to improve resource performance, such as by reducing a high-risk AI system’s energy consumption across its lifecycle; and (ii) energy-efficient development of general-purpose AI models.

In addition, by August 2028 and every four years thereafter, the Commission will review and submit a progress report on standardization deliverables related to the energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding ones.

In terms of technical documentation, providers of general-purpose AI models are required to provide certain descriptive details, including information about the development process for their models, such as the known or estimated energy consumption. Where the energy consumption of a model is unknown, it may be estimated on the basis of data about the computational resources used.
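
A minimal sketch of how such a compute-based estimate might be built is set out below. The hardware count, run time, average power draw and facility overhead are all assumed, illustrative values; actual documentation would rely on the provider’s own records.

    # Illustrative sketch: estimating training energy from computational
    # resources where measured energy figures are unavailable. All hardware
    # and facility figures below are assumptions, not vendor or provider data.
    num_accelerators = 1_024            # assumed number of GPUs/accelerators
    training_days = 30                  # assumed wall-clock training duration
    avg_power_kw_per_device = 0.5       # assumed average draw per device (kW)
    pue = 1.2                           # assumed power usage effectiveness of the facility

    hours = training_days * 24
    it_energy_mwh = num_accelerators * avg_power_kw_per_device * hours / 1_000
    facility_energy_mwh = it_energy_mwh * pue

    print(f"Estimated IT energy: {it_energy_mwh:.0f} MWh")
    print(f"Estimated facility energy (including overhead): {facility_energy_mwh:.0f} MWh")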

For the purpose of determining whether a general-purpose AI model has capabilities or impacts that may be classified as a ‘systemic risk’, the Commission will take into account certain criteria, such as the amount of computation used for training the model, whether measured directly or indicated by a combination of other variables, such as the estimated cost, time or energy consumption of training. The systemic risks which general-purpose AI models may pose include actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors, and serious consequences for public health and safety, as well as public and economic security.
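
By way of illustration, cumulative training compute is often approximated with the rule of thumb of about six floating-point operations per model parameter per training token. The sketch below applies that heuristic and compares the result against a compute threshold on the order of 10^25 floating-point operations; the model size, token count and threshold figure are assumptions used only for illustration, not a statement of any provider’s position under the AI Act.

    # Illustrative sketch: approximating cumulative training compute and
    # comparing it against a compute-based threshold. The "6 x parameters x
    # tokens" heuristic and all figures below are assumptions for illustration.
    params = 1.0e12                     # assumed model size: 1 trillion parameters
    training_tokens = 10.0e12           # assumed training corpus: 10 trillion tokens
    threshold_flops = 1.0e25            # illustrative compute threshold (FLOPs)

    training_flops = 6 * params * training_tokens
    print(f"Estimated training compute: {training_flops:.2e} FLOPs")
    print("Above threshold" if training_flops > threshold_flops else "Below threshold")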

India

To demonstrate compliance with current and future environmental laws and AI regulations, AI companies in India could begin adopting concrete measures.

Such measures could include disclosing detailed technical documentation on the energy consumption, carbon emissions, and overall environmental efficiency of AI systems throughout their lifecycle. Voluntarily undergoing third-party audits of AI systems to verify efficiency claims and environmental impacts could also send a positive signal to investors, especially those conscious of environmental, social and governance (“ESG”) parameters.

Consistent with global regulatory discussions, AI companies could categorize AI use-cases based on environmental risk, with high-impact applications potentially subject to stricter oversight. As the regulatory landscape evolves, proactive steps towards transparency, accountability and sustainable practices will be crucial for AI companies to maintain their compliance profiles and public trust.

Investing in ‘green AI’ can not only address climate change concerns, but also support corporate social responsibility (“CSR”) and ESG claims. Demonstrable alignment with sustainable practices can lead to a competitive advantage, attracting investments and consumer loyalty. By learning from global experiences, India can lead the way in terms of responsible AI deployment and sustainable technology innovation. Integrating environmental considerations into AI-related laws may become a crucial element of India’s future regulatory journey.

Through multistakeholder initiatives, AI companies could collaborate with the government to promote the adoption of energy-efficient algorithms and models, including with the aim of minimizing computational complexity and resource use. The use of green AI architectures that minimize latency and energy consumption could be incentivized through policy and practice. Moreover, the AI industry could promote the use of efficient data formats and compression techniques for the purpose of reducing storage and transmission costs.

In addition, AI companies could lead consultations on the environmental impact of their AI systems. Such initiatives could be accompanied by the publication of relevant data on AI projects at regular intervals, along with corresponding discussions on climate change.

In 2023, a policy brief under a G20 taskforce acknowledged the unique governance issues related to the environmental impact of AI. The document pointed out that: (i) the future of AI involves increasing demands for energy and other resources; and (ii) such increasing demands will put a strain on the green transition and renewable energy supplies. Further, AI development will require rare-earth metals, and the mining and processing of such raw materials are likely to damage the environment.

Accordingly, it was suggested that the G20 should convene a commission of experts, along with government and industry representatives, to explore AI’s energy and environmental costs. Such a commission should make recommendations on ensuring an equitable sharing of costs and benefits while underlining globally acceptable environmental and ethical standards.

Conclusion

As AI continues to revolutionize industries, it is important to address its environmental impact. Accordingly, AI may be regulated in the future with the aim of making increased adoption sustainable. For countries like India, which have ambitious net-zero goals but are yet to frame a dedicated AI regime, adopting best practices from around the world is crucial.

For instance, the technical documentation made available by AI providers could include information on the energy consumption and overall efficiency of their AI systems. Such information ought to be comparable and verifiable; to that end, the Ministry of Electronics and Information Technology (“MeitY”) may develop guidelines in the future, including in collaboration with the Ministry of Power and the Bureau of Energy Efficiency (“BEE”).

Further, regulations around the recording and reporting of AI consumption data can anticipate future contingencies, such as increased computation costs, higher explainability requirements, and more complex hyperparameter configurations. The design of AI/ML systems should enable the measurement, logging and disclosure of energy efficiency and resource use, while the regulatory monitoring of AI-related emissions can be made transparent, including through the adoption of harmonized methodologies, statutory baselines and periodic impact assessments.
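
A minimal sketch of what such built-in measurement and logging could look like appears below, assuming NVIDIA hardware and the pynvml bindings; the sampling interval, measurement window and output format are illustrative choices rather than any prescribed standard.

    # Illustrative sketch: sampling GPU power draw during a workload and logging
    # an energy estimate for later disclosure. Assumes NVIDIA hardware with the
    # pynvml bindings installed; interval and window are illustrative choices.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU only, for simplicity

    interval_s = 10        # sampling interval (seconds)
    duration_s = 600       # illustrative measurement window (seconds)
    energy_wh = 0.0

    start = time.time()
    while time.time() - start < duration_s:
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1_000  # API reports milliwatts
        energy_wh += power_w * interval_s / 3_600
        time.sleep(interval_s)

    pynvml.nvmlShutdown()
    print(f"Estimated GPU energy over the window: {energy_wh:.2f} Wh")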

In addition, legislators and policymakers need to collaborate with industry leaders and technology innovators for the purpose of standardizing practices to minimize AI’s carbon footprint. These could include (1) algorithmic, hardware and data center optimization; (2) the use of green AI/ML architectures; (3) the adoption of energy-efficient algorithms and models along with ‘smart’ consumption calculation tools; (4) greater utilization of renewable energy sources; as well as (5) carbon offset initiatives and carbon credit trading.

Investing in green AI will not only benefit the environment but also enhance CSR and ESG profiles, including by aligning companies with the growing demand for sustainable practices from consumers, investors and other stakeholders. Going forward, the integration of climate change considerations into AI regulation is likely to become an important element of responsible AI deployment.


This insight has been authored by Rajat Sethi and Dr. Deborshi Barat from S&R Associates. They can be reached at [email protected] and [email protected], respectively, for any questions. This insight is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.