Artificial Intelligence is no longer a distant concept — it is shaping daily decisions in business, government, and society. Peru has now taken a decisive step to set the ground rules.
The Peruvian AI Act (Law 31814) and its Regulations (Supreme Decree N.° 115-2025-PCM) represent an important first step in building a general regulatory framework and set out the roadmap for both the public and private sectors in the coming years. The text designates the Secretariat of Government and Digital Transformation (SGTD) as the technical-regulatory authority, responsible for leading, assessing, and overseeing the use of AI. Should non-compliance be detected, the SGTD will refer matters to the competent authority, whether the Data Protection Authority, Indecopi, the Superintendence of Banking and Insurance (SBS), or the Public Prosecutor's Office, so that each can enforce the Regulations within its own mandate.
The Regulations properly enshrine key principles that should guide the development and use of AI: non-discrimination, data protection, algorithmic transparency, human oversight, and respect for fundamental rights. This approach reflects a central idea: AI must serve collective well-being and must not become a source of exclusion, opacity, or rights violations. Technology can bring enormous benefits, provided it is implemented under clear ethical and legal parameters.
One of the core contributions is the risk-based classification into two main categories. First, "prohibited uses," which include the subliminal manipulation of individuals, lethal autonomous systems in the civilian sphere, mass surveillance without a legal basis, and biometric analysis that leads to discrimination. Second, "high-risk uses," which cover areas such as education, healthcare, social programs, employment selection, justice, financial services, and the management of critical assets. Prohibited uses must be eliminated entirely, while high-risk uses may be developed subject to strict conditions of transparency, human oversight, and control.
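For organizations taking stock of their portfolios, the practical first step is a simple triage against these two categories. The sketch below is purely illustrative: the category labels follow the summary above, but the concrete use cases, the mapping, and the helper names are hypothetical assumptions, not a legal determination under the Regulations.

```python
# Illustrative triage of AI use cases against the two categories described above.
# The mapping and names here are hypothetical examples, not legal classifications.

from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited use"      # must be eliminated entirely
    HIGH_RISK = "high-risk use"        # allowed under strict conditions
    OTHER = "outside both categories"  # lower-risk or unregulated uses

# Hypothetical examples drawn from the areas listed in the article's summary.
EXAMPLES = {
    "subliminal behavioural manipulation": RiskCategory.PROHIBITED,
    "mass surveillance without legal basis": RiskCategory.PROHIBITED,
    "CV screening for employment selection": RiskCategory.HIGH_RISK,
    "credit scoring in financial services": RiskCategory.HIGH_RISK,
    "spell checker in an office suite": RiskCategory.OTHER,
}

def triage(use_case: str) -> RiskCategory:
    """Return the illustrative category for a known example, defaulting to OTHER."""
    return EXAMPLES.get(use_case, RiskCategory.OTHER)

if __name__ == "__main__":
    for case, category in EXAMPLES.items():
        print(f"{case}: {category.value}")
```

An inventory of this kind does not replace legal analysis, but it gives compliance teams a shared starting point for deciding which systems need the strict transparency and oversight conditions described above.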
High-risk systems will be required to clearly disclose their purpose, how they function, and the types of decisions they generate. The Regulations even provide for visible labeling of AI-generated products and services so that citizens can recognize them. The requirements for public entities are stricter: they must adopt ethical-use policies, apply technical standards such as NTP-ISO/IEC 42001:2025, publish the source code of systems financed with public funds, and always ensure effective human supervision.
The Regulations also introduce AI impact assessments. For the public sector, these are mandatory for high-risk systems; for the private sector, they are voluntary but strongly encouraged. Such assessments allow organizations to anticipate biases, measure impacts, and document mitigation measures. Companies conducting these evaluations will be better positioned with clients, regulators, and investors.
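As a minimal sketch of what such documentation might look like internally, the example below records the elements the article highlights: purpose, risk category, identified biases, mitigation measures, and human oversight. The field names and the sample record are assumptions for illustration only; the Regulations and SGTD guidance define the actual required content and format.

```python
# A minimal, assumed sketch of an internal AI impact assessment record.
# Field names and the example values are hypothetical, not a prescribed template.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    risk_category: str                            # e.g. "high-risk" under the Regulations' classification
    identified_biases: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    human_oversight: str = ""                     # who reviews the system's decisions, and how
    assessed_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        """Simple internal check: any identified bias must have at least one mitigation."""
        return not self.identified_biases or bool(self.mitigation_measures)

# Example record for a hypothetical hiring tool.
assessment = ImpactAssessment(
    system_name="candidate-ranking-model",
    purpose="Rank applicants for interview shortlisting",
    risk_category="high-risk (employment selection)",
    identified_biases=["lower scores for candidates with career gaps"],
    mitigation_measures=["remove gap-related features", "periodic fairness audit"],
    human_oversight="HR reviewer approves every shortlist",
)
print(assessment.is_complete())
```

Keeping even a lightweight record of this kind makes it easier to show clients, regulators, and investors that biases were anticipated and mitigations documented, which is precisely the advantage the Regulations reward.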
Another key element is the implementation schedule, which ranges from one to four years. These timelines provide public and private institutions with the necessary space to progressively adapt, thus preventing the Regulations from being perceived as an immediate obstacle.
Nevertheless, challenges remain. Some concepts, such as the "classification of individuals," are ambiguous and may lend themselves to discretionary interpretations that could hinder legitimate projects. Compounding this, more than twenty legislative initiatives are pending in Congress, heightening the risk of overregulation and generating uncertainty. The most reasonable course would be to allow the law and its Regulations to mature before introducing additional burdens.
Despite these tensions, the Regulations open fertile ground for strategic action. Companies can raise their governance standards, invest in specialized talent, adopt best practices, and strengthen transparency as a competitive advantage. The State, in turn, must issue clear guidelines and coordinate with the private sector to avoid duplication and regulatory excess. The real challenge lies in achieving a balance between the protection of fundamental rights and economic dynamism. This balance will be decisive for Peru’s ability to compete regionally in technological innovation.
The Regulations should not be viewed as an endpoint, but rather as a starting framework. Their effectiveness will depend less on what is written on paper and more on how they are implemented in practice. The Peruvian government has taken the first step. It is now up to the private sector to become informed, invest in training, and adopt the new framework to demonstrate that innovation can indeed go hand in hand with rights protection. The goal must be to position ourselves as regional leaders in the ethical and innovative use of AI, rather than fall behind in the face of an unprecedented transformation.