Published October 30, 2025

Towards integrated regulation: the path of AI

Global AI regulation is fragmented, posing political risks for emerging markets. Speyside Latin America and Speyside Mexico examine the corporate affairs challenge and regulatory lag. Get the analysis.

“AI is too important not to regulate and too important not to regulate well.”

Kent Walker, President of Global Affairs at Google and Alphabet

Artificial intelligence (AI) is transforming and redefining industries around the world. Its dizzying advance, visible in fields as diverse as vaccine development and the redesign of Formula 1 cars (which can undergo hundreds of modifications every week), has created great opportunities but has also raised risks and challenges that demand comprehensive regulation.

The discussion of these ethical, social and political risks reached Rome a few days ago, where the Second Annual Conference on AI took place. There, Pope Leo XIV acknowledged that while this technology has been used positively to promote greater equality, “there is the possibility that it may be misused for selfish gain at the expense of others or, worse, to foment conflict and aggression.”

But the international debate on its regulation is divided. At the Paris Summit in February, the United States and the United Kingdom refused to sign a declaration endorsed by more than 60 countries calling for inclusive, ethical and safe AI, arguing that over-regulation could inhibit innovation. This disagreement reflects a larger problem: global regulatory fragmentation that prevents unified and universal governance.

As a result, countries and organizations have adopted different approaches. The European Union banned uses considered “unacceptably risky,” such as facial recognition, biometric surveillance and social scoring. The G7 promoted principles and codes of conduct to address threats such as disinformation, invasion of privacy and violation of intellectual property. For its part, the Association of Southeast Asian Nations (ASEAN) promoted an ethical guide with educational recommendations to lay a moral foundation for the development of these technologies, while China implemented regulations requiring security assessments, algorithm registration for AI providers with social mobilization capabilities, and differentiation between real and AI-generated content.

The private sector has also taken relevant steps. Google, for example, proposes a regulation based on specific risk analysis and adapted to each use case. This proposal seeks to avoid general approaches that limit innovation without responding adequately to the real dangers.

Mexico moving forward little by little

In Mexico, the National Artificial Intelligence Alliance (ANIA) presented the “Proposed National Artificial Intelligence Agenda for Mexico 2024-2030”, with recommendations on policies, regulation, governance and measurement indicators to integrate AI processes in public administration, industry, the education system, scientific research and technological development. However, the lack of a National Strategy and an institution specifically dedicated to this matter has left the country behind. Because of these shortcomings, the Latin American AI Index (ILIA) places Chile, Brazil and Uruguay as leaders in the region, while Mexico ranks sixth.

Despite these efforts, and the dozens of initiatives presented in recent years, no progress has yet been made in updating the legal framework, chiefly because a constitutional reform is needed first to empower the Congress of the Union to legislate on the matter at the national level.

In this context, what does “good regulation” mean? The global organization “The Ambit,” in its document “Voices from Southeast Asia: On Global AI Governance,” grouped approaches to AI regulation under three broad headings:

Risk-based approach, which consists of identifying and evaluating potential risks and the measures that must be adopted to mitigate them. These may be risks to human rights, health and security (violation of privacy and surveillance of citizens); risks to national security (cyber-attacks, data leakage, biometric identification, disinformation and manipulation of information); or risks of interference in the democratic process (biases in algorithms and dissemination of fake news).

Principled approach, which prioritizes ethical and moral considerations such as avoiding the use of technology for discriminatory purposes.

Value-based approach, which focuses on objectives to be achieved, such as the defense of democracy or the protection of human rights.

Each country will have to choose its approach according to its context. But only a true collaboration between government, the private sector and academia will make it possible to develop regulatory frameworks that promote innovation without compromising fundamental rights.

Regulating artificial intelligence is not just a technical issue. It is a political, ethical and strategic task, because what is at stake is not only the future of technology, but the future of our society.

Conclusion

Regulating AI is essential—not just to guide technology, but to protect our values and future. Only through global cooperation and thoughtful frameworks can we ensure AI benefits everyone.
