Published October 30, 2025

Towards integrated regulation: the path of AI

Global AI regulation is fragmented, posing political risks for emerging markets. Speyside Latin America and Speyside Mexico examine the corporate affairs challenge and regulatory lag. Get the analysis.

“AI is too important not to regulate and too important not to regulate well.”

Kent Walker, President of Global Affairs at Google and Alphabet

Artificial intelligence (AI) is transforming and redefining industries worldwide. Its dizzying advance, visible in fields as diverse as vaccine development and the redesign of Formula 1 cars (which can undergo hundreds of modifications every week), has generated great opportunities but has also posed risks and challenges that demand comprehensive regulation.

The discussion of these ethical, social and political risks reached Rome a few days ago at the Second Annual Conference on AI, where Pope Leo XIV acknowledged that while this technology has been used positively to promote greater equality, “there is the possibility that it may be misused for selfish gain at the expense of others or, worse, to foment conflict and aggression.”

But the international debate on its regulation is divided. At the Paris Summit in February, the United States and the United Kingdom refused to sign a declaration, endorsed by more than 60 countries, calling for inclusive, ethical and safe AI, arguing that over-regulation could stifle innovation. This disagreement reflects a larger problem: global regulatory fragmentation that prevents unified, universal governance.

As a result, countries and organizations have adopted different approaches. The European Union banned uses considered to pose “unacceptable risk,” such as facial recognition, biometric surveillance and social scoring. The G7 promoted principles and codes of conduct to address threats such as disinformation, invasion of privacy and violation of intellectual property. For its part, the Association of Southeast Asian Nations (ASEAN) promoted an ethical guide with educational recommendations to cement a moral foundation for the development of these technologies, while China implemented regulations requiring security assessments, algorithm registration for AI providers with social mobilization capabilities, and differentiation between real and AI-generated content.

The private sector has also taken relevant steps. Google, for example, proposes a regulation based on specific risk analysis and adapted to each use case. This proposal seeks to avoid general approaches that limit innovation without responding adequately to the real dangers.

Mexico: moving forward little by little

In Mexico, the National Artificial Intelligence Alliance (ANIA) presented the “Proposed National Artificial Intelligence Agenda for Mexico 2024-2030”, with recommendations on policies, regulation, governance and measurement indicators to integrate AI processes in public administration, industry, the education system, scientific research and technological development. However, the lack of a National Strategy and an institution specifically dedicated to this matter has left the country behind. Because of these shortcomings, the Latin American AI Index (ILIA) places Chile, Brazil and Uruguay as leaders in the region, while Mexico ranks sixth.

Despite these efforts, and the dozens of bills introduced in recent years, the legal framework has yet to be updated, above all because a constitutional reform is first needed to empower the Congress of the Union to legislate on the matter at the national level.

In this context, what does “good regulation” mean? The global organization The Ambit, in its document “Voices from Southeast Asia on Global AI Governance,” groups approaches to AI regulation under three broad headings:

Risk-based approach, which consists of identifying and evaluating potential risks and adopting measures to mitigate them. These may include risks to human rights, health and security (violation of privacy, surveillance of citizens); risks to national security (cyber-attacks, data leaks, biometric identification, disinformation and manipulation of information); and risks of interference in the democratic process (algorithmic bias and the spread of fake news).

Principled approach, which prioritizes ethical and moral considerations such as avoiding the use of technology for discriminatory purposes.

Value-based approach, which focuses on objectives to be achieved, such as the defense of democracy or the protection of human rights.

Each country will have to choose its approach according to its context. But only a true collaboration between government, the private sector and academia will make it possible to develop regulatory frameworks that promote innovation without compromising fundamental rights.

Regulating artificial intelligence is not just a technical issue. It is a political, ethical and strategic task, because what is at stake is not only the future of technology, but the future of our society.

Conclusion

Regulating AI is essential, not just to guide technology but to protect our values and our future. Only through global cooperation and thoughtful frameworks can we ensure AI benefits everyone.
