On February 11, 2025, The Datasphere Initiative hosted a key side event at the AI Action Summit in Paris, titled “Advancing Global AI Governance: Exploring Adaptive Frameworks and the Role of Sandboxes.” The event brought together thought leaders, policymakers, and experts to discuss how adaptive governance frameworks and AI sandboxes can guide the rapid development of artificial intelligence (AI) while ensuring accountability and ethical responsibility.
The Urgency of Adaptive Governance Frameworks
The event began with Andrew Wilson, Deputy Secretary General of Policy at the International Chamber of Commerce (ICC), stressing the importance of finding effective AI governance solutions, particularly in light of geopolitical tensions and rapid technological change. Wilson introduced the ICC’s four-pillar model, which balances principles, regulation, technical standards, and self-regulation to foster AI development while addressing risks.
Amanda Craig, Senior Director of Responsible AI Public Policy at Microsoft, noted the growing complexity of AI governance due to the proliferation of regulatory frameworks across different sectors and jurisdictions. She called for a set of common governance practices that could work across diverse environments. Lucia Russo, Economist and Policy Analyst at the OECD, introduced the Hiroshima AI Process, which promotes voluntary commitments, transparency, and interoperability to ensure a more integrated approach to global AI regulation.
The Role of AI Sandboxes in Governance
A central focus of the event was the role of AI sandboxes as a tool for governance. Lorrayne Porciuncula, Executive Director of the Datasphere Initiative, emphasized the need for real-world testing and collaboration between developers and regulators in AI sandboxes. She shared insights from the Datasphere Initiative’s latest report, which maps over 60 AI sandbox initiatives worldwide and calls for greater global cooperation. Félicien Vallet, Head of the AI Department at CNIL, explained how sandboxes allow companies to test AI applications while ensuring compliance with existing regulations, offering clarity without regulatory exemptions.
The need for regulatory sandboxes in Europe is particularly pressing, especially with the upcoming EU AI Act, which mandates sandbox creation by 2026. Alex Moltzau, Policy Officer at the European AI Office, discussed how the Act will support startups by providing free, compliant testing environments, but also acknowledged the practical challenges in securing funding and resources for such initiatives.
Promoting Equity Through AI Sandboxes
The event also highlighted how AI sandboxes could promote equity in AI governance. Rachel Adams, CEO of the Global Center on AI Governance, noted that sandboxes offer a chance to level the playing field for countries with limited resources, enabling them to implement AI regulations that align with global standards. Benjamin Chua, Senior Manager at Singapore’s AISI, shared how Singapore’s open-source AI testing toolkits and third-party assurance ecosystem allow AI to be tested in real-world conditions, demonstrating how local and international regulatory frameworks can work together to support responsible AI development.
Practical Insights on Implementing AI Sandboxes
The panel also explored the challenges of designing and implementing AI sandboxes. Caroline Louveaux, Chief Privacy Officer at Mastercard, emphasized the need for clearly defined objectives, timelines, and data protection safeguards. Keith Sabilika, Senior Fintech Specialist at the Financial Sector Conduct Authority of South Africa, stressed the importance of collaboration between regulators, especially when considering emerging and complex innovations such as crypto assets and AI. HaeOk Choi, Research Fellow at the Science and Technology Policy Institute, explained how fast-track approvals and government support can accelerate innovation. She showcased Korea’s sandbox program, which has supported over 1,400 projects, while noting that, despite this potential, interoperability across ministries remains challenging. Choi also emphasized the need for a transition to a risk-based regulatory framework to respond effectively to emerging industries such as AI.
Raphael von Thiessen, AI Sandbox Program Leader at the Office of Economy in the Canton of Zurich, highlighted the flexibility of regional sandboxes, which can align with local startup ecosystems. He reminded participants that sandboxes should be viewed as just one tool among many. Romanas Zontovičius, ICT Industry Manager at Innovation Agency Lithuania, emphasized the importance of expert consultations and legal advice to ensure that sandboxes are supportive rather than punitive.
A Call for Continued Collaboration
The event concluded with Bertrand de La Chapelle, Chief Vision Officer at the Datasphere Initiative, calling for greater collaboration across sectors. Trust between parties and sectors emerged as a key theme throughout the discussions: companies need assurance that participation in sandboxes will not expose them to undue risks, while regulators and civil society demand real accountability. De La Chapelle urged ongoing dialogue between governments, businesses, civil society, and academia to address the complexities of AI governance.
Shaping the Future of AI Governance
The discussions underscored that AI sandboxes are essential tools for testing and refining AI governance frameworks, but must be integrated into broader, well-crafted regulatory structures. By fostering global cooperation and an iterative approach to sandbox testing, stakeholders can develop governance models that balance innovation with safety.
The Datasphere Initiative’s newly released report offers valuable guidance for both policymakers and the private sector, providing a roadmap for designing and implementing effective AI sandboxes. These efforts are key to shaping a responsible, equitable future for AI, ensuring that governance frameworks evolve alongside technological advancements.