AI, AI, and More AI: A Regulatory Roundup
The Biden Administration releases an Executive Order. The UK holds a much-anticipated AI Safety Summit. The G7 agrees on an AI Code of Conduct. China cracks down, struggling to censor AI chatbots. The OECD attempts to forge agreement on common definitions. And the European Union plows ahead with its plans for a binding AI Act.
Ever since ChatGPT burst onto the scene, AI has jumped to the top of digital policy agendas. Here are the major initiatives, along with analyses from CEPA and elsewhere of this flurry of activity:
White House Executive Order on Artificial Intelligence
President Joseph Biden’s Executive Order on Artificial Intelligence, released on October 30, aims to put guardrails on the new technology while solidifying the US lead, including measures to attract foreign talent to Silicon Valley. Vice President Kamala Harris followed up on November 1 by announcing the creation of a new US AI Safety Institute at the UK AI Safety Summit.
- CEPA Analysis
- The Quiet US Revolution in AI Regulation by Pablo Chavez
- Leap Forward: AI Executive Order Pleases Optimists and Pessimists by Camilo Torres Casanova & Eduardo Castellet Nogués
- Top Takeaways
- Companies are required to share safety test results for particularly risky AI systems.
- The order addresses the danger of racial, religious, and gender discrimination by issuing guidance to counter AI-generated bias.
- Cloud service providers must report foreign customers to the federal government, limiting foreign actors’ ability to train AI models on US infrastructure.
- The order seeks to strengthen US tech leadership by streamlining visa applications for AI experts.
- Further Reading
- Harris Warns That the ‘Existential Threats’ of AI Are Already Here — The New York Times
UK AI Safety Summit
The UK AI Safety Summit is the world’s first global meeting focusing on the “existential risks” of emerging technologies. It targets two types of threats: “misuse” and “loss of control.”
- CEPA Analysis
- Finding a Niche: UK Takes on AI Existential Risk by Clara Riedl-Riedenstein and Bill Echikson
- Top Takeaways
- The UK AI Safety Summit focuses on frontier models and existential risk, issues not covered by other frameworks or governments, allowing the UK to carve out its own space.
- The Bletchley Declaration recognizes AI as a “potentially catastrophic risk to humanity,” with China, the US, the EU, and the UK among the 28 signatories of the agreement.
- Signatories agree to develop respective risk-based policies and enhance information-sharing across the AI scientific community.
- Further Reading
- How Sunak’s Bletchley Park summit aims to shape global AI safety — Financial Times
US Senate AI Insight Forums
Senator Chuck Schumer’s (D-NY) AI Insight Forums are designed to educate American legislators on the uses, pitfalls, and benefits of artificial intelligence. The third and fourth gatherings of private sector and civil society leaders took place on November 1. But Congress has yet to consider significant binding AI legislation.
- CEPA Analysis
- Congress Talks AI With Tech Titans by Pablo Chavez, Ylli Bajraktari, Enrique Dans, Virginia Dignum, and Koustubh Bagchi
- Top Takeaways
- The closed-door briefings have focused on Elections & Security (Meeting 1), Innovation (Meeting 2), High Impact AI (Meeting 3), and Workforce (Meeting 4).
- The Forums are designed to educate US Senators so they can introduce sensible and pragmatic legislation.
- The Forums are a part of Sen. Schumer’s SAFE Innovation Framework for AI, which keeps “innovation as the North Star” in legislation.
- Further Reading
- US Senate AI ‘Insight Forum’ Tracker — Tech Policy Press
G7 at Hiroshima, Japan
At the Hiroshima meeting of the Group of Seven (G7) in May 2023, the leaders of the world’s largest democracies agreed to work on AI principles and a Code of Conduct. But the G7 has no binding powers and critics see its code as the lowest common denominator.
- CEPA Analysis
- In Hiroshima, the G7 Seems Set to Sing Washington’s Tune on China by Matthew Eitel
- Top Takeaways
- The G7 approved an International Code of Conduct for Organizations Developing Advanced AI Systems, a voluntary framework committing companies to transparency, watermarking, and other measures that address security and privacy risks and encourage information sharing.
- The leaders agreed on the International Guiding Principles for Organizations Developing Advanced AI Systems, guidance for the public sector that promotes a risk-based approach to developing trustworthy AI.
- The G7 statement commits to further cooperation on AI Standards.
China
China has rolled out detailed regulations governing artificial intelligence, including rules for recommendation algorithms as well as for synthetically generated images and chatbots. Western critics see the moves as an attempt to control the new technology, even as some predict AI chatbots will challenge the country’s censorship regime.
- CEPA Analysis
- Transatlantic Unity on China Runs Through AI by Eva Maydell and Ylli Bajraktari
- Watch Out Russia and China: AI is a Threat by Ben Dubow
- Top Takeaways
- China requires AI software to reflect the “core values of socialism” in the Measures for the Administration of Generative Artificial Intelligence Services.
- The Regulations on Security Management of Facial Recognition Technology Applications enable authoritarian control of the population through facial and pattern recognition.
- China’s Administrative Provisions on Deep Synthesis of Internet Information Services, introduced in 2022, seek to regulate generative AI to prevent the emergence of anti-regime narratives.
- Further Reading
- What the U.S. Can Learn From China About Regulating AI — Foreign Policy
OECD AI Principles
The Paris-based Organization for Economic Co-operation and Development (OECD) set out its Principles for Artificial Intelligence in May 2019. The principles are designed to promote responsible and transparent AI systems that uphold democratic values. But they remain principles, not concrete measures.
- CEPA Analysis
- To Align or Not to Align: Mapping Global Approaches to Regulate Artificial Intelligence by Eduardo Castellet Nogués & Marielle DeVos
- Top Takeaways
- The OECD Principles have served as a foundation for international consensus on AI, including the EU-US TTC Joint Roadmap for Trustworthy AI and Risk Management.
- The principles are currently being updated to account for the development of generative AI.
- They present a multi-front approach, drawing on the organization’s focus on economics and R&D while emphasizing privacy and individual and worker rights.
- Further Reading
- How governments are beginning to regulate AI — Financial Times
EU AI Act
While some organize summits or voluntary codes of conduct, the European Union is finalizing the world’s first major binding AI legislation. After ChatGPT emerged, the European Parliament expanded the law’s scope to cover foundation models rather than only specific applications. Critics fear regulatory overkill that will hurt European competitiveness.
- CEPA Analysis
- Europe’s AI Act Nears Finishing Line — Worrying Washington by Hadrien Pouget
- Europe’s Push to Regulate AI Accelerates by Luca Bertuzzi
- Transatlantic Community Must Unite to Address AI Risks and Opportunities by Ylli Bajraktari and Lauren Naniche
- On AI and Tech, the US Must Avoid Europe’s Mistakes by Adam Kovacevich
- Top Takeaways
- AI systems are sorted into four risk categories (low, minimal, high, and unacceptable), with obligations increasing according to the risk level.
- Military AI remains outside the law’s scope. Negotiators remain divided over how to deal with government surveillance.
- A final political agreement is expected before Christmas.
Expect more initiatives, regulations, codes, and events. On the first day of the UK summit, Britain’s Technology Secretary, Michelle Donelan, announced that the next AI Safety Summit will be held in South Korea in six months, with a third to follow in France in late 2024.
Clara Riedl-Riedenstein is an intern at CEPA’s Digital Innovation Initiative.
Bill Echikson is a non-resident CEPA Senior Fellow and editor of Bandwidth.
Eduardo Castellet Nogués is a Program Assistant at CEPA’s Digital Innovation Initiative.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.