Considering EU Regulation of General-Purpose AI Systems
The Center for Data Innovation is hosting a discussion on the implications of the proposed amendments to the European Union's AI Act for general-purpose AI (GPAI) systems. This video webinar, scheduled for September 13, 2022, from 10:00 AM to 11:00 AM EDT (4:00 PM to 5:00 PM CEST), will bring together industry experts to examine this important topic.
The discussion will be moderated by Hodan Omaar, Senior Policy Analyst at the Center for Data Innovation. The panel of speakers includes Kai Zenner, Head of Office & Digital Policy Advisor to MEP Axel Voss, European Parliament; Anthony Aguirre, Vice-President for Policy and Strategy at the Future of Life Institute; Irene Solaiman, Policy Director at Hugging Face; Andrea Miotti, Head of AI Policy and Governance at Conjecture; Alexandra Belias, International Public Policy Manager at DeepMind; and Andrea Giuricin, Senior Researcher at the University of Bologna.
The French presidency has proposed expanding the EU's draft law for regulating high-risk AI tools to include GPAI systems. These systems, capable of performing a wide range of tasks, often power more specific applications such as image and speech recognition, pattern detection, and translation. The proposed expansion aims to ensure that high-risk AI applications built on these models pose as little risk as possible.
However, this broad-brush approach has sparked debate. Opponents of the French proposal argue that it may inadvertently capture a wide swath of systems never intended for high-risk use cases.
The proposed amendments to the EU's AI Act for GPAI systems introduce specific governance rules and obligations. These include mandatory transparency requirements, the creation of detailed technical documentation, and disclosure of copyrighted materials used in training these models. GPAI systems categorized as high-risk face additional obligations, such as rigorous model evaluations, adversarial testing, and mandatory incident reporting.
These amendments reflect the EU’s attempt to regulate AI systems that serve multiple purposes rather than a single narrow task. The aim is to balance innovation with user safety and fundamental rights protections.
The implications for AI development and deployment in the EU and beyond are significant. The detailed transparency and evaluation requirements impose higher compliance burdens on AI developers and providers, potentially increasing costs and slowing market entry, with a disproportionate impact on smaller companies and startups.
As a pioneering legal framework with extraterritorial reach, the AI Act sets a global benchmark. Non-EU AI providers seeking access to the EU market must comply, which could shape global AI governance practices and prompt similar regulations elsewhere.
By enforcing strict accountability and human oversight, the Act aims to mitigate ethical, safety, and societal risks associated with AI, fostering greater public trust in AI technologies in the EU market.
Despite the intended protections, major European companies have expressed concern over the regulatory complexity and potential competitiveness challenges. Their calls for a regulatory pause were denied, signaling the EU's commitment to its current timeline.
The discussion will be interactive, with speakers taking audience questions submitted via Slido. Register for this event to gain a deeper understanding of the proposed amendments and their potential impact on AI development, innovation, and deployment in the EU and beyond.
- The discussion of the EU's AI Act amendments will explore the implications for general-purpose AI (GPAI) systems, which often underpin technologies such as image and speech recognition and translation, and the balance these rules must strike between innovation and user safety.
- The proposed amendments set out specific governance rules for GPAI systems, including mandatory transparency requirements, detailed technical documentation, and disclosure of copyrighted materials used in training. These obligations could raise AI development costs and slow market entry, particularly for smaller companies.
- The AI Act aims to serve as a global benchmark for regulating AI systems that serve multiple purposes, enforcing strict accountability and human oversight to foster public trust in AI technologies, even as major European companies raise concerns about regulatory complexity and competitiveness.