EU Proposes Artificial Intelligence Regulations: What Does This Mean for the U.S.?
On Wednesday, April 21, 2021, the European Union (EU) officially set forth the “Proposal for a Regulation laying down harmonized rules on artificial intelligence,” more commonly referred to as the Artificial Intelligence Act (hereinafter, “the Regulation”). This set of rules would govern the use of artificial intelligence (AI) within the jurisdiction of the EU. The Regulation specifically addresses the risks posed by AI and reflects a goal of positioning the EU as a global leader in AI regulation. This would not be the first time the EU has promulgated a set of rules governing new uses of emerging technologies.
After the EU passed the General Data Protection Regulation (GDPR) in 2016, a comprehensive regulation designed to protect individual data privacy across Europe, the rest of the world followed closely behind. Companies realized that by running their businesses online, and inevitably catering to some European citizens, they would need to overhaul their privacy policies and notify their customers of the sweeping changes. Other countries began drafting and passing privacy legislation largely based on the provisions of the European regulatory framework. The State of California passed the California Consumer Privacy Act (CCPA) only one month after the GDPR officially took effect, a development that has fueled a push for other state and federal privacy laws in the U.S.
Given the widespread adherence to the GDPR, there is reason to believe that other countries, including the U.S., will follow suit on AI. While the Regulation will likely take time to progress through the European legislative process, it is a positive sign that at least one legislative body is addressing the issues surrounding AI as the technology continues to advance at a rapid pace.
The Regulation targets uses of AI based on the severity of the risk they pose. For instance, it would prohibit outright any uses it deems “unacceptable,” such as the exploitation of children or disabled individuals, or the creation of “social scores” leading to unreasonably inequitable treatment of individuals. AI systems considered “high-risk” would be subject to more stringent measures, such as risk assessments, detailed documentation and human oversight. As the proposed law currently stands, “high-risk” systems would include those that affect individual safety, credit and hiring decisions, among other considerations.
As for enforcement, the European Commission (EC) has proposed the establishment of a “European Artificial Intelligence Board,” which would serve as a regulatory body. Additionally, the EC has noted that violators will first be notified of their violations, accompanied by a request to remedy them. However, the Regulation currently contemplates fines of up to 30 million euros (approximately $36 million) for subsequent violations of the law or, for companies, fines of up to 6% of their total worldwide annual revenue, whichever is higher.
Unlike the GDPR, which distinguishes between the obligations of data “processors” and “controllers,” the Regulation intends to broadly govern the use of AI systems by providers, distributors and users alike. While the EU plans to regulate all parties involved with AI, this is a provision that is almost certain to require further clarification, given the complexity of the technology and the nascence of the industry. More importantly, the Regulation’s geographic applicability as currently drafted is also likely to cause confusion. For instance, the Regulation currently applies to companies based outside of the EU that operate AI systems in the EU, as well as to those that use system output in the EU. Furthermore, companies will be subject to the Regulation if 1) they collect any data in the EU, 2) they use it extraterritorially for a high-risk AI system and 3) that use has an impact on individuals in the EU. In short, AI or data collection schemes with any potential connection to the EU could trigger the Regulation once enacted, but such questions should be answered as the law continues to take shape.
However, debate about the Regulation in its current form has already begun. For example, while there is a narrow exception allowing law enforcement officials to use AI systems in emergency situations such as terrorist threats, many have been quick to criticize the exemption as a potential catalyst for further invasions of privacy and/or unfair discrimination against certain groups of people. More significantly, individuals and industry stakeholders have requested firmer definitions of “high-risk” and “non-high-risk” AI systems. Based on the Regulation’s current text, there is a significant chance that the list of systems falling under each category could evolve over time, which would force companies to continually adjust their business models. Additionally, the proposed codes of conduct that could be used to limit liability for “non-high-risk” AI systems have been accompanied by minimal guidelines, and companies have asked the EC to elaborate on them for business planning purposes.
Because they operate online, many U.S. companies could be said to target individuals in the EU and will therefore need to keep an eye on the Regulation as it develops. This has sparked a debate over AI regulation on U.S. soil. Since passage of the GDPR, the U.S. has not enacted a national privacy law, although individual states like California and Virginia have since passed their own. Even prior to the GDPR, federal laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) had been enacted to protect consumers in specific industries from various privacy violations. Some experts believe the EU’s regulation of AI should inform similar frameworks in the U.S., whether in the form of a comprehensive federal law or state-specific legislation. Others think the definitions of “high-risk” and “non-high-risk” are too broad to apply to AI in the U.S.; for instance, many social media platforms use AI in their algorithms to target specific age groups, which could be considered exploitative under the Regulation. Despite the Regulation’s intended effort to mitigate the risks posed by AI, some critics believe the U.S. should be wary of curtailing innovation.
Of course, the Regulation is certain to evolve over time. The proposed framework will continue to work its way through the EU legislative process, involving several “readings” by the European Parliament and various modifications by the EC, a process that could last several years. The proposal is currently open for public comment until June 22, so companies using AI should consider how the Regulation may affect the way they conduct business and provide feedback accordingly.
Lutzker & Lutzker will continue to monitor the developments surrounding the Artificial Intelligence Act and keep you informed. Anyone interested in more active involvement in the EU comment period should contact us as soon as possible in order to explore the best options for effective participation.