A Cross-Sectional Comparison of EU, China, and US Artificial Intelligence Policy Landscapes
“It is difficult to think of any major industry that artificial intelligence (AI) will not transform. This includes healthcare, education, transportation, retail, communications, and agriculture. There are surprisingly clear paths for AI to make a big difference in all of these industries,” said Andrew Ng, computer scientist and global AI leader.
President Joe Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of AI recognizes that AI holds both enormous promise and peril. AI has the potential to help solve urgent global challenges, making the world more prosperous, efficient, and secure. Meanwhile, irresponsible and unethical use of AI could exacerbate societal harms such as fraud, discrimination, bias, and disinformation. It could also displace and disempower workers, stifle competition, and pose major risks to national security.
AI will ultimately reflect the people who build it and use it, and the data that trains it. In the rapidly changing world of AI, it is important to stay current on technological progress and to ensure that global policies inform the best strategies for mitigating risks and harnessing AI’s potential. While the European Union (EU) appears to be the current leader in AI policymaking, its approach has drawn plenty of criticism. China has also emerged as an early leader in AI regulation, though its long-term AI ambitions remain uncertain and a cause for concern. As the US seeks to maintain both its technological edge and its policy leadership, there are likely lessons to be learned from the global policy realm.
EU AI Act of 2024
First proposed by the European Commission in April 2021, the EU AI Act was adopted in March 2024 by a vote of 523-46, with 49 abstentions. The historic vote makes the EU the first group of countries to enact legislation broadly regulating AI, with the aim of ensuring better conditions for the development and use of these innovative technologies across sectors including healthcare, transportation, manufacturing, and sustainable energy.
The EU’s AI regulatory framework is the first of its kind to establish rules differentiating levels of risk, with the goal of ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The framework distinguishes among unacceptable-, high-, and minimal-risk applications of AI. Unacceptable-risk AI systems are considered a threat to people and are banned outright; these include systems for cognitive behavioral manipulation, social scoring, biometric categorization, and real-time remote biometric identification. High-risk systems are those that could negatively affect safety or fundamental rights. Under the EU system, all high-risk AI systems must be assessed before being put on the market and throughout their lifecycle.
The EU AI Act has been met with both praise and criticism. Supporters note that it establishes the first comprehensive set of standards for ethical and responsible AI development, a milestone suggesting that similar regulation can be achieved in the US and elsewhere. The law also applies directly across member states, giving businesses a single set of rules. The EU’s risk-based approach, under which the permissibility of an AI system depends on a case-by-case evaluation of the risks it poses, has also increased public comfort with the integration of AI into everyday life.
Criticism of the act has been equally broad, with some suggesting that the focus on risk negates the potential benefits of AI. Opponents argue that regulation examining only risk can hinder the development of applications whose public value may outweigh those risks. Businesses have raised concerns about the complexity of compliance, which requires precise documentation, controls, and checks. The private sector has also argued that heavy-handed regulation could cost business opportunities and make the EU less competitive than other parts of the world. Finally, there are concerns about the impact that stringent standards and compliance costs will have on developing countries in Europe that see AI as a potential means of bolstering their economies.
The formal adoption of the AI Act in 2024 is not the end of the regulatory process, but just the beginning. A political agreement on the act was reached in December 2023 between the European Parliament and the Council of the European Union, and the law is expected to enter into force 20 days after its publication in the Official Journal, anticipated this May or June. EU member states will then need to appoint competent national authorities to oversee implementation, and the European Commission will need to issue guidelines to help actors apply the law’s many provisions. Given that implementation remains in its infancy, experts anticipate that additional policy changes are likely.
China: The Wildcard Case
China emerges as an outlier on AI, with little transparency about the government’s intentions behind producing AI models. China may aspire to dominance in AI, as evidenced by its ongoing efforts to roll out some of the world’s earliest and most detailed regulations governing the technology. However, this policy activity could also serve other government objectives, such as controlling content and strengthening the military and the Chinese economy. According to the Carnegie Endowment for International Peace, Beijing has been leading the way in AI regulation by releasing new strategies to govern algorithms and chatbots. Global partners, however, seek greater understanding of and transparency into what these regulations entail.
Closer study of China’s AI regulations may provide insight into the country’s AI trajectory. Implemented by the Cyberspace Administration of China and the Chinese Ministry of Science and Technology, the three most concrete and impactful regulations on algorithms and AI are the 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis, and the 2023 draft rules on generative AI. The recommendation algorithm rules bar excessive price discrimination and protect the rights of workers subject to algorithmic scheduling. The deep synthesis regulation requires conspicuous labels on synthetically generated content. The draft generative AI regulation requires both training data and model outputs to be true and accurate. There is widespread speculation that information control by the central government and the Chinese Communist Party (CCP) is the ultimate goal of all three measures.
In this age of geopolitical competition, China’s AI regulations have been largely dismissed as irrelevant in the West, especially since Chinese President Xi Jinping and the CCP have unchecked power to disregard their own rules. Regardless, policymakers around the world, including in the US, frequently point to Chinese AI governance as a reason to push their own governments to pursue some kind of AI regulation. For example, US Senate Majority Leader Chuck Schumer (D-NY) has described China’s efforts as a “wake-up call to the nation,” warning that China should not write the global rules for AI. The world should be wary as China continues to build its bureaucratic know-how and AI regulatory capacity.
US Administrative and Legislative AI Activity
The Biden Administration has walked a fine line between promoting innovation and protecting against the potential harms of AI, while Congress continues to build its AI expertise. In October 2022, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, which outlines protections for the public, including data privacy, safeguards against algorithmic discrimination, and notice and explanation. The Biden Administration has also secured voluntary commitments from leading AI companies to manage the risks posed by AI, including commitments to ensure products are safe before introducing them to the public, to build systems that put security first, and to earn the public’s trust.
Building on the Blueprint for an AI Bill of Rights, President Biden signed his AI EO in October 2023, laying out an ambitious agenda for the US to confront AI with strength and expertise. The EO directs federal agencies to establish new standards for AI safety and security that protect Americans’ privacy while advancing equity and civil rights. Specifically, it outlines guiding principles for harnessing AI for the public good: AI must be safe and secure; promote responsible innovation, competition, and collaboration; support American workers; advance equity and civil rights; incorporate safeguards against fraud, unintended bias, discrimination, and other harms; and protect Americans’ privacy and civil liberties.
The US Congress has also taken an interest in AI. Senate Majority Leader Schumer has been leading AI forums on topics including America’s AI workforce and AI applications in high-impact sectors such as health care, financial services, and the justice system. He has applauded President Biden’s EO and is championing bipartisan efforts in the Senate to draft a framework for AI legislation, working alongside Sens. Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM). Leader Schumer has argued that Congress must act with urgency so that other countries, particularly those that do not share US values, do not take the lead on AI. At the same time, he has recognized that the task is extremely difficult and far-reaching, as the industry is constantly developing and changing, and has emphasized the importance of keeping AI efforts bipartisan in today’s highly polarized political climate.
Beyond Leader Schumer’s effort, additional AI expertise has emerged from the Senate and House AI Caucuses. The Senate AI Caucus, formed in 2019 and chaired by Sens. Heinrich and Rounds, works to develop smart policies that balance AI’s risks and rewards, keeping the US economy competitive while meeting ethical standards. Similarly, the Congressional AI Caucus, co-chaired by Reps. Michael McCaul (R-TX) and Anna Eshoo (D-CA), strives to inform policymakers of the technological, economic, and social impacts of advances in AI by bringing together experts from academia, government, and the private sector to discuss the latest technologies and the opportunities they create.
More recently, House Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a bipartisan task force on AI. Launched in February 2024, the task force aims to explore how Congress can ensure America continues to lead the world in AI innovation while considering appropriate guardrails to protect the nation against emerging threats. It is jointly led by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA) and consists of 24 members, 12 from each party, representing key committees of jurisdiction. The task force’s stated goal is to produce a comprehensive report that includes guiding principles, forward-looking recommendations, and bipartisan policy proposals developed in consultation with committees of jurisdiction.
Both the US House and Senate have also started to embrace AI in their own operations. Each chamber has issued guidelines for responsible internal AI usage, including best practices for staff and rules on purchasing ChatGPT Plus licenses. The guidance directs staff to treat AI tools much like other search engines, to avoid entering sensitive information into an AI tool, and to keep in mind that anything entered into an AI tool could be used elsewhere in the world. Congressional guidance also emphasizes the importance of verifying the accuracy of AI-generated information through human review.
While none of these congressional efforts appear far enough along to suggest comprehensive AI legislation this year, it remains possible that narrower AI bills move sooner rather than later. For example, the November elections could present an opening for action on bills like the bipartisan Protect Elections from Deceptive AI Act. Introduced by Sens. Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME), the bill would ban the use of AI to generate materially deceptive content (deepfakes) that falsely depicts federal candidates in political ads with the intent to influence elections.
Conclusion
AI has taken the world by storm, and countries around the globe are seeking to respond. While policymaking may not happen as quickly as some would like, there is an argument to be made that the US is striking the right balance: neither rushing to regulate AI nor neglecting its risks, all while fostering an environment for innovation. The European and Chinese regulatory models offer lessons that can position US policymakers to decide which components would be helpful, and which harmful, to include in a US AI regulatory framework.