The Global Race Toward Regulation: The Future of Artificial Intelligence

Artificial intelligence (AI) has been a hot topic of conversation in 2023, with ongoing concerns about privacy, disinformation, and cybersecurity. While AI promises to significantly boost technological advancement across multiple sectors, the pressing question is how to govern this rapidly evolving field in a way that balances innovation and the public interest. Most eyes are on the United States and other leading Western countries for guidance on AI regulation. Historically, the European Union (EU) has taken the lead on privacy and data concerns, most notably with the General Data Protection Regulation (GDPR), adopted in 2016. In June of this year, the European Parliament passed the AI Act – the first comprehensive proposed regulatory and legal framework for AI.

While the EU continues to lead the charge on AI regulation, the US is still exploring AI’s capabilities and potential legal and regulatory approaches amid criticism from both sides of the aisle. The US still lacks a federal privacy law, leaving companies to navigate a hodgepodge of state-level measures – like the California Consumer Privacy Act (CCPA), which applies only to businesses and consumers in California. At the federal level, Senate Majority Leader Chuck Schumer’s (D-NY) Security, Accountability, Foundations, Explainability (SAFE) Innovation Framework is the most notable plan under consideration. The question, then, is this: Will the US regulate AI on a state-by-state basis, much as it has with privacy and the CCPA, or will the SAFE Innovation Framework serve as the guide for federal law?

The EU AI Act

When the European Parliament passed the EU AI Act, it marked the first proposed law on AI by a major regulator. The Act covers most existing AI applications, sets up a surveillance and enforcement scheme, and establishes a mechanism for organizations and companies to scrutinize high-risk AI systems. It sorts AI applications into three risk tiers: unacceptable risk, high risk, and limited or minimal risk. Unacceptable-risk applications, such as the government-run social scoring prevalent in China, are prohibited. High-risk applications, which might include AI used to screen job applications, are subject to legal requirements commensurate with the risk. Applications deemed to pose limited or minimal risk remain largely unregulated.

In addition to defining risk, the Act mandates the creation of a single national supervisory authority (NSA) in each of the 27 member states. Although designed to streamline communication among member states, this also means AI specialists in business, commerce, and law will operate out of separate national offices. Moreover, the Act allows companies to become “notified bodies” that facilitate government review and approval of AI systems. The EU anticipates that this system will foster an independent AI assessment process, enhancing transparency and fairness.

The Act also introduces rules for generative AI. Providers of generative AI systems must disclose that content was AI-generated, design their models to prevent the generation of illegal content, and publish summaries of the copyrighted data used to train these systems.

The Act now faces a “trilogue” – a negotiation among the European Commission, Council, and Parliament – where key issues concerning its proposed structure and implementation will be examined. Several organizations have seized this opportunity to express their concerns about the AI Act in hopes of amending its language before the Act is finalized. The Oxford Commission on AI & Good Governance, for instance, has noted that the AI Act concentrates heavily on AI’s impact at the individual level rather than the societal level. This concern is echoed by the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and Centre for the Study of Existential Risk, which argue that the Act in its current form is “inflexible,” lacking mechanisms to update what counts as a “high-risk” or “banned” AI application as the technology evolves. They advocate for “adaptive regulation” that can keep pace with AI’s development while also accommodating small-scale, individual-level innovation. Looking ahead, the question of civil liability remains unresolved, with stakeholders hoping it will be clarified during the trilogue. Whatever the outcome of debates over risk definitions and regulatory mechanisms, the ultimate success of the AI Act hinges on its enforceability and execution.

The US’s SAFE Innovation Framework

A week after the EU AI Act was passed, Senate Majority Leader Schumer unveiled his SAFE Innovation Framework. While the EU AI Act is legislative text, the SAFE Innovation Framework is an evolving set of principles meant to guide Congress in crafting future laws and regulations. The framework, along with accompanying forums where members of Congress can consult with AI experts, aims to bolster economic security, combat misinformation, protect consumer privacy, encourage innovation by collaborating with AI developers, and position the US ahead of global competitors, namely China.

Leader Schumer urged the federal government to work alongside the private sector, asking: “If the government doesn’t step in, who will fill its place?” He has highlighted “explainability” as critical for ensuring AI accountability and transparency with consumers. Nevertheless, he has acknowledged that the algorithms and behind-the-scenes workings of AI, while conferring great value, also demand careful attention to privacy and intellectual property protections. They represent, as Leader Schumer puts it, “the highest level of intellectual property (IP) for AI,” and he has recognized that requiring companies to reveal their IP would stifle innovation and give adversaries an upper hand.

The SAFE Innovation Framework signifies an “all hands on deck” effort from the Senate, with multiple committees tasked with drafting bipartisan legislation. Leader Schumer created a bipartisan working group currently headed by himself along with Sens. Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD). 

Despite the bipartisan support, the framework has also drawn bipartisan criticism, chiefly around Section 230 protections and the potential need for a new, AI-focused agency. Section 230, part of the Communications Decency Act of 1996, provides immunity for online computer services with respect to third-party content generated by their users. Sens. Michael Bennet (D-CO), John Thune (R-SD), Richard Blumenthal (D-CT), Marsha Blackburn (R-TN), and Josh Hawley (R-MO) have all expressed concern about Section 230. These members sit on key committees with jurisdiction over AI issues: Senator Blumenthal chairs the Permanent Subcommittee on Investigations, Senator Bennet sits on the Select Committee on Intelligence, and Senator Blackburn sits on the Commerce, Science, and Transportation Committee. Just a week before the SAFE Innovation Framework’s release, Sens. Blumenthal and Hawley introduced the No Section 230 Immunity for AI Act, which would allow consumers to sue AI companies in federal or state court. Simply put, AI companies would be liable for AI-generated material, prompting a broader conversation about the interplay among consumers, the private sector, and their respective liberties. The bill also raises important ethical questions about AI-managed data and privacy. Expect Section 230 and related topics to be debated during the proposed AI Insight Forums.

While Section 230 chatter circulates around Congress, other members have voiced different concerns, notably about Congress’s capacity to manage AI-related issues. For example, Reps. Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA) and Sen. Brian Schatz (D-HI) have proposed the creation of a new, dedicated AI body – the National Commission on Artificial Intelligence. Democrats such as Sen. Bennet support the idea, since “Congress is never going to do it on its own.” Others in the Republican caucus, however, see it as potentially redundant: Rep. Jay Obernolte (R-CA) has expressed concern that it would duplicate existing efforts by federal agencies still “grappling with the problem of how to establish rules regarding A.I. within their sectoral spaces.”

Looking Ahead

Despite the breadth of Leader Schumer’s SAFE Innovation Framework, the EU’s AI Act – and the EU in general – has a head start in regulating AI. The UK, for its part, plans to host the inaugural international “AI Summit” this fall with other European countries and researchers, positioning itself as a global leader in AI discourse.

Nonetheless, the US’s apparent lag could be a strategic advantage, allowing for careful deliberation as AI and its capabilities continue to evolve. Leader Schumer’s approach of pulling expertise from all angles offers great potential, even if it means the US is playing “catch-up” with the EU. The US framework, with its AI Insight Forums and debates over Section 230, leaves ample room for discussion.

From an optimistic angle, the delay could serve as an opportunity for a more thorough review of regulation. As Leader Schumer stated: “It’s in our nature to press ahead.” Both sides of the aisle aim to make the US the global leader in technology and innovation. While American values and congressional tradition suggest that Capitol Hill is likely to prioritize consumers over the private sector, adopting critical amendments as needed, Congress has also abstained from regulation to avoid stifling private-sector innovation. AI regulation is entirely new territory, making it difficult to predict Congress’s next move.

It remains to be seen how the EU’s AI Act will evolve – whether, for instance, it will maintain a single NSA per member state or create multiple authorities based on subject area. The outcome could provide valuable insights for the US as it develops its own regulatory system, potentially leaving states to decide their own AI regulation, much like the privacy debates that followed the GDPR in 2016 and produced the CCPA.

Ultimately, the US will need to balance privacy concerns and reach a bipartisan resolution to the Section 230 debate. Achieving consensus on that issue could set a precedent for navigating other critical areas, such as national security and innovation. While the US may appear to be in a “catch-up” phase compared to some of its international counterparts, the rapidly evolving nature of AI makes it uncertain who is genuinely leading. The AI landscape is as unpredictable as it is promising, underscoring that the true frontrunner in this field has yet to be determined.