AI Regulation: The Slow Transition from Chaos to Order

Everyone in Washington seems to be rushing to dip their toes into the murky waters of artificial intelligence (AI), yet it remains unclear who has jurisdiction and who will ultimately take charge. The hurry to get involved comes with the realization that AI may be moving faster than Washington can keep pace, a gap that could produce missed opportunities and dangerous shortfalls.

Congressional Committees and Subcommittees

The last time Congress reorganized committee jurisdictions was in the 1970s. Legislators at the time could not have anticipated the internet, let alone how it should be regulated, which has created ambiguity around the right regulatory path, especially with regard to AI. No committee has exclusive jurisdiction, leaving the field essentially wide open to any committee that sees AI as relevant to its purview. A number of committees have begun holding both hyper-specific and more general hearings, as well as introducing AI legislation.

In the Senate, the Commerce, Science, and Transportation Committee believes that it has primary jurisdiction, yet other committees have been claiming a role, citing specific areas of AI's reach that fall within their charters. The Senate Homeland Security and Governmental Affairs Committee has claimed jurisdiction over federal government activities and procurement; the Senate Health, Education, Labor, and Pensions (HELP) Committee has a part to play where AI touches healthcare, education, and the workforce; and the Senate Armed Services Committee has taken up AI within Department of Defense operations, as seen in the Subcommittee on Cybersecurity's hearing on the state of AI and machine learning applications to improve those operations.

The House and Senate Judiciary Committees have recently approached AI in areas including intellectual property (IP) and human rights. The full Senate committee held an oversight hearing focused on rules for artificial intelligence, during which OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery welcomed government regulation and guidance, a rare posture that underscores the precariousness of the moment. The Subcommittee on Human Rights and the Law held a hearing on AI's effects on human rights, and the Subcommittee on Intellectual Property held its own hearing on patents, innovation, and competition. On the other side of the Capitol, the House Judiciary Committee held a hearing on AI and intellectual property, focusing on the interoperability of AI and copyright law.

In addition to the House Judiciary Committee, the House Energy and Commerce Committee is also likely to play a regulatory role, while the House Science, Space, and Technology Committee will likely focus on incentivizing research and development.

Members of Congress  

There is no shortage of Members taking advantage of this new regulatory opportunity. Senate Commerce, Science, and Transportation Chair Maria Cantwell (D-WA), House Science, Space, and Technology Chair Frank Lucas (R-OK), Senate Judiciary Chair Dick Durbin (D-IL), and House Judiciary Chair Jim Jordan (R-OH) have been staking, and will continue to stake, claims for their respective committees.

With all of these sharp elbows, Senate Majority Leader Chuck Schumer (D-NY) has recognized some obvious shortfalls in the congressional process for addressing AI. Along with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD), Leader Schumer recently unveiled the Security, Accountability, Foundations, and Explainability (SAFE) Innovation Framework and a proposal for AI Insight Forums. The framework highlights the importance of advancing innovation while still promoting security and accountability. Because the often unwieldy legislative process is struggling to keep pace with AI innovation, Leader Schumer plans to hold AI Insight Forums this fall, bringing top AI experts to Congress to educate Members and discuss the most urgent AI issues and most pressing priorities.

In addition to leadership actions, other Members of Congress have been flexing their legislative muscle. Senator Josh Hawley (R-MO) has introduced legislation that waives immunity under Section 230 of the Communications Act of 1934 for claims and charges related to generative AI, meaning interactive computer services would no longer be granted immunity in cases involving generative AI content. Senate Homeland Security and Governmental Affairs Chair Gary Peters (D-MI) has introduced his AI Leadership Training Act and the TAG Act, which directs agencies to be transparent when using automated and augmented systems to interact with the public or make critical decisions.

In the House, Congressman Ted Lieu (D-CA) has introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, as well as a bill that would establish a 20-member commission drawing on experience in computer science or AI, civil society, industry, labor, and government, with 10 members selected by Democrats and 10 by Republicans. The commission would be tasked with releasing three reports over two years assessing federal agencies' current capacity for AI oversight and regulation, including suggesting possible new approaches and regulatory structures.

Executive Branch Actions

There have also been major developments out of the Executive Branch. The White House's Blueprint for an AI Bill of Rights established key principles to help guide the design and use of AI, one of which is a commitment to safe and effective systems. Recently, President Biden emphasized this principle of responsibility during his meeting with seven top AI executives, all of whom agreed to voluntary commitments for responsible innovation. Within the Executive Office of the President, the Office of Science and Technology Policy (OSTP) has infrastructure in place to tackle AI: a Select Committee on AI, the National Science and Technology Council, a Subcommittee on Machine Learning and AI, and a Subcommittee on Networking and Information Technology Research and Development.

Earlier this year, OSTP released a Request for Information on National Priorities for Artificial Intelligence to inform the Biden-Harris Administration's development of a National Artificial Intelligence (AI) Strategy; responses were due July 7. The Select Committee on AI of the National Science and Technology Council at OSTP also published a National Artificial Intelligence Research and Development Strategic Plan Update comprising nine main strategies:

“1) Make long-term investments in fundamental and responsible AI research, 

2) Develop effective methods for human-AI collaboration, 

3) Understand and address the ethical, legal, and societal implications of AI, 

4) Ensure the safety and security of AI systems, 

5) Develop shared public datasets and environments for AI training and testing, 

6) Measure and evaluate AI systems through standards and benchmarks, 

7) Better understand the national AI R&D workforce needs, 

8) Expand public-private partnerships to accelerate advances in AI, and 

9) Establish a principled and coordinated approach to international collaboration in AI research.” 

While these strategies and proposed paths forward are encouraging, they lack teeth; additional rulemaking will be needed before enforcement is feasible.

Various departments are taking a more targeted approach to AI, examining how it affects their defined areas of interest. The Department of Education's Office of Educational Technology recently released insights and recommendations on artificial intelligence and the future of teaching and learning. The National Institute of Standards and Technology (NIST) within the Department of Commerce has released an AI Risk Management Framework, an AI Resource Center, and AI standards. The National Artificial Intelligence Institute, within the Department of Veterans Affairs, has created an AI strategy based on collaboration among more than 20 Veterans Affairs offices and has scheduled an International Summit for AI in Healthcare for September. Through the National AI Initiative Act, Congress directed the National Science Foundation to establish a task force, which produced a final report presenting a roadmap and implementation plan for a national cyberinfrastructure to aid in developing future AI technology.

Taking Strategies to Tangible Action 

There has been no scarcity of attention on AI or the importance of addressing it appropriately. The Legislative and Executive Branches have both been busy working on various approaches and strategies. Diffusion of responsibility often brings inaction, yet we have seen just the opposite. These various actions now need to be synchronized and coordinated so as not to create unnecessary redundancy. The next steps on legislation and implementation will take time, which is why there has been a recent emphasis on the private sector taking preemptive action to self-regulate. In this transition from relative chaos to a more orderly ecosystem of AI regulation, frequent communication with the private sector will be imperative so that Congress and the agencies neither stifle the up-and-coming industry nor inadvertently neglect public concerns.