Artificial intelligence (AI) captivated the public’s attention in 2023. The rollout of ChatGPT in late 2022 sparked conversations about AI’s capabilities, which were quickly followed by debates among lawmakers over how to regulate the technology. Given the rapid advancement of AI and the slow pace of policymaking, especially at the federal level, it’s unsurprising that these discussions are still ongoing. We find ourselves in 2024 with many of the same questions about the future of AI policy that we had in 2023.
As in recent ESG and data privacy debates, the European Union (EU) has raced ahead of the U.S. and other countries in developing AI policy. The EU’s proposed AI Act would apply reporting and transparency requirements broadly and would ban certain high-risk uses of AI. The Act will likely be approved this year and will influence AI policymaking throughout the rest of the world.
In the United States, no such measure has passed. While there is no national framework legislation regulating AI, actions and proposals at both the state and federal levels provide insight into the direction of American AI policymaking. Given the low probability of robust federal action, following state and regulatory activity on AI is key. Below, we summarize the trends we have seen so far in AI policy proposals and detail what may come next.
Federal Approaches to AI Policy
In recent years, federal policymaking has shifted away from Congress and toward regulatory agencies and the courts. Since 2011, congressional majorities have been slim and partisan divides significant, making it difficult to pass complex, robust legislation. As a result, recent administrations have sought to effect change through rulemaking. With little prospect of shepherding a bill through Congress, federal lawmakers instead shape policy through statements, hearings, and bill introductions. The first year of active AI policymaking followed these trends.
Trend 1: A Non-Legislative Approach to AI Policymaking
Especially in an election year, the Biden administration does not want to be perceived as inactive on a hot-button issue such as AI. In the summer of 2023, the administration secured voluntary commitments from leading AI companies to manage risk. The White House built on these commitments in October of that year with the release of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The order directs federal agencies to initiate rulemaking or policy changes intended to increase transparency, reduce risk, and promote responsible use of the technology.
Following the release of the executive order, many of the directed actions have taken place. The National Institute of Standards and Technology created a leadership group for its new AI Safety Institute. Another significant development was a Department of Commerce proposal that would require cloud providers to alert the government to foreign use of powerful AI models.
While these developments are significant, a strictly regulatory approach to policymaking has its limitations. Executive orders and many administrative actions can be reversed by a subsequent administration. Additionally, rulemaking processes can be slower than the legislative process and are subject to their own uncertainties, including court challenges.
Trend 2: High-profile Hearings Drive Media Coverage
Just like presidents, congressional leaders can find themselves stymied by the challenge of passing legislation through a gridlocked Congress. Recently, many legislators have turned to high-profile committee hearings with industry leaders to communicate their agendas. Some of the most closely covered committee hearings of the past decade have given legislators a highly visible opportunity to question Mark Zuckerberg, Sam Bankman-Fried, and others.
This trend has continued with hearings on AI in 2023 and early 2024. Recent committee hearings have featured a wide range of witnesses, from leaders of Microsoft and Nvidia to representatives of the music industry. Senate Majority Leader Chuck Schumer has been especially active in this regard, convening a series of forums that bring together tech leaders, consumer rights groups, and civil rights advocates. Even if these conversations don’t directly lead to new policy, they help shape the debate over the use of AI in the U.S.
Trend 3: A Focus on Discrimination, Misinformation, and Transparency
Executive actions, committee hearings, and legislative proposals have made clear the areas of greatest concern for U.S. lawmakers in relation to AI. If significant action on AI does take place in 2024, it will likely relate to preventing discrimination and misinformation or to increasing transparency.
AI’s risk of contributing to existing societal inequities is well established and concerning. Some lawmakers have centered their concerns about AI on issues of bias and discrimination. The recently introduced S 3478 aims to account for this risk: the bill would require federal agencies that use algorithmic systems to have an office of civil rights focused on bias and discrimination. The White House and Senator Schumer have also centered race in their discussions of AI, aiming to incorporate diverse voices in the conversations shaping AI policy.
Increased focus on AI has been paired with significant consternation about the safety of our democratic process. With 2024 being an election year, we can expect a focus on combating AI-related misinformation in the run-up to November. In the fall of 2023, lawmakers proposed a bipartisan bill that would prohibit the distribution of deceptive AI-generated election content. Whether such a bill can become law, and whether it can be enforced, remains to be seen.
Finally, there does appear to be some consensus regarding the need for transparency in AI. President Biden’s executive order calls for the establishment of best practices for detecting and labeling AI-generated content. Bills calling for the watermarking of AI-generated content and for training federal employees in the use and detection of AI have also been introduced.
State Approaches to AI Policy
At the state level, lawmakers are often learning about AI as they begin to craft regulations. State activity in 2023 was widespread, and the pace of this work is expected to increase in 2024. As “laboratories of democracy,” states play a crucial role in developing new policy to meet new needs. In an increasingly nationalized political environment, policy trends also move from state to state more quickly, as seen in recent years with marijuana and gambling legalization efforts. Tracking AI policy trends across state governments is essential to ensuring compliance and to assessing what’s to come.
Trend 1: California Leads the Way
California is the largest sub-national economy in the world and home to one of the largest technology innovation hubs. Governor Newsom and California Democrats have shown an interest in being the first to act on hot-button issues like abortion, gun rights, and ESG regulation, so it isn’t surprising that significant legislative action on AI is expected in Sacramento this year.
California has adopted measures requiring an inventory of current “automated decision system” use in state government, and the legislature has expressed support for President Biden’s approach to AI regulation. Efforts to come in 2024 are headlined by Senator Wiener’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, which would regulate the development and use of advanced AI systems and require AI developers to report to the state on testing protocols and safety measures.
Trend 2: A Focus on Labor
One of the most common concerns associated with any new technology is its potential to cause job displacement. Because it simulates human cognition, AI risks disrupting certain industries and displacing those who work in them. While AI threatens oft-threatened industries like manufacturing, it also places at risk industries not commonly thought of in this context. Organizations representing reporters, screenwriters, and lawyers have all sounded the alarm about the labor risks of AI.
There is still much we don’t know about how AI will affect our workplaces. State responses to AI’s impact on labor show a desire to learn more while preventing some overreach. New Jersey’s A 5150 and New York’s A 7838 both propose requiring their state’s Department of Labor to collect data on job losses due to automation. Massachusetts’s An Act preventing a dystopian work environment, perhaps the most interestingly named of the bills in this category, seeks to ban the use of AI in certain hiring and workplace productivity practices.
Trend 3: Task Forces, Commissions, and Studies
When it comes to complex policymaking discussions, it’s worth remembering that the vast majority of state legislators don’t come from the fields they are regulating. This isn’t a dismissal of these legislators or of their ability to regulate AI; rather, it underscores the need for them to study these issues before they act. As such, much of the AI legislation that has passed so far has established groups dedicated to studying AI’s impact and making recommendations. Following the work of these groups will be important for anticipating their impact on policymaking.
Looking Ahead: AI Policy
As we anticipate what action on AI awaits us through the rest of 2024, the upcoming elections stand out as a monumental factor. Along with the presidency, all House seats, 34 Senate seats, and a majority of state legislative seats are up for election in November. AI policymaking will be heavily shaped by these elections, both in the lead-up to and the aftermath of election day.
As mentioned, AI poses a real risk of exacerbating the growing trend of election misinformation in the U.S. Conversations about addressing this challenge have already begun, many focusing on preventing deepfakes or erroneous content. It seems likely that at least some AI-driven misinformation will reach voters this fall, and how the public and our elected officials react to it will shape any legislative action following the election.
There do not yet appear to be consensus partisan positions on AI that the average voter will weigh in their decisions. However, the impact of AI should not be underrated as a campaign issue. After all, AI will have profound effects on healthcare, education, the economy, and civil rights: issues that are perennially on the minds of the American electorate.