Artificial intelligence is driving a massive shift in the way voice, video, and other digital content is created, consumed, and shared. Deepfakes make it easy to create and share misleading, explicit, or false information that appears real. Proponents argue that deepfakes, or synthetic content, have legitimate uses in movies, entertainment, and education. However, they also pose significant risks, including:
- Spreading misinformation
- Creating non-consensual explicit content
- Perpetrating fraud
In particular, deepfakes pose a significant threat to politics, national security, and government. Deepfakes can take the form of anything from a fabricated candidate statement or video to fake footage of an emergency or fabricated audio of government officials and politicians.
The rapid advancement of AI has made deepfakes increasingly sophisticated. Legislators and the Americans they serve share concerns about their potential misuse. The need for effective detection and regulation methods is clear. Read on to learn more about efforts to regulate deepfakes at the state and federal levels.
What Are Deepfakes?
A deepfake is a piece of fake media created using AI: a fabricated image, video clip, or audio snippet. The main issue with deepfakes is their “believability.” Deepfakes are often used to deceive viewers or social media users on any number of controversial topics, including politics, national security, social issues, or notable people. AI-generated content is becoming more convincing and can closely mimic the appearance and voice of real individuals.
Why Regulate Deepfakes?
Deepfakes can cause significant harm. One of the primary concerns is the spread of misinformation across platforms, including social media and news organizations. Since 2016, misinformation has been a consistent target of federal legislation and regulatory efforts. Those efforts led to fact-checking of posts and user content on Meta (then Facebook). Currently, X offers a Community Notes feature that lets users correct or add context to misleading posts. A similar regulation or standard could be adopted for deepfakes across social media.
Without regulation, deepfakes could manipulate public opinion, interfere with elections, or incite unrest. The ability to produce highly convincing fake content is a serious threat to our democracy.
Privacy and personal security are also at risk as deepfakes proliferate. Individuals can be exploited through non-consensual explicit content. Everyday citizens must be protected from being victimized by deepfake content.
Deepfakes could also ramp up financial fraud, especially against seniors. While phishing emails and robocalls are already common attempts to defraud seniors, deepfake audio could be used to impersonate state or federal officials or agencies seeking access to sensitive information.
How Are Legislators Approaching Deepfake Laws?
Congress has taken several steps to address the regulation of deepfakes. Legislators recognize the threats deepfakes pose to national security, privacy, and public trust. Efforts to understand deepfakes mirror earlier efforts to understand big tech. Congress has held hearings to better understand the implications of deepfakes and to explore technological solutions for detection and prevention.
In March 2024, the House Committee on Oversight and Accountability held a hearing on deepfakes titled “Addressing Real Harm Caused by Deepfakes.” The hearing focused not only on the national security and political implications of deepfakes, but also on their impact on everyday citizens, including children.
A follow-up report from the hearing found that improving technology will make it more difficult to distinguish deepfakes from real content and will further erode public trust in social media and the news. It also found that women and children are more likely to be targets of deepfake videos.
Federal Deepfake Laws
The 2019 National Defense Authorization Act mandates that the U.S. Department of Homeland Security (DHS) produce annual reports on the use of deepfakes. This law was among the first targeting deepfakes. Since then, Congress, agencies, and the White House alike have taken significant steps on deepfake regulation.
Congress has recently introduced several measures to regulate and oversee deepfakes. These include:
- The Preventing Deep Fakes Scams Act. H.R. 5808 establishes the Task Force on Artificial Intelligence in the Financial Services Sector. The Task Force reports to Congress on issues related to AI in the financial services sector.
- The DEEPFAKES Accountability Act. H.R. 5586 protects national security organizations from threats posed by deepfake technology. It also provides a legal recourse to victims of harmful deepfakes.
- The Protecting Consumers From Deceptive AI Act. H.R. 7766 requires the National Institute of Standards and Technology to establish task forces on AI and deepfakes. The task forces aim to facilitate and inform the development of technical standards and guidelines for identifying content created by generative AI. These standards would ensure that audio or visual content created or substantially modified by AI includes a disclosure acknowledging its origin.
- The No AI Fraud Act. H.R. 6943 provides for individual property rights in likeness and voice.
Agency and White House Involvement
Beyond Congress, the White House and federal agencies have also taken steps to address deepfakes. The White House has conducted meetings and consultations with technology companies, researchers, and policymakers to discuss deepfake legislation and regulation.
Along with the DHS’s annual reports on deepfakes, mentioned above, other federal agencies have launched programs aimed at developing technologies to detect and counteract deepfakes. This includes the Defense Advanced Research Projects Agency (DARPA), whose Media Forensics program develops automated tools to identify deepfake content.
The Federal Trade Commission has also worked to protect consumers from deceptive practices enabled by deepfakes. The agency emphasizes the need for transparency and accountability in digital content creation.
State-Level Deepfake Laws
Several states have already passed, or are considering, legislation to regulate deepfakes.
- California is a pioneer in deepfake regulation with laws enacted as far back as 2019. A.B. 602, passed in 2019, allows victims of non-consensual deepfake pornography to sue creators. Also passed in 2019, A.B. 730 prohibits the distribution of deceptive media aimed at influencing elections within 60 days of an election.
- With H.B. 1766 and S.B. 2396, Hawaii has focused on preventing misinformation or communications that could be considered deepfakes or fraudulent before or during elections.
- Laws in Arizona, including S.B. 1078 and S.B. 1336, aim to prevent the false use of digitized audio recordings and the unauthorized dissemination of deepfake videos, audio, or communications for financial gain or malicious intent.
- Washington legislators have passed S.B. 5152, which targets deceptive media and election integrity. The law mandates disclosure of manipulated media that could influence elections. It also requires clear identification of any AI-generated content used in political campaigns.
- Similar to other states, Florida has implemented S.B. 850, which requires the labeling of political ads or other election-related communications created with generative AI.
- In New York, Governor Hochul signed S. 1042 in October 2023. The legislation regulates the use of deepfakes in various domains, including non-consensual pornography and election interference, and provides clear guidelines and penalties for the misuse of deepfake technology.
Looking Ahead: The Future of Deepfake Laws
Deepfake laws and regulations must be multifaceted. The technology is advancing rapidly, and the avenues for potential misuse are many. Federal and state attempts to regulate it may aim to rein in bad actors or set common standards. Legislators must develop more comprehensive and specific laws targeting the malicious creation and dissemination of deepfakes, particularly those intended to deceive, defraud, or harm individuals and public institutions.
Key aspects of future regulations may include mandatory labeling of deepfake media. Measures like these would ensure transparency and help viewers identify altered content. They might rely on a blockchain system or immutable codes that distinguish original videos from deepfake content. Legal frameworks could also impose severe penalties for creating or distributing harmful deepfakes, such as those used for political manipulation, financial fraud, or non-consensual explicit content.
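To make the “immutable codes” idea concrete, the sketch below shows, in simplified form, how a cryptographic fingerprint could flag altered media. This is a hypothetical illustration rather than any specific proposal or standard: the file names and registry are placeholders, and a real system would rely on a tamper-evident ledger rather than an in-memory dictionary.

```python
# Illustrative sketch only: one way an "immutable code" could identify original media.
# A publisher records a cryptographic fingerprint (hash) of the original file;
# anyone can later check whether a circulating copy still matches that fingerprint.
# The registry below is a plain dictionary standing in for a hypothetical
# tamper-evident store (e.g., a blockchain ledger); it is not a real service.

import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical registry mapping a content ID to its registered fingerprint.
registry: dict[str, str] = {}

def register_original(content_id: str, path: str) -> None:
    """Record the fingerprint of the original file at publication time."""
    registry[content_id] = fingerprint(path)

def verify_copy(content_id: str, path: str) -> bool:
    """Return True only if the circulating copy matches the registered original."""
    return registry.get(content_id) == fingerprint(path)

# Example usage (file names are placeholders):
# register_original("campaign-ad-001", "original_ad.mp4")
# verify_copy("campaign-ad-001", "downloaded_copy.mp4")  # False if the copy was altered
```

Any change to the file, however small, produces a different fingerprint, which is why approaches like this are discussed as a way to prove whether a piece of media matches what was originally published.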
Deepfake threats are global. International cooperation between countries and regulatory bodies will be crucial. Global leaders must share technological solutions and best practices for detection and regulation.
Advancements in deepfake detection will play a significant role in the future of regulation. Governments may fund research into, and development of, AI-driven tools that identify deepfakes accurately and swiftly. Public awareness campaigns will also be vital to educate citizens about the existence and potential dangers of deepfakes. These efforts will foster more informed and critical media consumption.
Overall, the evolving legal and regulatory landscape will aim to balance protecting society from the risks of deepfakes with allowing legitimate uses in fields like entertainment and education.
Using Plural to Monitor Deepfake Laws
Top public policy professionals trust Plural for their legislative tracking and stakeholder management needs. With Plural, you’ll:
- Access superior public policy data
- Be the first to know about new bills and changes in bill status
- Streamline your day with seamless organization features
- Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
- Keep everyone on the same page with internal collaboration and external reporting all in one place
Create a free account or book a demo today!