Press "Enter" to skip to content

Opinion: California’s Anti-Deepfake Law Is Far Too Feeble

While well intentioned, the law has too many loopholes for malicious actors and puts too little responsibility on platforms.

Imagine it’s late October 2020, and there’s fierce competition for the remaining undecided voters in the presidential election. In a matter of hours, a deepfake video depicting a candidate engaged in unsavory behavior goes viral and, thanks to microtargeting, reaches the voters most susceptible to changing their minds. Deepfakes, AI-generated audio or visual media that deceptively depict real people saying or doing things they never did, are a serious threat to democracy, and lawmakers are responding aggressively. Unfortunately, their current efforts will be largely ineffective.

Last month, Governor Gavin Newsom signed California’s AB 730, known as the “Anti-Deepfake Bill,” into law. The intention to quell the spread of malicious deepfakes before the 2020 election is laudable. But four major flaws will significantly impede the law’s success: timing, misplaced responsibility, burden of proof, and inadequate remedies.

Timing

The law applies only to deepfake content distributed with “actual malice” within 60 days of an election, an arbitrary time constraint that does not reflect the enduring nature of material posted online. “What happens if content is created or posted 61 days before an election and remains online for months, years?” asks Hany Farid, a professor and digital forensics expert at UC Berkeley who works on deepfake detection.

To ensure that the law does not infringe on free speech rights, it includes exemptions for satire and parody. However, AB 730 is ambiguous about how to efficiently and effectively apply these criteria, and nefarious actors are likely to game that ambiguity. By claiming satire or parody whenever material is contested, a creator could tie a deepfake up in a lengthy review process before removal. Consider the video of House Speaker Nancy Pelosi that was manipulated to make her appear intoxicated: a drawn-out review to determine a video’s intent enables it to gain further virality and spur a contagion of negative effects.

Misplaced Responsibility

The law exempts platforms from responsibility for monitoring and stemming the spread of deepfakes. This is due to Section 230 of the Communications Decency Act, which shields platforms from liability for harmful user-generated content, especially when they act in good faith to remove it. Court interpretations since the law’s passage in 1996 have broadened platforms’ immunity, even when they deliberately encourage the posting of harmful content.
