Synthetic media, or content generated using artificial intelligence, has begun to permeate political advertising. While federal legislation has largely stalled in committee, states and online platforms have moved quickly to adopt regulations. Although synthetic media may cause harm through voter manipulation and democratic distortion, it can also lower campaign costs and more vividly illustrate the potential consequences of a political choice. Some governments and commentators have sought to prohibit the most harmful forms, while others have favored transparency-based approaches to regulation. As yet another contentious election cycle approaches, the question of how to ensure that voters’ choices rest on belief rather than manipulation looms large.
This Note assesses the current regulatory landscape for synthetic media in political advertising and weighs the benefits and drawbacks of greater regulation. Drawing on current and emerging regulatory approaches, it examines how governments and private actors have limited the use of synthetic media within the bounds of existing First Amendment jurisprudence. Although initial prohibitions served a necessary role, this Note argues that transparency enforcement is the best approach and proposes building on it by creating a repository that discloses an advertisement’s synthetic content.