The Federal Communications Commission is looking to tackle AI-generated content in political ads, proposing a rule that would require such content be disclosed.
One of the biggest fears many critics have with AI is that it can be used to generate extremely realistic fake content for political ads, potentially having far-reaching impacts on elections and public policy. The FCC is taking the first steps to address such content, looking “into whether the agency should require disclosure when there is AI-generated content in political ads on radio and TV.”
The agency has issued a Notice of Proposed Rulemaking and is seeking input on the following:
- Seeking comment on whether to require an on-air disclosure and written disclosure in broadcasters’ political files when there is AI-generated content in political ads,
- Proposing to apply the disclosure rules to both candidate and issue advertisements,
- Requesting comment on a specific definition of AI-generated content, and
- Proposing to apply the disclosure requirements to broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers, and section 325(c) permittees.
The FCC clarifies that it is not seeking to prohibit or ban such content, only to require disclosure and transparency when it is used.
“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” said Chairwoman Jessica Rosenworcel. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
The FCC makes clear that, while AI is expected to play a significant role in the 2024 election cycle, the ability to use AI to create deceptive content necessitates that safeguards be put in place.
The use of AI is expected to play a substantial role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the potential use of “deep fakes” – altered images, videos, or audio recordings that depict people doing or saying things that they did not actually do or say, or events that did not actually occur.
This issue is just one of many illustrating the seismic shift AI is causing across industries and walks of life.