Broadcast, cable groups want FCC to ditch AI disclosure requirements

The groups say the requirement might confuse voters to the benefit of major technology platforms; a watchdog group says those concerns are overblown.


Two major broadcast and cable industry groups are calling on the Federal Communications Commission (FCC) to ditch a proposal that would require political ads airing on television to include a conspicuous disclosure when those spots contain material produced using artificial intelligence tools.

The proposal, first released by the FCC in July, is meant to crack down on the use of so-called “deepfake” technology that could mislead voters into thinking a candidate or other official said or did something that they, in fact, did not do or say.



The FCC said the use of artificial intelligence tools by political candidates and campaigns can be a benefit — it can help smaller campaigns and candidates “with limited financial resources…reach larger audiences.” But the agency said the growing trend of “deepfakes” can sow “confusion and distrust among the voting public” through material misrepresentations.

The proposal would require licensed broadcast TV stations and cable and satellite TV platforms to ask whether a candidate or campaign used artificial intelligence in its spots, and to disclose any such use when those commercials air on TV.



On Thursday, NCTA (The Internet & Television Association) took issue with the proposal, saying voters were likely to be confused when political ads carry the disclosure on TV but the same ads appear online without it.

“The proposed rules favor advertisers who advertise on streaming and online platforms over those who advertise on platforms subject to the FCC’s jurisdiction because their advertisements will not be subject to the FCC’s disclosure requirements,” the NCTA said in a letter.

The National Association of Broadcasters (NAB), a trade group representing the interests of commercial broadcast TV and radio stations, concurred with the NCTA’s concerns.

“Viewers and listeners will be exposed to novel disclosures on broadcast programming that they will not experience anywhere else, thus causing confusion as to why one ad contains a disclosure while the ostensibly same ad hosted elsewhere does not,” the NAB said on Friday, warning that political campaigns will “think twice about placing ads with radio and TV stations” if the disclosures are mandated.

Public Citizen, a non-profit consumer advocacy group, said those concerns are overblown. If anything, the group said, AI disclosures on political ads could strengthen confidence in broadcast TV stations and cable TV platforms, because viewers would know what they are getting, unlike with online advertising that lacks such disclosures.

“The stakes of an unregulated and undisclosed Wild West of AI-generated campaign communications are far more consequential than the impact on candidates; it will erode the public’s confidence in the integrity of the broadcast ecosystem itself,” Public Citizen wrote in a letter to the FCC earlier this week. “Congress intended FCC-licensed stations to serve the public interest by, among other things, providing them with special rights and responsibilities with respect to election communications. But if voters cannot discern reality from verisimilitude because the use of deepfakes is not disclosed, they will increasingly lose confidence in the ability of broadcast and cable TV and radio to deliver trustworthy election information to the public.”

Political campaigns and causes have already used AI tools to produce ads containing “deepfake” material. Earlier this year, an unknown group created a video with a fabricated voiceover that made it appear as if Vice President Kamala Harris had questioned the competency of then-incumbent President Joe Biden. Elon Musk, the owner of X (formerly Twitter), reposted the fraudulent video, exposing it to nearly 200 million social media users. Last year, Florida Governor Ron DeSantis published doctored images that appeared to show former President Donald Trump kissing Dr. Anthony Fauci, who led the National Institute of Allergy and Infectious Diseases during the coronavirus pandemic.

“Political deepfakes are a here-and-now problem, poised to become much more severe as quickly evolving generative AI technologies make deepfakes easier to produce at ever higher levels of quality,” Public Citizen said this week. “In the absence of regulation, deepfakes could become normalized in the public’s mind, making it much more difficult to address the problem later.”


About the Author:

Matthew Keys

Matthew Keys is a nationally recognized, award-winning journalist who has covered the business of media, technology, radio and television for more than 11 years. He is the publisher of The Desk and contributes to Know Techie, Digital Content Next and StreamTV Insider. He previously worked for Thomson Reuters, the Walt Disney Company, McNaughton Newspapers and Tribune Broadcasting.