The chair of the Federal Communications Commission (FCC) has proposed a new rule that would require political advertisements aired on radio and television to disclose whether artificial intelligence tools were used in the production of those spots.
The rule, floated by FCC Chairperson Jessica Rosenworcel on Wednesday, would apply to all licensed broadcast TV and radio stations, along with legacy pay TV platforms like cable and satellite.
“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
The proposal is meant to curb the use of so-called “deepfakes,” in which AI tools are used to manipulate video and audio to make it appear as if a person said or did something they never actually said or did. Some political and technology groups have raised concerns that deepfakes could be used to sway voters for or against a candidate or cause.
As currently crafted, the proposal announced by Rosenworcel on Wednesday would not ban ad producers from using AI tools, but would require a prominent disclosure that those tools were used when the ads run on FCC-regulated platforms.
The rule would not apply to social media, streaming video, podcasts, and streaming cable-like services because those platforms fall outside the FCC’s jurisdiction. But those platforms could fall under legislation being promoted by Senators Amy Klobuchar and Lisa Murkowski, who introduced a bill earlier this year that would require similar disclosures in all AI-produced ads, regardless of the platform.