Do we need new laws for AI-generated political ads?
By Straight Arrow News
It’s the Wild West when it comes to regulating AI-generated political advertising. As new technology explodes, many are questioning whether we need more oversight of ads made with artificial intelligence. Right now, campaign ads don’t have to disclose if they were created or manipulated by AI, and some Democratic lawmakers are hoping to change that.
Straight Arrow News contributor John Fortier urges caution. He believes any new law that regulates AI-generated ads would “risk sweeping up long-established and legitimate uses of modified images and video.”
Is artificial intelligence dangerous to our political culture? And what, if anything, should we do to regulate political ads that present a false picture of reality? With the increased public availability of artificial intelligence tools, every aspect of life is under examination.
In the political world, many worry that AI will contribute to our already fractured and polarized politics, spreading disinformation and stirring up resentments with damaging political narratives.
Some have pointed to a recent ad run by the Republican National Committee that used AI-generated images to illustrate the dangers of re-electing President Biden and Vice President Harris.
The ad asked the question, “What if the weakest president we’ve ever had were reelected?” — and then proceeded to sketch out potential crises in a Biden second term: a Chinese invasion of Taiwan, failure of U.S. banks, accelerated border crossings and out-of-control crime.
What was unique about this ad is that the images of these hypothetical crises were generated by AI. And to its credit, the RNC included a printed disclosure in the ad that the video was “built entirely with AI imagery.”
This ad has been cited by some lawmakers who have proposed legislation that would require that all use of AI be disclosed in political ads. The lawmakers cite the ad to highlight the dangers of AI, but also praise the ad makers for their written disclosure that AI had been employed.
Why shouldn’t every ad that employs AI be required to include such a disclosure? While the dangers of manipulated video and the creation of false reality are concerns, there are four reasons why we should proceed with caution in seeking such regulation.
First, what is AI? And is it really AI that is the issue? While AI can be employed to create and manipulate images and video, the ability to create lifelike generated images existed long before our recent interest in AI. AI itself may have good or bad uses. AI might help campaigns reach new audiences, manage their operations more effectively and optimize advertising spending. But a focus on regulating AI is far too broad an aim, and Congress would struggle to put forth a clear definition of what constitutes AI.
Second, technology changes quickly, and any law in this area would have a hard time keeping up; it would likely be outdated by the time the next election cycle came around.
Third, even if the focus is on AI-generated video, any law would risk sweeping up long-established and legitimate uses of modified images and video. Take, for example, the idea of morphing the face of one person into another. TIME magazine published an image of President Trump morphed into Vladimir Putin on its cover. And for many years, campaign ads have tried to tie a candidate to another, less popular figure with video changing one person into another. And what about satire or cartoons?
Fourth, while there are separate challenges in keeping private individuals from spreading manipulated images, the current regulation we have on political advertising already provides good protection. Currently, the campaigns, parties and groups that run ads are subject to disclosure and disclaimer regulations: they must state in the ad who paid for it and disclose campaign spending to various institutions.
The current system already polices problematic ads. These requirements allow for robust criticism of a campaign if it uses misleading video or messages. Government wisely stays out of judging the truth or falsehood of ads, but the disclosure requirements often lead to campaigns retracting ads or facing political backlash for their messages. While AI will only grow in significance, we should not overreact and blame it for the ills of our political culture; we should be cautious in regulating AI and political advertising.