Do we need new laws for AI-generated political ads?
By Straight Arrow News
It’s the Wild West when it comes to regulating AI-generated political advertising. As new technology explodes, many are questioning whether we need more oversight of ads made with artificial intelligence. Right now, campaign ads don’t have to disclose if they were created or manipulated by AI, and some Democratic lawmakers are hoping to change that.
Straight Arrow News contributor John Fortier urges caution. He believes any new law that regulates AI-generated ads would “risk sweeping up long-established and legitimate uses of modified images and video.”
Is artificial intelligence dangerous to our political culture? And what, if anything, should we do to regulate political ads that present a false picture of reality? With the increased public availability of artificial intelligence tools, every aspect of life is under examination.
In the political world, many worry that AI will contribute to our already fractured and polarized politics, spreading disinformation and stirring up resentments with damaging political narratives.
Some have pointed to a recent ad run by the Republican National Committee that employed AI to generate images illustrating the dangers of re-electing President Biden and Vice President Harris.
The ad asked, “What if the weakest president we’ve ever had were reelected?” and then sketched out potential crises in a Biden second term: a Chinese invasion of Taiwan, the failure of U.S. banks, accelerated border crossings and out-of-control crime.
What was unique about this ad is that the images of these hypothetical crises were generated by AI. To its credit, the RNC included a printed disclosure in the ad that the video was “built entirely with AI imagery.”
Some lawmakers have cited this ad in proposing legislation that would require any use of AI in political ads to be disclosed. They point to the ad to highlight the dangers of AI, but they also praise its makers for disclosing in writing that AI had been employed.
Why shouldn’t every ad that employs AI be required to include such a disclosure? While manipulated video and the creation of a false reality are real concerns, there are four reasons to proceed with caution before seeking such regulation.
First, what is AI? And is it really AI that is the issue? While AI can be employed to create and manipulate images and video, the ability to create lifelike generated images existed long before our recent interest in AI. AI itself may have good or bad uses: it might help campaigns reach new audiences, manage their operations more effectively and optimize advertising spending. But a focus on regulating AI is far too broad an aim, and Congress would struggle to put forth a clear definition of what constitutes AI.
Second, technology changes quickly, and any law in this area would have a hard time keeping up; it would likely be outdated by the time the next election cycle came around.
Third, even if the focus is narrowed to AI-generated video, any law would risk sweeping up long-established and legitimate uses of modified images and video. Take, for example, the technique of morphing one person’s face into another’s. TIME magazine published a cover image of President Trump morphed into Vladimir Putin, and for many years campaign ads have tried to tie a candidate to a less popular figure by morphing one face into the other. And what about satire or cartoons?
Fourth, while regulating private individuals who spread manipulated images poses separate challenges, the current regulation of political advertising already provides good protection. The campaigns, parties and groups that run ads are subject to disclosure and disclaimer requirements: they must state in the ad who paid for it and must report their campaign spending to various institutions.
The current system already polices problematic ads. These requirements allow for robust criticism of a campaign if it uses misleading video or messages. Government wisely stays out of judging the truth or falsehood of ads, but the disclosure requirements often lead to campaigns retracting ads or facing political backlash for their messages. While AI will only grow in significance, we should not overreact or blame it for the ills of our political culture, and we should be cautious in regulating AI and political advertising.