The tech world is preparing for what some say has already begun disrupting democratic processes: artificial intelligence. More specifically, the focus is on generative AI, a type of AI that creates fake but convincingly realistic images, audio, and text.
At the Reuters NEXT conference in New York during the week of Nov. 5, Gary Marcus, AI entrepreneur and professor emeritus of psychology and neural science at New York University, said the danger AI poses to democracy is its most substantial risk.
“There are a lot of elections around the world in 2024, and the chance that none of them will be swung by deepfakes and things like that is almost zero,” Marcus said.
Politicians have been particularly vulnerable to these threats. Meta has taken preemptive measures, deciding to prohibit advertisers from using its generative AI tools for political ads on Facebook and Instagram.
Starting next year, the use of third-party AI software for political, electoral, or social ads will require disclosure. Failure to comply may lead to ad rejection, and repeated violations could incur penalties.
While deepfake detection has historically been imperfect, DeepMedia claims its product detects them with 99% accuracy.
“The thing that makes our deepfake detection highly accurate, really fast and easy to use is the fact that we both do generation and detection; these are kind of two sides to the same coin,” COO and co-founder Emma Brown said.
Brown cautioned against focusing solely on entirely fabricated content, noting instances where only a brief segment of a video is manipulated. Such alterations are difficult to detect even for highly trained analysts, she said, making them a critical concern.
“One thing that we’ve found is, you know, there are certain situations where only three seconds of a video are faked, and it might be a 20-minute video, and it might change the meaning of something,” Brown said. “But it’s only three seconds.”
Beyond the domestic effects, deepfakes are further complicating international issues.
“One of the things that we’re doing is we’re working directly with platforms to make sure that it’s integrated for all users,” Brown said. “And we’ve actually recently come out with a Twitter bot in response to Israel, Hamas.”
Recent revelations that Adobe sold AI-generated images depicting scenes of war, including explosions and destroyed homes in Gaza, further underscore the challenges. Adobe labeled the images as generated with AI.
Experts, including Brown, anticipate that the prevalence of deepfakes will only increase, flooding social media platforms with more manipulated video and audio content.