The 2024 presidential election is rapidly approaching, and we aren’t prepared for how new artificial intelligence models will affect our elections. Fake, AI-generated images of former President Trump resisting arrest provided a fresh example of the firehose of lies, deception, and false information that AI threatens to unleash on our electoral process. The potential for AI-generated deep-fake videos, text-based chatbots, and voice-synthesis technologies to disrupt elections in 2024 is nearly limitless, especially as the code for powerful AI large language models leaks to the worst parts of the internet, like 4chan.

Even AI knows it: We recently asked OpenAI’s AI text generator, ChatGPT, about the risks that AI technology could be used to carry out political sabotage. Here’s how ChatGPT responded:

“Yes, political sabotage is a real risk of AI applications. As AI becomes more advanced, it could be used to manipulate public opinion, spread propaganda, or launch cyber-attacks on critical infrastructure. AI-powered social media bots can be used to amplify certain messages or opinions, creating the illusion of popular support or opposition to a particular issue. AI algorithms can also be used to create and spread fake news or disinformation, which can influence public opinion and sway elections.”

Indeed.

Campaign lies, immense data gathering, and biased algorithms are, no doubt, nothing new. What is new is the scale at which these tools can now be used to further polarize our society.

It’s led some to call for an outright moratorium on AI development, but to us, that’s a bit extreme. Instead, our focus should be on making sure we control AI, and not the other way around. We need to focus especially on how to protect our political system.

One would think the developers of these technologies would be concerned about bringing a new Frankenstein monster into the world, and would take every step possible to protect us from the technologies’ vulnerability to abuse. It’s not a heavy lift; just ask ChatGPT, as we did: We asked whether OpenAI could label its output so that people would know content was generated by AI rather than by a real person. ChatGPT immediately responded:

“Yes! OpenAI and other AI companies could add digital watermarks and metadata to label content as generated by AI, and make the labels nearly-indelible through encryption.”

You would think that, with such an easy solution, OpenAI would long since have put these specific technical countermeasures against abuse into place. Sadly, it has not. Although technically feasible, “at present, OpenAI does not provide digital watermarks or metadata in its outputs to aid in its identification as the output of ChatGPT rather than the output of a real person,” per ChatGPT.
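
To make the idea concrete, here is a minimal sketch, in Python, of what signed provenance metadata for AI-generated text could look like. It is an illustration under our own assumptions, not OpenAI’s actual method: the secret key and the label_output and verify_label functions are hypothetical, and a real deployment would more likely use public-key signatures under an open provenance standard rather than a shared secret.

```python
# Minimal sketch of tamper-evident "generated by AI" labeling.
# Hypothetical throughout: PROVIDER_KEY, label_output, and verify_label are
# illustrative names, not part of any real OpenAI API.
import hmac
import hashlib
import json
from datetime import datetime, timezone

PROVIDER_KEY = b"hypothetical-provider-secret"  # would be held by the AI provider

def label_output(text: str, model: str) -> dict:
    """Attach signed provenance metadata identifying text as AI-generated."""
    metadata = {
        "generator": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    # Sign the metadata so the label cannot be stripped or altered undetected.
    metadata["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "provenance": metadata}

def verify_label(record: dict) -> bool:
    """Check that the text still matches its signed provenance label."""
    meta = dict(record["provenance"])
    claimed_sig = meta.pop("signature")
    payload = json.dumps(meta, sort_keys=True).encode()
    expected_sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    text_hash = hashlib.sha256(record["text"].encode()).hexdigest()
    return hmac.compare_digest(claimed_sig, expected_sig) and meta["content_sha256"] == text_hash

record = label_output("An example AI-generated paragraph.", model="example-llm")
print(verify_label(record))  # True; edit record["text"] and verification fails
```

The point of the sketch is simply that labeling of this kind is ordinary, well-understood engineering, not a research problem.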

Instead, for now, the company only requires users to authenticate themselves so that their usage can be tracked, while releasing what it calls “guidelines for responsible AI use,” which include such plain-vanilla principles as being “fair and impartial” and avoiding “bias.”

The situation reminds us of an early period in American automobile manufacturing. Like AI designers, carmakers were initially hesitant to include safety features. It took years of federal investigations and, finally, regulation to require the installation of seat belts, and eventually new technologies emerged, like airbags and automatic braking. Those technological safeguards have saved countless lives.

In their current form, AI technologies are dangerous at any speed, let alone the viral speed with which we’re seeing the adoption of AI. Their sudden roll-out to anyone with a computer should be recognized as an emergency call to governments to act now to protect what remains of the public’s ability to determine when information of political importance is real and when it is fake.

There are basic actions Congress and state legislatures could take immediately, like imposing liability on AI providers that choose not to take effective measures to ensure their technologies contain sufficient controls to prevent their use for impersonation and fraud. They could require the kind of labeling suggested by ChatGPT, including watermarks, to enable people to distinguish between real people and AI bots. They could clarify that Section 230 of the Communications Decency Act provides no liability shield for platforms that create new AI-modified content, rather than merely acting as a neutral pipe.

But legislation takes time, and the risks here are too great to wait. We should not expect the burgeoning industry of AI developers to voluntarily agree to some self-imposed moratorium.

The Federal Trade Commission has standing authority to investigate and take enforcement action against Big Tech when it engages in practices that are unfair, deceptive, or abusive toward consumers, as do state attorneys general. Putting these tools in the hands of fraudsters and other malicious actors, including foreign adversaries, without adding controls to make them safer surely meets that standard, as consumer watchdogs are now urging.

The FTC and state AGs should open investigations, ask questions, and seek injunctive relief as needed to counter the impersonation risk.

AI’s capacity to impersonate people, including public figures, and threaten our national security and public trust is accelerating. We need to get control of it now, in time to prevent future abuses that have the potential to threaten not just our economy but our already-all-too-fractured democracy.

Timothy Wirth, former Senator from Colorado; Richard Gephardt, former Congressman from Missouri; and Jonathan M. Winer, former U.S. Deputy Assistant Secretary of State, are members of the civic group Keep Our Republic. Kerry Healey, former Lt. Governor of Massachusetts, and Congressman Gephardt are co-chairs of the Council for Responsible Social Media.