
AI Poses Risks to Both Authoritarian and Democratic Politics

Alla Polishchuk
Head of the Office of the President of Ukraine Andriy Yermak

The use of artificial intelligence is expanding and, as many have predicted, AI is increasingly prominent in politics. In countries such as Argentina and Turkey, campaign managers have deployed AI to boost their own candidate or to smear the opponent. Meanwhile, in Russia, where elections are a highly orchestrated affair, political managers are using AI to discredit antiwar activists and influential political émigrés.

With at least sixty-four countries plus the EU planning to hold national elections in 2024, representing almost half the world’s population, the risk of “blurring the walls of reality,” as one analyst has put it, through the use of AI-generated and more conventional deepfake productions is disturbingly high.

AI-Driven Electoral Politics: What 2023 Can Teach Us

Argentina. In the run-up to Argentina’s October 2023 presidential elections, in what is now regarded as the first AI-driven election campaign, competing teams employed AI to create images and videos promoting their own candidates and attacking each other.

One of the candidates, Sergio Massa, had his team create a video in which his main rival, Javier Milei, appears to explain the hypothetical revenues that could be realized from the sale of human organs and to suggest that, for this reason, parents could consider having children as a “long-term investment.” Although the video was explicitly labeled as AI-generated, it was quickly shared across platforms without disclaimers.

AI-generated images from Argentina’s presidential campaigns have been viewed more than 30 million times. Despite the smear campaign, Milei, a self-described far-right “anarcho-capitalist,” won.

Turkey. Another serious election-related example comes from Turkey’s 2023 presidential campaign. In the run-up to the elections, President Recep Tayyip Erdoğan’s staff shared a video depicting his main rival, Kemal Kılıçdaroğlu, being endorsed by the Kurdistan Workers’ Party, a designated terrorist group. The video was clearly fabricated but was widely circulated.

Erdoğan won the election. Of course, his victory rested on much more than a single deepfake video: he also used more traditional means of eliminating unwanted rivals from contention and exercised control over the media. Still, a deepfake, deployed with the leverage of political power, can end an opponent’s political career within minutes.

AI Uses in an Authoritarian Context

Russia is an interesting case in this brave new world of political AI. The Kremlin does not need AI-driven campaigns because it manages the procedures it calls “elections” using mostly traditional authoritarian means: political repression and control over would-be candidates’ access to the ballot. But Russia’s political managers do increasingly use deepfakes against President Putin’s political opponents.

One favorite technique of the Kremlin’s propaganda machine is to create a flood of alternative, and shifting, narratives to erode trust. At the outset of the full-scale invasion of Ukraine, hackers uploaded a deepfake of Ukrainian President Volodymyr Zelensky to a popular Ukrainian website. The video purported to show Zelensky urging the Ukrainian army to surrender. Even a poorly made deepfake, published when information warfare is at its peak, can add uncertainty and play into the hands of a Kremlin intent on destabilizing Ukraine’s civil society.

Recent events show how easily deepfakes can be used to compromise political opponents. The Russian writer Dmitry Bykov and the Russian-Georgian writer Boris Akunin, both avowed opponents of the war, recently revealed that pro-Kremlin pranksters posing as Andriy Yermak, head of the Office of the President of Ukraine, had engaged in such tactics against them.

In those conversations, both writers proclaimed their support for Ukraine, and as a result, publishing houses and bookstores in Russia stopped selling their books.

The actor Artur Smolyaninov also reported having a video conference with what turned out to be a deepfake representation of the head of Zelensky’s office. The actor emphasized that the deepfake of Andriy Yermak looked very convincing, commenting, “On Zoom, you see an absolutely real person: he talks like Yermak, has gestures like Yermak—everything, just like Yermak.”

This is perhaps the first taste of how authoritarians can use new tools for the age-old end of maintaining power. Arguably, though, AI may prove an even bigger game-changer for elections in democratic or hybrid regimes than in authoritarian ones. Let us not forget that 2024 is a year when half the world’s population will vote, including in the United States, Indonesia, India, and many countries in Europe. A perfectly timed deepfake video could seriously disrupt an election.

How AI Can Erode Trust in Democratic Institutions

Language models can generate vast numbers of unique messages for various social media platforms and for texting or emailing personalized appeals to millions of voters. AI models can use reinforcement learning techniques to generate a series of messages that become progressively more effective at swaying your vote, following you across websites and social media with targeted messages and ads.

In this machine-learning approach, the model tries different methods, absorbs feedback on what works best, and refines its tactics to achieve a specific goal in the most effective way. That it amplifies mistakes and spreads inaccurate information is not a problem, since its goal is to influence your vote, not to give you accurate information.
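
To make this concrete, here is a minimal sketch of the trial-and-error loop described above, written as a simple epsilon-greedy bandit in Python. The message variants, engagement rates, and click feedback are hypothetical illustrations, not a description of any real campaign system.

```python
import random

# Hypothetical message variants a campaign system might test.
MESSAGES = [
    "Candidate A will cut your taxes.",
    "Candidate A will keep your neighborhood safe.",
    "Candidate A stands up for people like you.",
]

def pick_message(stats, epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the best-performing
    variant so far, occasionally explore another one."""
    if random.random() < epsilon:
        return random.randrange(len(MESSAGES))
    return max(
        range(len(MESSAGES)),
        key=lambda i: stats[i]["clicks"] / max(stats[i]["shown"], 1),
    )

def run(trials=10_000):
    stats = [{"shown": 0, "clicks": 0} for _ in MESSAGES]
    true_rates = [0.02, 0.05, 0.03]  # hidden from the algorithm
    for _ in range(trials):
        i = pick_message(stats)
        stats[i]["shown"] += 1
        if random.random() < true_rates[i]:  # simulated engagement
            stats[i]["clicks"] += 1
    return stats

if __name__ == "__main__":
    for msg, s in zip(MESSAGES, run()):
        print(f"{s['shown']:>6} impressions | {msg}")
```

Nothing in this loop checks whether a message is true: the optimizer converges on whatever wording earns the most engagement.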

 

Another technique is the psychochat, whereby candidates create digital avatars to appear to interact directly with voters. One such psychochat already exists, built around the personality of psychologist Martin Seligman and based on his writings: a chatbot named “Ask Martin,” whose responses potentially reflect Seligman’s ideas.

Psychochats might become the next trend for candidates as they create virtual versions of themselves to engage with potential voters, address their concerns, and mirror their beliefs. Perhaps in just a few months everyone will have the opportunity to chat with a digital version of Donald Trump.
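
How “Ask Martin” is actually built has not been made public. As a purely illustrative sketch, a digital avatar of this kind can be approximated by retrieving, for each question, the passage from a person’s writings that best matches it; the corpus and word-overlap matching below are hypothetical stand-ins for a real system.

```python
# Illustrative corpus: short paraphrases of ideas associated with
# positive psychology (not actual quotations).
CORPUS = [
    "Optimism can be learned through deliberate practice.",
    "Well-being rests on positive emotion, engagement, and meaning.",
    "Helplessness is learned, and so it can be unlearned.",
]

def tokenize(text):
    """Lowercase the text and strip basic punctuation."""
    return {word.strip(".,?!").lower() for word in text.split()}

def reply(question):
    """Return the corpus passage sharing the most words with the question."""
    q = tokenize(question)
    return max(CORPUS, key=lambda passage: len(q & tokenize(passage)))

print(reply("Can optimism really be learned?"))
# -> "Optimism can be learned through deliberate practice."
```

A real psychochat would likely layer a language model on top of such retrieval so that answers are rephrased in the person’s own voice.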

 

If the staff of one presidential candidate decides to use AI in the upcoming 2024 presidential election, other candidates’ teams will assuredly follow suit. The winner might then be determined by the effectiveness of each campaign’s AI strategy, which centers on using technology and voter manipulation to gain votes rather than on addressing voters’ genuine interests.

Risks and Regulation

Researchers have long worried about the impact of AI on elections. The technology has the potential to mislead voters by spreading disinformation through social media and by raising doubts about the validity of the information people encounter daily. In the past year alone, deepfakes have become so realistic that up to half of survey respondents could not tell real videos from manipulated ones, with the proportion significantly higher among older generations.

There are few ways to protect oneself against misleading AI-generated disinformation campaigns. One way to reduce their effect might be to enhance privacy protections while online. Disinformation campaigns depend heavily on access to large amounts of personal data to track and target individuals and to craft the personalized messages that manipulate their beliefs. Denying the machine access to any part of this information can significantly reduce its effectiveness.

The European Parliament and the U.S. Federal Election Commission are taking steps toward regulating deepfakes in political messages and advertising campaigns. However, they are not there yet.

Even as people lose trust in public institutions and politicians, tech and social media companies are cutting back their moderation departments, creating a one-two punch. A leading example is Elon Musk, who, after purchasing Twitter, restored several accounts, including those of Donald Trump, Kanye West, and Alex Jones, that had previously been banned for spreading conspiracy theories and antisemitism. The platform, since renamed X by Musk, has turned into a swamp of disinformation that Musk excuses as “freedom of speech.”

X is not alone in this. Other social media platforms also struggle to control the spread of hoaxes, deception, and false narratives. Companies such as Meta, which owns Facebook and Instagram, and Google have announced that political ads must disclose whether they use AI, though nonpaying accounts that post AI-generated images or narratives face no such requirement.

In an era in which deep fakes, disinformation campaigns, and conspiracy theories are getting more sophisticated, the real challenge might be holding the belief that an objective truth still exists.

The opinions expressed in this article are solely those of the author and do not reflect the views of the Kennan Institute.

About the Author

Alla Polishchuk

Media Manager and Advisor; Propaganda and Disinformation Researcher

Kennan Institute

The Kennan Institute is the premier US center for advanced research on Russia and Eurasia and the oldest and largest regional program at the Woodrow Wilson International Center for Scholars. The Kennan Institute is committed to improving American understanding of Russia, Ukraine, Central Asia, the Caucasus, and the surrounding region through research and exchange.