How AI Will Amplify Foreign Election Interference in 2024

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Chris Olson of The Media Trust examines the impact of AI as a psyops weapon and how it will be utilized in foreign election interference.

With seven months left until the 2024 presidential election, a report from the Microsoft Threat Analysis Center (MTAC) has found that Russian, Chinese, and Iranian actors are already targeting American voters with online election interference campaigns. Their objectives vary in the specifics, but in general they all seek to push the U.S. in a direction favorable to their own national goals, worsen the country’s political divide, and distract the American public from issues that could provoke a U.S. response.

None of this is new: the same groups have been playing the same game since 2016. But with the arrival and ongoing progress of generative AI, many wonder how the game might change in 2024. Some have predicted doom, while others argue that the impact of AI is greatly exaggerated.

In isolation, the anti-alarmists have a point. But if we set aside the hype and focus on the results AI is already delivering for global businesses, it’s clear that reality favors those who are worried: at its current stage of advancement, AI can amplify disinformation created by foreign actors through hyper-targeted digital campaigns of unprecedented scale and relevance to individual voters.

AI Makes Everything More Efficient

There is a danger to sensationalistic reporting on AI: the more of it there is, the less attuned people become to the ways AI is actually changing the world, and how it might affect them in the near future. Only two percent of Americans are more concerned than excited about AI. It’s a classic “boy who cried wolf” scenario that can only be preempted by acknowledging the limits of AI as it currently exists.

For some time, experts feared an Internet overwhelmed by deepfake videos, automated social accounts, and fully synthetic content, leading to mass deception and confusion. As MTAC’s recent report points out, these fears have not come to pass, nor are they realistic scenarios for how AI will intersect with U.S.-focused influence operations.

In the world of business, AI has not automated away most jobs; instead, it has taken on routine tasks and improved the efficiency of core business functions in non-trivial ways. Likewise, in the world of cyber-crime, AI has not replaced human hackers, but it has made reconnaissance, exploit writing, and social engineering conversations easier.

The bottom line is this: whatever people are already doing, AI helps them do it better. Nation-state actors who are already experienced in foreign election interference campaigns will simply use AI to make their jobs easier and more effective. Among other things, they’ll quickly discover how it can help them reach the right people with the right message.

Better Messages, Better Targeting

Since 2016, we have tended to assume that content on the Internet spreads organically. But this assumption is grossly out of date: today, the Internet largely runs on user targeting and content recommendation algorithms. Even paid content masquerades as organic content, while rich, targeted advertisements follow us everywhere we go.

There is an invisible labyrinth of technology that makes this state of affairs possible, capturing data that flows through all the sites we browse and the devices we use – even passive ones that sit in a living room. Before the arrival of ChatGPT, this targeting infrastructure was already precise enough for advertisers to reach a single individual in a specific location.

With the arrival of advanced AI, that infrastructure is getting even better, and the benefits to foreign actors will be numerous:

  1. It is easier than ever to analyze large volumes of qualitative data about users – not merely quantitative metrics, but conversational data as well, driving informed inferences about a user’s beliefs, personality, and voting preferences. MTAC observes that Chinese actors have been putting questions to Americans through social media sock puppets, gathering intelligence on political sentiments. AI makes sifting through this information and utilizing it far easier, especially for non-native English speakers.
  2. It is improving the technology that social media platforms and AdTech companies use to deliver messages to the right users. According to Google, advertisers who used its AI-driven Performance Max features achieved 18 percent more conversions than advertisers who didn’t. Meanwhile, Meta is integrating AI features across all of its products – including tools for advertisers – and so are leading firms like WPP.
  3. It is possible to personalize messages in real time, on a scale never before imaginable. In the past, organizations that wanted to tailor their message for different groups had to define the parameters of those groups and write multiple versions of the same message. Now, AdTech can do this on the fly – not for small groups, but for specific individuals as they browse the Web.
  4. Translations are of higher quality. According to one study, AI has improved machine translations to an accuracy of 97 percent. This is undoubtedly one reason that threat groups – according to MTAC – are targeting not only English speakers but French, Arabic, Finnish, and Spanish speakers as well.

Until now, nation-state actors targeting Internet users across the world have faced many barriers to entry – language is one of them, and reaching the right users is another. Thanks to AI, those barriers are simply disappearing. It’s a brave new world, and the long-term consequences are unclear. But the immediate consequences are straightforward and predictable.

Individuals Are the Battleground

No matter how far AI advances, individuals are usually overconfident in their own ability to detect AI-generated content; they are more likely to worry that society as a whole will fall for AI-driven deceptions. Research indicates that these are precisely the wrong priorities: according to MTAC, “collectively, crowds do well in sniffing out fakes on social media. Individuals independently assessing the veracity of media, however, are less capable”.

Individuals, then, are the ideal target for misinformation and propaganda – not the Internet as a whole. The same is true for cyber-crime: threat groups like BlackCat – responsible for the recent UnitedHealth breach – already depend on digital advertising as a primary delivery mechanism for ransomware attacks. Combined with cutting-edge AdTech, generative AI gives foreign actors the exact tools they need to replicate the approach that has worked so well in other domains.

We have slept on the potency of digital advertising, user targeting, and content recommendation algorithms that put messages directly in front of users, and on the risk they pose to our democracy. Going forward, the battle Moscow and Beijing are waging for the soul of America will be intensely personal, and that should be reflected in our approach to foreign election interference and malign influence.
