How AI May Be Used to Create Custom Disinformation Ahead of 2024

It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.

When Russia tried to influence the 2016 US presidential election via the now-disbanded Internet Research Agency, the operation was run by humans who often lacked cultural fluency, or even fluency in English, and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end, and can even target individuals with personalized disinformation based on data they've collected. Generative AI will also make disinformation much easier to produce, experts say, increasing the volume that flows freely across the internet.

“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”

Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”

Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.

“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory.

Hany Farid, a professor of computer science at the University of California, Berkeley, says this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.

“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.

Purveyors of disinformation will try all sorts of tactics until they find what works best, Farid says, and much of what’s happening with these disinformation campaigns likely won’t be fully understood until after they’ve been in operation for some time. Plus, they only need to be somewhat effective to achieve their aims.
