Humans may be more likely to believe disinformation generated by AI
However, the company has also cautioned against overestimating the impact of disinformation campaigns. The authors of OpenAI's report say further research is needed to determine which populations are at greatest risk from AI-generated inauthentic content, as well as the relationship between an AI model's size and the persuasiveness of its output.
It’s too early to panic, says Jon Roozenbeek, a postdoctoral researcher who studies misinformation at the University of Cambridge’s Department of Psychology and was not involved in the study.
Although distributing disinformation online may be easier and cheaper with AI than with human-staffed troll farms, platform moderation and automated detection systems remain obstacles to its spread, he says.
“Just because AI makes it easier to write a tweet that might be slightly more persuasive than whatever some poor sap in some factory in St. Petersburg came up with, it doesn’t necessarily mean that all of a sudden everyone is ripe to be manipulated,” he adds.