Are TikTok and X Amplifying Antisemitic Content? It’s Increasingly Hard to Know.
Researchers like Mr. Venkatagiri said such data lacked context: “You have to look at the whole information environment, at every single variant of that hashtag, whether it’s being used in support or against a particular topic, what other text is being used, if it’s just a tweet or in a video, comments, links,” he said. “There’s so much more you have to think about beyond just looking at this one-off analysis.”
Tech giants have said that they are trying to balance the interests of researchers with users’ privacy rights, while also ensuring the data is not used for commercial purposes.
“It’s not that there shouldn’t be guardrails placed around researchers, but when those guardrails are placed by the companies themselves, it introduces challenges to what kind of research gets done and by whom,” Ms. Brown said.
The Anti-Defamation League has talked with TikTok for more than a year about advocacy groups’ getting access to its A.P.I., but the war has highlighted its urgency, said Yaël Eisenstat, a vice president at the Jewish advocacy group.
It is “prohibitively difficult” to independently determine whether TikTok is promoting content that favors Israel or Palestinians or analyze its management of antisemitic content, she said.
As of this year, academic researchers at nonprofit universities in the United States and Europe can apply for free access to TikTok's research A.P.I. with a defined research proposal, subject to a two- to three-week approval process. (Such approval processes are common across the platforms.) The company said that it had already received proposals to study content related to the Israel-Hamas war.
The Digital Services Act, a new European Union law, now requires large online platforms to provide real-time data to researchers studying the risks of social media, and has pushed companies like Meta and YouTube to offer new tools for researchers. Similar requirements are being proposed in the Platform Accountability and Transparency Act in the United States.
Meta, which has faced criticism over flawed data and tussled in 2021 with a New York University research group, has expanded researchers' access to its content archive (the company noted this summer that queries of its library take place in "controlled-access environments" that do not allow researchers to download data). YouTube, after years of pressure from groups like the Mozilla Foundation, is also opening up to independent research.
Susan Benesch, who runs the Dangerous Speech Project, a research group, hopes to study content emerging from the Gaza conflict but, for now, is mostly relying on anecdotal evidence from acquaintances working in trust and safety at social media platforms.
She knows that the companies, wary of public criticism, have little incentive to release data to researchers. Transparency could, however, be "a huge opportunity" for society, she said.
“There’s a gold mine with all of these different veins of invaluable information hidden in there that no researchers in the course of human history could have even dreamed about until now,” Ms. Benesch said. “The platforms still won’t give it to us, but now at least it’s there.”