Worldcoin just officially launched. Here’s why it’s already being investigated.

“Worldcoin’s proposed identity solution is problematic whether or not other companies and governments use it. Of course, it would be worse if it were used more broadly without so many key questions being answered,” says Eileen. “But I think at this stage, it’s clever marketing to try to convince everyone to get scanned and sign up so that they can achieve the ‘fastest’ and ‘biggest onboarding into crypto and Web3’ to date, as Blania told me last year.”

Eileen points out that Worldcoin has also not yet clarified whether it still uses the biometric data it collects to train its artificial intelligence models, or whether it has deleted the biometric data it already collected from test users and was using in training, as it told MIT Technology Review it would do before launch. 

“I haven’t seen anything that suggests that they’ve actually stopped training their algorithms—or that they ever would,” Eileen says. “I mean, that’s the point of AI, right? That it’s supposed to get smarter.”

What else I’m reading

  • Meta’s oversight board, which issues independently drafted and binding policies, is reviewing how the company handles misinformation about abortion. Currently, the company’s moderation decisions are a bit of a mess, according to this nice explainer-y piece in Slate. We should expect the board to issue new abortion-information-specific policies in the coming weeks. 
  • At the end of July, Twitter rebranded to X, in a strange, unsurprising-yet-surprising move by its new czar Elon. I loved Casey Newton’s obituary-style take, in which he argues that Musk’s $44 billion investment was really just a wasteful act of “cultural vandalism.” 
  • Nobel-winning economist Joseph Stiglitz is worried that AI will worsen inequality, and he spoke with Scientific American about how we might get off the path we seem to currently be on. Well worth a read! 

What I learned this week

Bots on social media are likely being supercharged by ChatGPT. Researchers from Indiana University have released a preprint paper that shows a Twitter botnet of over 1,000 accounts, which the researchers call fox8, “that appears to employ ChatGPT to generate human-like content.” The botnet promoted fake-news websites and stolen images, and it’s an alarming preview of a social media environment fueled by AI and machine-generated misinformation. Tech Policy Press wrote a great quick analysis on the findings, which I’d recommend checking out.

Additional reporting from Eileen Guo.
