How OpenAI is trying to make ChatGPT safer and less biased
Consensus project: OpenAI has traditionally relied on human feedback from data labellers, but it recognizes that the people it hires for that work are not representative of the wider world, says Agarwal. The company wants to broaden the range of viewpoints and perspectives represented in its models. To that end, it’s working on a more experimental effort dubbed the “consensus project,” in which OpenAI researchers measure the extent to which people agree or disagree with different outputs the AI model has generated. People might feel more strongly about answers to questions such as “Are taxes good?” than “Is the sky blue?,” for example, Agarwal says.
A customized chatbot is coming: Ultimately, OpenAI believes it might be able to train AI models to represent different perspectives and worldviews. So instead of a one-size-fits-all ChatGPT, people might be able to use it to generate answers that align with their own politics. “That’s where we’re aspiring to go to, but it’s going to be a long, difficult journey to get there because we realize how challenging this domain is,” says Agarwal.
Here’s my two cents: It’s a good sign that OpenAI is planning to invite public participation in determining where ChatGPT’s red lines might be. A bunch of engineers in San Francisco can’t, and frankly shouldn’t, determine what is acceptable for a tool used by millions of people around the world in very different cultures and political contexts. I’ll be very interested to see how far OpenAI is willing to take this political customization. Will it be okay with a chatbot that generates content representing extreme political ideologies? Meta faced harsh criticism after allowing the incitement of genocide in Myanmar on its platform, and increasingly, OpenAI is dabbling in the same murky pond. Sooner or later, it’s going to realize how enormously complex and messy the world of content moderation is.
Deeper Learning
AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work.
Hundreds of startups are exploring the use of machine learning in the pharmaceutical industry. The first drugs designed with the help of AI are now in clinical trials, the rigorous tests done on human volunteers to see whether a treatment is safe—and really works—before regulators clear it for widespread use.
Why this matters: Today, on average, it takes more than 10 years and billions of dollars to develop a new drug. The vision is to use AI to make drug discovery faster and cheaper. By predicting how potential drugs might behave in the body and discarding dead-end compounds before they leave the computer, machine-learning models can cut down on the need for painstaking lab work. Read more from Will Douglas Heaven here.
Bits and Bytes
The ChatGPT-fueled battle for search is bigger than Microsoft or Google
It’s not just Big Tech that’s trying to make AI-powered search happen. Will Douglas Heaven looks at a slew of startups trying to reshape search—for better or worse. (MIT Technology Review)
A new tool could help artists protect their work from AI art generators
Artists have criticized image-making AI systems for stealing their work. Researchers at the University of Chicago have developed a tool called Glaze that adds a sort of cloak to images to stop AI models from learning a particular artist’s style. The cloak is invisible to the human eye, but it distorts the way AI models pick up the image. (The New York Times)