We need to bring consent to AI 

Other experiments in AI to grant users more control show that there is clear demand for such features. 

Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion. 

Since December, around 5,000 people and several large online art and image platforms, such as Art Station and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means that their images are not going to be used in the next version of Stable Diffusion. 

Dryhurst thinks people should have the right to know whether their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with. 

“Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist,” he says.

Deeper Learning

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence. After a decade at Google, he is stepping down to focus on new concerns he now has about AI. MIT Technology Review’s senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google.

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.  
