A team from the University of Chicago has created a tool designed to give artists more control over their work and protect it from being scraped into AI training data without their consent.
AI models use massive amounts of data and images as part of their learning process. This has sparked heated debate on the ethics and legality of using copyrighted material and artists’ work without permission or compensation.
Numerous companies and organizations are working on a solution, but MIT Technology Review reports that Ben Zhao, a professor at the University of Chicago, led a team that created a new tool: Nightshade. This is the same team that created Glaze, a masking tool that hides an artist's unique style from AI models.
According to the outlet, Nightshade works in a similar manner: it alters an image's pixels so that AI models trained on the picture interpret it as something completely different from what it actually depicts, while the changes remain imperceptible to the naked eye.
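To illustrate only the "imperceptible pixel change" part of that idea, the minimal Python sketch below adds a small, bounded amount of noise to an image so the edit is invisible to a viewer. This is not Nightshade's actual algorithm, which computes targeted perturbations against specific text-to-image models; the filenames and the noise budget here are hypothetical.

```python
# Illustrative sketch only -- NOT Nightshade's actual method.
# It shows that pixel changes kept within a small budget
# (+/- epsilon out of 255) leave an image visually unchanged.
import numpy as np
from PIL import Image

def perturb_image(src: str, dst: str, epsilon: int = 4) -> None:
    """Add bounded random noise to every pixel and save the result."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    Image.fromarray(np.clip(img + noise, 0, 255).astype(np.uint8)).save(dst)

# Hypothetical filenames for demonstration purposes.
perturb_image("artwork.png", "artwork_shaded.png")
```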
MIT Technology Review says Zhao’s team will integrate the two tools:
The team intends to integrate Nightshade into Glaze, and artists can choose whether they want to use the data-poisoning tool or not. The team is also making Nightshade open source, which would allow others to tinker with it and make their own versions. The more people use it and make their own versions of it, the more powerful the tool becomes, Zhao says. The data sets for large AI models can consist of billions of images, so the more poisoned images can be scraped into the model, the more damage the technique will cause.
Artists seem eager to use Nightshade to protect their work.
“It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent,” said illustrator and artist Eva Toorenent.