
University of Chicago develops tool to protect art from unauthorized AI scraping

Researchers at the University of Chicago have developed a technology that allows artists to protect their art from unwanted AI scraping.

The past year saw the rise of AI art platforms, such as Midjourney and Stable Diffusion, that allow users to generate “art” simply by giving the platform a written prompt. To do so, however, the machine learning models behind these platforms need to be “trained” on art made by human artists. This is done by having their algorithms “scrape” image-sharing sites for artwork to train on.

This process of scraping has raised concerns among artists worried that their creative works are being gobbled up by machine learning models and used to create new images with neither credit nor compensation. As such, many have started looking for ways to protect their art from unauthorized AI scraping.

To solve this, a team of researchers at the University of Chicago worked with artists to develop a new tool that promises to let them protect their work.

Called Glaze, the technology works by adding a second, nearly invisible layer on top of a piece of art. This layer contains a second piece of art in a totally different style from the original. While barely perceptible to humans, the layer is highly visible to machine learning algorithms, confusing any that try to scrape the piece for training.
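The idea of a “nearly invisible layer” can be pictured as a small, bounded perturbation added to the image’s pixels. The sketch below is illustrative only: Glaze actually computes its perturbation by optimizing against the features an AI model extracts, whereas here the `add_style_cloak` function (a hypothetical name) simply clips an arbitrary perturbation to a tight per-pixel budget so the visible change stays minimal.

```python
import numpy as np

def add_style_cloak(image: np.ndarray, perturbation: np.ndarray,
                    epsilon: float = 0.05) -> np.ndarray:
    """Overlay a low-amplitude perturbation on an image.

    Illustrative sketch, not Glaze's real algorithm: we clip the
    perturbation to +/- epsilon per pixel (images assumed in [0, 1])
    so the change remains nearly invisible to a human viewer.
    """
    cloak = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + cloak, 0.0, 1.0)

# Example: a 64x64 RGB "artwork" and a random stand-in for the
# style-distorting perturbation Glaze would compute.
rng = np.random.default_rng(0)
art = rng.random((64, 64, 3))
perturbation = rng.normal(scale=0.1, size=art.shape)
cloaked = add_style_cloak(art, perturbation)

# Per-pixel change never exceeds the epsilon budget.
print(float(np.abs(cloaked - art).max()))
```

In the real system, the perturbation is not random noise but is chosen specifically to shift how a model perceives the artwork’s style, as Zhao explains below.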

Glaze is specifically aimed at how machine learning platforms allow their users to “prompt” images based on a specific artist’s style. Currently, a user can ask for an illustration based on the style of an artist of their choice and, if the platform has scraped enough of that artist’s work, it’ll be able to generate something that looks roughly similar to the original.

“What we do is we try to understand how the AI model perceives its own version of what artistic style is. And then we basically work in that dimension—to distort what the model sees as a particular style,” explained Ben Zhao, a professor of computer science at the University of Chicago, to TechCrunch. “So it’s not so much that there’s a hidden message or blocking of anything … It is, basically, learning how to speak the language of the machine learning model, and using its own language—distorting what it sees of the art images in such a way that it actually has a minimal impact on how humans see.”

But by covering a piece with another, barely visible, piece of art in a different style, Glaze ends up confusing the algorithm.

“This comes from a fundamental gap between how AI perceives the world and how we perceive the world. This fundamental gap has been known for ages. It is not something that is new. It is not something that can be easily removed or avoided,” he adds.

Already, artists online have started experimenting with Glaze to see if it works. So far, tests in the hands of these artists do seem to show that Glaze is doing its job. One example, run by AI researcher David Marx on art by Karla Ortiz, shows an AI unable to copy her style. In the same tweet, Marx congratulated Zhao and his team on their work.

Glaze is currently available as a public beta on the University of Chicago’s website.




Franz Co

managing editor | addicted to RGB | plays too many fighting games
