Introducing Glaze: Protecting Artist Signature Styles

Written by Edward Feral

This piece comes from the reporting team at News of the AI: a combination of our human reporters and the various AI writing partners we’re testing! Stories are never 100% AI nor 100% human; it’ll be fun.

Since the advent of generative AI models, the world has seen a dramatic shift in how content is created and consumed, particularly in the art sector. AI models like MidJourney and Stable Diffusion, trained on extensive databases of online images, have revolutionized art creation while posing significant ethical challenges. These databases often include images that are copyrighted, private, or sensitive, and many artists have found their work swept into training datasets such as LAION-5B without their consent or credit. Moreover, a practice known as “style mimicry” has raised substantial concerns around artists’ rights, intellectual property, and personal identity.

The Problem of Style Mimicry

Style mimicry lets users replicate an artist’s unique style by “fine-tuning” a model like Stable Diffusion on that artist’s work, often via a lightweight technique called LoRA (low-rank adaptation). The result is a model capable of producing a potentially unlimited number of images that mimic the targeted artist’s style. These copies, often poor facsimiles, are then widely distributed online, sometimes still carrying the original artist’s name in the metadata.
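To make the mechanics concrete, here is a minimal sketch of how mimicry works with off-the-shelf tools, using the open-source diffusers library. The LoRA path is hypothetical, standing in for adapter weights that someone has fine-tuned on a scraped set of an artist’s images.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a base Stable Diffusion model.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Attach low-rank adapter (LoRA) weights fine-tuned on scraped artworks.
    pipe.load_lora_weights("./artist_style_lora")  # hypothetical path

    # Generate an image "in the style of" the targeted artist.
    image = pipe("a city street at dusk, in the targeted artist's style").images[0]
    image.save("mimicked.png")

Adapters like this can typically be trained from a few dozen images on a single consumer GPU, which is part of why the practice has spread so quickly.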

The consequences of style mimicry are far-reaching and potentially devastating for artists. Beyond the obvious loss of income and commissions, the spread of low-quality synthetic copies can dilute an artist’s brand and damage their reputation. More fundamentally, the unauthorized use of an artist’s unique style, often a crucial element of their identity, can feel akin to identity theft.

Introducing Glaze: An Artist’s Shield

Glaze, a system developed by researchers at the University of Chicago to counteract style mimicry, works by computing a set of minimal changes to an artwork. These changes, although imperceptible to human eyes, dramatically alter how AI models perceive the art’s style. An AI model might, for instance, perceive a glazed charcoal portrait as a piece of modern abstract art. Consequently, when someone attempts to use the model to generate art mimicking that artist’s style, the result deviates significantly from the intended style.
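The published research behind Glaze frames this as a constrained optimization. In rough paraphrase (our notation, simplified from the paper): given an artwork x, a target style T, and the feature extractor Φ of a text-to-image generator, Glaze searches for a perturbation δ that pulls the glazed image toward the target style in feature space, while a perceptual distance keeps the change invisible:

    \min_{\delta}\ \bigl\lVert \Phi(x+\delta) - \Phi(\Omega(x,T)) \bigr\rVert_2^2
    \quad \text{subject to} \quad \mathrm{LPIPS}(x,\, x+\delta) \le p

Here Ω(x, T) is a style-transferred copy of x (the charcoal portrait re-rendered as abstract art, say), LPIPS is a standard perceptual distance, and p is the budget the artist controls through Glaze’s intensity setting.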

Crucially, Glaze does not act as a watermark or a hidden message. Instead, it introduces a new dimension to the artwork, one perceptible to AI models but not to humans. Because this dimension cannot be easily located, computed, or reverse-engineered, Glaze is resistant to common image processing operations such as cropping, filtering, resizing, and compression.
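These robustness claims can be probed empirically. The sketch below is not part of Glaze itself; it simply measures, with CLIP standing in as a proxy for a generator’s feature extractor, whether the feature-space gap between an original and its glazed copy survives a JPEG round-trip. The file names are hypothetical.

    import io
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed(img):
        # Normalized CLIP image embedding, a proxy for "how an AI sees style."
        inputs = processor(images=img, return_tensors="pt")
        with torch.no_grad():
            feats = model.get_image_features(**inputs)
        return feats / feats.norm(dim=-1, keepdim=True)

    def jpeg_roundtrip(img, quality=75):
        # Simulate the lossy compression images undergo when shared online.
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf)

    original = Image.open("original.png")  # hypothetical file
    glazed = Image.open("glazed.png")      # hypothetical file

    shift = 1 - (embed(original) @ embed(glazed).T).item()
    shift_jpeg = 1 - (embed(original) @ embed(jpeg_roundtrip(glazed)).T).item()
    print(f"feature shift: {shift:.4f}, after JPEG round-trip: {shift_jpeg:.4f}")

If the cloak were fragile, the second number would collapse toward zero after compression; the Glaze team’s claim is that it does not.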

[Embedded tweet from Karla Ortiz demonstrating the use of Glaze on one of her paintings]

The Potential and Limitations of Glaze

While Glaze promises significant protection against style mimicry, it is not a permanent or universally applicable solution. As with any machine learning tool, Glaze has its limitations. For instance, Glaze-induced changes might be more visible on art with flat colors and smooth backgrounds. Additionally, the rapid advancements in machine learning may someday outpace Glaze’s protective abilities. However, Glaze is an essential step towards creating more robust artist-centric protection tools to counter AI mimicry.

How Does Glaze Work?

Glaze draws on the same properties that give rise to adversarial examples: small alterations to an input that drastically change an AI model’s classification of it. By modifying an image in ways that significantly alter AI perception but not human perception, Glaze tricks the models that train on it. These alterations cannot be easily removed from the artwork and are robust against common image processing operations. In addition, the cloaking alterations Glaze computes are specific to each image, which enhances the system’s effectiveness against different AI models.
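For readers who want the idea in code, here is a pared-down sketch of the cloaking optimization. The feature_extractor argument is a hypothetical differentiable stand-in for a generator’s image encoder, and the style-transferred target is assumed to be precomputed; the real system bounds perceptual distance with LPIPS rather than the simple pixel clamp used here.

    import torch

    def cloak(image, style_target, feature_extractor,
              budget=0.03, steps=200, lr=0.01):
        # image, style_target: float tensors in [0, 1], shape (1, 3, H, W).
        # Optimize a perturbation that drags the image toward the target
        # style in feature space while staying visually negligible.
        delta = torch.zeros_like(image, requires_grad=True)
        target_feats = feature_extractor(style_target).detach()
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            cloaked = (image + delta).clamp(0, 1)
            loss = torch.nn.functional.mse_loss(
                feature_extractor(cloaked), target_feats
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-budget, budget)  # keep the change imperceptible
        return (image + delta).detach().clamp(0, 1)

Because the loop optimizes against feature representations rather than any single model’s outputs, perturbations computed this way tend to transfer across models that learn similar features, which is the intuition behind Glaze’s cross-model effectiveness.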

Glaze: Future Developments

Glaze is currently incompatible with mobile devices because of its high computational demands, but a web service version is in development to provide access for artists without a dedicated GPU. Furthermore, while Glaze is not designed to defend against Image2Image transformations, it may offer some protection against such attacks at very high intensity settings.

While significant challenges remain in ensuring the rights and identities of artists in an era of advanced AI, systems like Glaze represent an encouraging step forward. By developing and enhancing these tools, we can begin to protect artists against style mimicry, encouraging creativity and individualism in a world increasingly influenced by AI.
