Greg Rutkowski, a digital artist known for his surreal style, opposes AI art, but his name and style have been used frequently by AI art generators without his consent. In response, his work was removed from the training dataset for Stable Diffusion 2.0. The community, however, has since created a LoRA model that emulates Rutkowski’s style against his wishes. Some argue this is unethical; others justify it on the grounds that Rutkowski’s art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

  • SweetAIBelle@kbin.social
    1 year ago

    Generally speaking, the way training works is this:
    You put together a folder of pictures, all the same size. It would’ve been 1024x1024 in this case; other models have used 768x768 or 512x512. For every picture, you also have a text file with a description.
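    A dataset like that is usually paired up by filename, with each picture sitting next to a caption file of the same name. A minimal sketch of that layout (the folder, filenames, and captions here are all made up):

```python
from pathlib import Path
import tempfile

# Build a tiny stand-in dataset: each "image" file has a matching
# caption .txt file with the same filename stem.
root = Path(tempfile.mkdtemp())
(root / "dragon_01.png").write_bytes(b"\x89PNG...")  # placeholder bytes, not a real image
(root / "dragon_01.txt").write_text("a dragon flying over a burning castle")
(root / "knight_02.png").write_bytes(b"\x89PNG...")
(root / "knight_02.txt").write_text("a knight in ornate armor, portrait")

# Pair every picture with its description by filename stem.
pairs = [(img.name, img.with_suffix(".txt").read_text())
         for img in sorted(root.glob("*.png"))]
for name, caption in pairs:
    print(name, "->", caption)
```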

    The training software takes a picture, slices it into squares, generates a square of random noise the same size, then trains on how to change that noise back into that square. It associates that training with tokens from the description that went with the picture, and it keeps doing this across the whole dataset.
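    Stripped down to a toy, one training step looks roughly like this. A single number stands in for an image square, one weight stands in for the whole network, and the noise is a fixed value rather than a fresh random draw so the example is reproducible:

```python
import math

# Toy forward process: mix a "clean" value with noise at a known strength.
x0 = 0.8      # clean value taken from the training image
noise = 0.5   # stand-in for a random noise draw (fixed for reproducibility)
alpha = 0.5   # how much of the original survives the noising step
x_noisy = math.sqrt(alpha) * x0 + math.sqrt(1 - alpha) * noise

# The "model" is one weight; real models are huge neural nets conditioned
# on the caption's tokens. Training: predict the noise from the noisy
# input, and nudge the weight to shrink the squared error.
w = 0.0
for _ in range(500):
    pred = w * x_noisy
    w -= 0.1 * (pred - noise) * x_noisy  # gradient step on (pred - noise)^2 / 2

print(round(w * x_noisy, 3))  # the model now recovers the noise: 0.5
```

    Once the model can predict the noise that was added, subtracting that prediction is what "changes noise back into the square".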

    Then later, when someone types a prompt into the software, it tokenizes the prompt, generates more random noise, and applies the denoising steps associated with the tokens you typed in. The pictures in the folder aren’t actually kept anywhere in the model.
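    Generation can be sketched the same way: tokenize the prompt, start from fresh noise, and repeatedly apply the learned denoising update. The vocabulary and the stand-in denoiser below are made up; a real model predicts noise to subtract at each step rather than pulling toward a target:

```python
import random

random.seed(1)

vocab = {"a": 0, "dragon": 1, "castle": 2}  # made-up token table

def tokenize(prompt):
    # Real tokenizers are learned; this just looks words up in the table.
    return [vocab[word] for word in prompt.lower().split() if word in vocab]

def denoise_step(x, tokens):
    # Stand-in denoiser: pull the value toward a target picked by the
    # tokens. A real model outputs a predicted noise to subtract instead.
    target = sum(tokens) / len(tokens)
    return x + 0.2 * (target - x)

x = random.gauss(0.0, 1.0)   # start from pure random noise
tokens = tokenize("a dragon castle")
for _ in range(50):          # run the denoising schedule
    x = denoise_step(x, tokens)

print(tokens, round(x, 4))
```

    Nothing in this loop reads back any training image; only the learned denoising behavior (here, the toy update rule) is applied.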

    From the side of the person doing the training, though, it’s just: put together the pictures and descriptions, set some settings, and let the training software do its work.

    (No money involved in this one. One person trained it and plopped it on a website where people can download LoRAs for free…)