• pavnilschanda@lemmy.world · 10 months ago

    Apparently people who specialize in AI/ML have a very hard time reproducing the poisoning’s claimed effects when they train models on ‘poisoned’ data. Is that true?

    • Even_Adder@lemmy.dbzer0.com · 10 months ago

      I’ve only heard that running images through a VAE just once seems to break the Nightshade effect, but no one’s really published anything yet.
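
      As a rough illustration of that VAE round-trip, here’s a minimal sketch using diffusers: encode the image with a Stable Diffusion VAE and immediately decode it again. The checkpoint name, file paths, and image size are assumptions for the example, not something anyone has actually benchmarked against Nightshade.

      ```python
      import torch
      from diffusers import AutoencoderKL
      from diffusers.image_processor import VaeImageProcessor
      from PIL import Image

      # Load a Stable-Diffusion-family VAE (this checkpoint name is an assumption).
      vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
      processor = VaeImageProcessor()

      # "poisoned.png" is a placeholder for an image that was run through Nightshade.
      image = Image.open("poisoned.png").convert("RGB").resize((512, 512))
      pixels = processor.preprocess(image)  # [1, 3, 512, 512] tensor scaled to [-1, 1]

      with torch.no_grad():
          latents = vae.encode(pixels).latent_dist.sample()  # compress to latent space
          decoded = vae.decode(latents).sample               # reconstruct the pixels

      processor.postprocess(decoded, output_type="pil")[0].save("roundtripped.png")
      ```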

      You can finetune models on known bad and incoherent images so that, when the resulting embedding is used in the negative prompt, they output better images. So there’s a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.
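
      In diffusers that idea roughly looks like the sketch below: load a textual-inversion embedding trained on deliberately bad images and put its trigger token in the negative prompt. The embedding file, trigger token, and model checkpoint here are hypothetical placeholders.

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Hypothetical embedding trained (e.g. via textual inversion) on purposefully
      # bad / incoherent images, bound to a placeholder trigger token.
      pipe.load_textual_inversion("bad-image-embedding.safetensors", token="<bad-quality>")

      # Putting the token in the negative prompt steers generation away from
      # whatever the embedding captured.
      image = pipe(
          prompt="a watercolor painting of a lighthouse at dusk",
          negative_prompt="<bad-quality>, blurry, deformed",
      ).images[0]
      image.save("output.png")
      ```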

        • lad@programming.dev · 10 months ago

        So there’s a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.

        This would be truly ironic

    • Miaou@jlai.lu · 10 months ago

      Until they come up with some preprocessing step, better feature extractors, etc. This is an arms race, like many others.