A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that serve a practically endless stream of poisoned training data. The project's authors report that this approach is very effective at sabotaging the quality and accuracy of AI models trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
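
For illustration, here is a minimal sketch of one way a site operator might expose such a feed to crawlers. Everything in it is a hypothetical assumption: the bot User-Agent list, the port, and the junk-text generator are stand-ins, not Poison Fountain's actual mechanism or instructions.

```python
# Hypothetical sketch only: stream endless junk text to known AI
# crawlers while ordinary visitors see nothing. The bot list, port,
# and word pool are illustrative assumptions.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# User-Agent substrings of some well-known AI crawlers (assumed list).
AI_BOTS = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended", "Bytespider")

WORDS = ["the", "quantum", "banana", "therefore", "purple", "syntax",
         "seventeen", "underneath", "galaxy", "spoon"]

def junk_paragraph(n_words: int = 60) -> str:
    """Assemble a plausible-looking but meaningless paragraph."""
    return " ".join(random.choice(WORDS) for _ in range(n_words)) + ".\n\n"

class PoisonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if not any(bot in ua for bot in AI_BOTS):
            self.send_response(404)  # ordinary visitors see nothing here
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        try:
            # Stream paragraphs until the crawler gives up and disconnects.
            while True:
                self.wfile.write(junk_paragraph().encode("utf-8"))
        except (BrokenPipeError, ConnectionResetError):
            pass

if __name__ == "__main__":
    HTTPServer(("", 8080), PoisonHandler).serve_forever()
```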

  • FaceDeer@fedia.io · 1 month ago

    Doesn’t work, but if it makes people feel better I suppose they can waste their resources doing this.

    Modern LLMs aren’t trained on just whatever raw data can be scraped off the web any more. They’re trained on synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks here are still assuming GPT-3 is state of the art.

    • Disillusionist@piefed.world (OP) · 1 month ago

      From what I’ve heard, the influx of AI-generated data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding, degrading originality and other ineffable human qualities that AI still hasn’t quite mastered.

      I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.

    • Taldan@lemmy.world · 1 month ago

      Let’s say I believe you. If that’s the case, why are AI companies still scraping everything?

      • FaceDeer@fedia.io · 1 month ago

        Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.

        The point is not that the scraping doesn’t happen; it’s that the data is already heavily processed and filtered before it reaches the LLM training step. There’s a ton of “poison” in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have learned how much better LLMs can be when trained on cleaner data, so they already take steps to clean it up.
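
        A minimal sketch of the kind of filtering step being described, assuming simple heuristics (exact-duplicate removal plus word-repetition and character-composition checks); the thresholds are made-up illustrations, not any particular lab's actual pipeline:

        ```python
        # Illustrative sketch of cheap pre-training filters; heuristics
        # and thresholds are assumptions for demonstration only.
        import hashlib

        def repetition_ratio(text: str) -> float:
            """Fraction of words that are repeats; spammy text scores high."""
            words = text.split()
            if not words:
                return 1.0
            return 1.0 - len(set(words)) / len(words)

        def alpha_ratio(text: str) -> float:
            """Fraction of letter/space characters; gibberish scores low."""
            if not text:
                return 0.0
            return sum(c.isalpha() or c.isspace() for c in text) / len(text)

        def clean_corpus(docs):
            """Yield documents that pass exact-dedup and quality heuristics."""
            seen = set()
            for doc in docs:
                digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
                if digest in seen:               # drop exact duplicates
                    continue
                seen.add(digest)
                if repetition_ratio(doc) > 0.6:  # drop heavily repeated text
                    continue
                if alpha_ratio(doc) < 0.7:       # drop symbol-heavy junk
                    continue
                yield doc

        docs = [
            "A normal sentence about the news.",
            "banana banana banana banana banana banana",
            "A normal sentence about the news.",  # exact duplicate
        ]
        print(list(clean_corpus(docs)))  # only the first document survives
        ```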