• echo64@lemmy.world
    10 months ago

    AI actually has huge problems with this. If you feed AI-generated data back into models, the new training falls apart extremely quickly. There doesn't appear to be any good solution for this — researchers call it "model collapse" — basically the equivalent of AI inbreeding.

    This is the primary reason why most AI models aren't trained on data from after 2021. The internet is just too full of AI-generated content.
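    The "inbreeding" effect can be seen in a toy setting (my own illustration, not from any of the posts): fit a trivial "model" — just a Gaussian's mean and standard deviation — to some data, sample synthetic data from the fit, refit on that, and repeat. Over generations the estimated spread tends to drift and shrink as the tails of the original distribution get lost; with more complex models the degradation is far faster.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)  # generation 0: "real" data

scales = []
for generation in range(10):
    # "Train" a model: estimate the distribution's parameters from the data.
    mu, sigma = data.mean(), data.std()
    scales.append(sigma)
    # Next generation trains only on the previous model's synthetic output.
    data = rng.normal(mu, sigma, size=1000)

# scales records the estimated spread per generation; it tends to decay
# as resampling repeatedly underestimates the tails.
print(scales)
```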

    • Ultraviolet@lemmy.world
      10 months ago

      This is why LLMs have no future. No matter how much the technology improves, they can never have training data past 2021, which becomes more and more of a problem as time goes on.

      • TimeSquirrel@kbin.social
        10 months ago

        You can have AIs that detect other AIs' content and decide whether or not to incorporate that info into training data.
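        A sketch of what that gatekeeping could look like. Note the detector here is a dummy stand-in — a real one would be a trained classifier, and the marker-phrase heuristic below is purely illustrative:

```python
def detector_score(text: str) -> float:
    """Stand-in probability that `text` is AI-generated (placeholder logic).

    A real detector would be a trained classifier; this dummy just flags
    a telltale marker phrase for illustration.
    """
    return 0.95 if "as an ai language model" in text.lower() else 0.1

def filter_corpus(docs, threshold=0.5):
    """Keep only documents the detector considers likely human-written."""
    return [d for d in docs if detector_score(d) < threshold]

docs = [
    "Local bakery wins award for sourdough.",
    "As an AI language model, I cannot...",
]
print(filter_corpus(docs))  # only the first doc survives the filter
```

        The hard part in practice is that detector accuracy degrades as generators improve, so the filter and the generator end up in an arms race.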

          • TimeSquirrel@kbin.social
            10 months ago

            Doesn’t look like we’ll have much of a choice. They’re not going back into the bag.
            We definitely need some good AI content filters. Fight fire with fire. AIs seem to be good at this kind of thing (pattern recognition), way better than any procedurally programmed system.