• Darkard@lemmy.world · 10 months ago

    It’s going to drive the AI into madness, as it will be trained on bot posts written by itself in a never-ending loop of more and more incomprehensible text.

    It’s going to be like putting a sentence into Google Translate, running it through five different languages and then back into the first: you get complete gibberish.

    • echo64@lemmy.world · 10 months ago

      AI actually has huge problems with this. If you feed AI-generated data into models, the newly trained model falls apart extremely quickly. There does not appear to be any good solution for this; it’s the equivalent of AI inbreeding.

      This is the primary reason why most AI models aren’t trained on anything from after 2021. The internet is just too full of AI-generated data.
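
      A rough way to see the effect (just a toy sketch, with a Gaussian standing in for the “model”, nothing like a real LLM):

          import random
          import statistics

          # Toy "AI inbreeding" loop: the "model" only learns the mean and spread
          # of its training data, and each new generation is trained purely on
          # samples drawn from the previous generation's fitted model. With small
          # samples the estimation errors compound: the mean wanders and the
          # spread tends to collapse, so later generations look less and less
          # like the original data.
          random.seed(1)
          data = [random.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "real" data

          for generation in range(1, 51):
              mu = statistics.mean(data)
              sigma = statistics.stdev(data)
              data = [random.gauss(mu, sigma) for _ in range(10)]  # train only on model output
              if generation % 10 == 0:
                  print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")

      After a few dozen generations the samples barely resemble the original distribution, which is the “inbreeding” problem in miniature.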

      • Ultraviolet@lemmy.world · 10 months ago

        This is why LLMs have no future. No matter how much the technology improves, they can never have training data from after 2021, which becomes more and more of a problem as time goes on.

        • TimeSquirrel@kbin.social · 10 months ago

          You can have AIs that detect other AIs’ content and decide whether or not to incorporate it.
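
          Roughly this idea, as a sketch (the detector below is a made-up stand-in so the example runs; a real one would be a trained classifier, and those are famously unreliable):

              from typing import Callable, Iterable, List

              def filter_corpus(
                  docs: Iterable[str],
                  detector: Callable[[str], float],  # estimated probability the text is AI-generated
                  threshold: float = 0.5,
              ) -> List[str]:
                  # Keep only documents the detector thinks are probably human-written.
                  return [doc for doc in docs if detector(doc) < threshold]

              # Stand-in detector, not a real classifier.
              def toy_detector(text: str) -> float:
                  return 0.95 if "as an ai language model" in text.lower() else 0.05

              corpus = [
                  "As an AI language model, I cannot do that.",
                  "Went hiking this weekend, the trail was muddy but worth it.",
              ]
              print(filter_corpus(corpus, toy_detector))

          The hard part is the threshold: set it too aggressively and you also throw away human text the detector misjudges.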

            • TimeSquirrel@kbin.social · 10 months ago

              Doesn’t look like we’ll have much of a choice. They’re not going back into the bag.
              We definitely need some good AI content filters. Fight fire with fire. They seem to be good at this kind of thing (pattern recognition), way better than any procedurally programmed system.