A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125

  • Gormadt@lemmy.blahaj.zone · 6 months ago

    Without checking the video, probably

    Generative AI has issues when generated content is fed back into its model, creating feedback loops. Now that there's more generated content out there (which may or may not be tagged properly), there's bound to be more of it fed back in (see the toy sketch after this comment).

    Edit after watching the video: I hadn’t even thought about the diminishing returns of larger data sets. That’s really interesting.
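
    A toy sketch of the feedback loop described above: a Gaussian "model" is repeatedly refit on samples drawn from its own previous fit. The setup and all of the numbers (starting distribution, sample size, generation count) are made up for illustration and are not from the video or the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0   # the original, "real data" distribution
    n_samples = 50         # a small sample per generation exaggerates the drift

    for generation in range(1, 21):
        synthetic = rng.normal(mu, sigma, n_samples)   # "generated content"
        mu, sigma = synthetic.mean(), synthetic.std()  # retrain on it
        if generation % 5 == 0:
            print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")

    # The fitted parameters drift away from (0, 1); over many generations the
    # spread tends to shrink, i.e. the model gradually forgets the original data.
    ```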

    • just another dev@lemmy.my-box.dev · 6 months ago

      On the other hand, if we move from larger and larger models trained on as much data as they can gather to less generic, more specific, high-quality datasets, I have a feeling there's still a lot to gain. But quality over quantity takes a lot more effort to maintain.

    • magic_lobster_party@kbin.run · 6 months ago

      The video is more about the diminishing returns when it comes to increasing the size of the training set. Accuracy follows a roughly logarithmic curve, so at some point just "adding more data" won't do much, because the cost becomes too high compared to the gain in accuracy.
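
      A rough sketch of that diminishing-returns curve, assuming a purely hypothetical log-scaling law (the constants a and b are made up, not taken from the paper or the video):

      ```python
      import math

      a, b = 0.05, 0.10  # hypothetical scaling constants

      def accuracy(n_examples: float) -> float:
          """Assumed log-scaling law: accuracy = a + b * log10(n_examples)."""
          return a + b * math.log10(n_examples)

      for n in [1e4, 1e5, 1e6, 1e7, 1e8]:
          print(f"{n:>12,.0f} examples -> accuracy ~ {accuracy(n):.2f}")

      # Each row gains the same +0.10 accuracy but needs 10x more data,
      # which is the "cost too high compared to the gain" point above.
      ```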