cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

  • Pantoffel@feddit.de · 1 year ago

    I don’t think the issue is corps feeding the internet into AI systems. The real issue is gatekeeping of information: only granting access while milking individuals for data via trackers, for money via subscriptions, and for more money via ads (which we also pay for with those subscriptions).

    Another, larger issue that I fear is often ignored is the amount of control large corporations (and, in theory, governments) can have over us just by looking at the traces we leave on the internet. Look at Russia and China for real-world examples of this.

    • kibiz0r@midwest.social · 1 year ago

      As an open source contributor, I believe information (facts and techniques) should be free.

      As an open source contributor, I also know that two-way collaboration only happens when users understand where the software came from and how they can communicate with the original author(s).

      The layer of obfuscation that LLMs add, where the code is really from XYZ open-source project, but appears to be manifesting from thin air… worries me, because it’s going to alienate would-be collaborators from the original authors.

      “AI” companies are not freeing information. They are colonizing it.

      • Meowoem@sh.itjust.works · 1 year ago

        My open source project benefits hugely from the free-to-access LLM coding tools available. That’s a far bigger positive than the abstract fear that someone might feel alienated because the guy copy-pasting their code doesn’t know who he’s copying from.

        And yes, obviously the LLM isn’t copying code. It’s learning from a huge range of sources and combining them to make exactly what you ask for (well, not exactly, but with some needling it gets there eventually). Even if it were copying, that still wouldn’t disrupt collaboration, because that’s not how collaboration works. No one says “instead of coding all the boring elif statements required for my function determining if something is a prime, I’ll search code snippets and collaborate with them.” Every worthwhile collaborator on my project has been an active user of the software who wanted to help improve it or add functions. AI won’t change that, and if it does, it’ll only be because it makes coding so easy that I don’t need collaborators.

  • Meowoem@sh.itjust.works · 1 year ago

    “Everything new is bad and scary.” I really don’t understand why this viewpoint is so common in a tech community.

    AI will solve so many problems with the current internet and make it far easier to use. And there’s no such thing as overgrazing Wikipedia. I certainly wrote my small portions of it well aware that they’d be used by AI, and that’s a great thing. Plus, they can certainly afford the bandwidth.

  • Meowoem@sh.itjust.works · 1 year ago

    Traditional media says the thing that displaces them is terrible and scary and should be stopped… we’ve heard it before with the internet, with social media, and right back to TV and radio…

    It will be the greatest discovery tool for human-created content that we’ve ever had. Imagine being able to sort through all the junk and actually find what you’re looking for, being able to actually filter stuff and search within context. And imagine not needing a journalist to string together their assumptions and sketchy understanding of science, but instead being able to ask questions and get answers that draw from press releases, published papers, interviews, and public statements.

    Yes, it will get harder to use the web like we did ten years ago, but that’s okay, because doing that is already rubbish.