A New York Times copyright lawsuit could kill OpenAI
Authors and entertainers are also suing the tech company for damages that could total in the billions.

  • Melllvar@startrek.website · 6 months ago

    If OpenAI owns a Copyright on the output of their LLMs, then I side with the NYT.

    If the output is public domain (that is, you or I could use it commercially without OpenAI’s permission), then I side with OpenAI.

    Sort of like how a spell checker works. The dictionary is Copyrighted, the spell check software is Copyrighted, but using it on your document doesn’t grant the spell check vendor any Copyright over it.

    I think this strikes a reasonable balance between creators’ IP rights, AI companies’ interest in expansion, and the public interest in having these tools at our disposal. So, in my scheme, either creators get a royalty, or the LLM company doesn’t get to Copyright the outputs. I could even see different AI companies going down different paths and offering different kinds of service based on that distinction.

    • gram_cracker@lemmynsfw.com · 6 months ago

      If LLMs like ChatGPT are allowed to produce non-copyrighted work after being trained on copyrighted work, you can effectively use them to launder copyright, which would be equivalent to abolishing it at the limit.

      A much more elegant and equitable solution would be to just abolish copyright outright. It’s the natural direction of a country that chooses to invest in LLMs anyways.

  • SatanicNotMessianic@lemmy.ml · 6 months ago

    The NYT has a market cap of about $8B. MSFT has a market cap of about $3T. MSFT could take a controlling interest in the Times for the change it finds in the couch cushions. I’m betting a good chunk of the c-suites of the interested parties have higher personal net worths than the NYT has in market cap.

    I have mixed feelings about how generative models are built and used. I have mixed feelings about IP laws. I think there needs to be a distinction between academic research and for-profit applications. I don’t know how to bring the laws into alignment on all of those things.

    But I do know that the interested parties who are developing generative models for commercial use, in addition to making their models available for academics and non-commercial applications, could well afford to properly compensate companies for their training data.

    • LWD@lemm.ee · 6 months ago

      The danger of the rich and evil simply buying out their critics is a genuine risk. After all, it’s what happened to Gawker when Peter Thiel decided he personally didn’t like them, neutering their entire network.

      Regarding OpenAI the corporation, they pulled an incredibly successful bait and switch, pretending first to gather data for educational purposes, and then switching to being a for-profit as soon as it benefited them. In a better world or even a slightly more functional American democracy, their continued existence would be deemed inexcusable.

      • SatanicNotMessianic@lemmy.ml · 6 months ago

        I completely agree. I don’t want them to buy out the NYT, and I would rather move back to the laws that prevented over-consolidation of the media. I think that Sinclair and the consolidated talk radio networks represent a very real source of danger to democracy. I think we should legally restrict the number of markets a particular broadcast company can be in, and I also believe that we can and should come up with an argument that’s the equivalent of the Fairness Doctrine that doesn’t rest on something as physical and mundane as the public airwaves.

  • Tony Bark@pawb.social · 6 months ago

    The problem with copyright is that everything is automatically copyrighted. The copyright symbol is purely symbolic at this point. Both sides are technically right, even though the courts have ruled that anything an AI outputs is actually in the public domain.

    • Even_Adder@lemmy.dbzer0.com · 6 months ago

      Works involving the use of AI are copyrightable. Also, the Copyright Office’s guidance isn’t law. It reflects only the office’s interpretation based on its experience, and it isn’t binding on the courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. They are the lowest rung on the ladder for deciding what the law means.

        • Even_Adder@lemmy.dbzer0.com · 6 months ago

          This ruling is about something else entirely. He tried to argue that the AI itself was the author and that copyright should pass to him because he had hired it.

          An excerpt from your article:

          In 2018, Dr. Thaler sought to register “Recent Entrance” with the U.S. Copyright Office, listing the Creativity Machine as its author. He claimed that ownership had been transferred to him under the work-for-hire doctrine, which allows the employer of the creator of a given work or the commissioner of the work to be considered its legal author. However, in 2019, the Copyright Office denied copyright registration for “Recent Entrance,” ruling that the work lacked the requisite human authorship. Dr. Thaler requested a review of his application, but the Copyright Office once more refused registration, restating the requirement that a human have created the work.

          Copyright is afforded to humans; you can’t register an AI as an author, just as a monkey can’t hold copyright.

          • wikibot@lemmy.world · 6 months ago

            Here’s the summary for the Wikipedia article you mentioned in your comment:

            Between 2011 and 2018, a series of disputes took place about the copyright status of selfies taken by Celebes crested macaques using equipment belonging to the British wildlife photographer David J. Slater. The disputes involved Wikimedia Commons and the blog Techdirt, which have hosted the images following their publication in newspapers in July 2011 over Slater’s objections that he holds the copyright, and People for the Ethical Treatment of Animals (PETA), who have argued that the copyright should be assigned to the macaque. Slater has argued that he has a valid copyright claim because he engineered the situation that resulted in the pictures by travelling to Indonesia, befriending a group of wild macaques, and setting up his camera equipment in such a way that a selfie might come about. The Wikimedia Foundation’s 2014 refusal to remove the pictures from its Wikimedia Commons image library was based on the understanding that copyright is held by the creator, that a non-human creator (not being a legal person) cannot hold copyright, and that the images are thus in the public domain.


            • Even_Adder@lemmy.dbzer0.com · 6 months ago

              Then you should amend your comment to:

              even though the courts have ruled that anything attributed to an AI as an author is actually in the public domain.

              Because as typed, it is wrong.

  • sugarfree@lemmy.world · 6 months ago

    We hold ourselves back for no reason. This stuff doesn’t matter, AI is the future and however we get there is totally fine with me.

    • Zaderade@lemmy.world · 6 months ago

      AI without proper regulation could be the downfall of humanity. There are many pros, but the cons may outweigh them. Just my opinion.

      • sugarfree@lemmy.world · 6 months ago

        AI development will not be hamstrung by regulations. If governments want to “regulate” (aka kill) AI, then AI development in their jurisdiction will move elsewhere.