• ribboo@lemm.ee · 7 points · 10 months ago

    It’s rather interesting that the board, which has a fairly strong scientific presence and not much of a commercial one, is getting such hate.

    People are quick to jump on for-profit companies that do everything in their power to earn a buck. Well, here you have a company that fired its CEO for leaning too far toward earning money.

    Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

    • theneverfox@pawb.social · 2 points · 10 months ago

      This was my first thought… But then why are the employees taking a stand against it?

      There’s got to be more to this story

    • PersnickityPenguin@lemm.ee · 1 point · 10 months ago

      Sounds like the workers all want to end up with highly valued stock when the company IPOs. Which is, and I’m just guessing here, the only reason anyone is doing AI right now.

    • 4L3moNemo@programming.dev · 1 point · 10 months ago

      An odd error for the company, indeed. • 505 HTTP Version Not Supported

      Just one vote missing until the • 506 Variant Also Negotiates

      Guess they’re stuck now. :D

  • conditional_soup@lemm.ee · 3 points · 10 months ago

    I’d like to know exactly why the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to reinstate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

        • Clbull@lemmy.world · 0 points · 10 months ago

          Isn’t CSAM classed as images and videos that depict child sexual abuse? Last time I checked, written descriptions alone did not count — unless they were being forced to look at AI-generated image prompts of such acts?

          • Strawberry@lemmy.blahaj.zone · 0 points · 10 months ago

            That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

            This is the quote in question. They’re talking about images.

        • SacrificedBeans@lemmy.world · 0 points · 10 months ago

          I’m sure there’s some loophole there, maybe between different countries’ laws. And if there isn’t — hey, we’ll make one!

        • smooth_tea@lemmy.world · 0 points · 10 months ago

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. Do you really think companies like that have any option other than to deal with this material?

    • Clbull@lemmy.world · 0 points · edited · 10 months ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

        • reksas@sopuli.xyz · 1 point · edited · 10 months ago

          This is actually extremely critical work if the results are going to be used by AIs that see wide use. It essentially determines the “moral compass” of the AI.

          Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord moves from a beyond-crazy idea to a plausible one (years and years in the future, not now, obviously).

    • GenesisJones@lemmy.world · 0 points · 10 months ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. A former employee gave an interview about the manual review of images that were CP and rape-related shit, iirc. Terrible stuff.

  • Even_Adder@lemmy.dbzer0.com · 1 point · 10 months ago

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee · 1 point · 10 months ago

      Yeah, Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion, since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription just to access it.

  • helenslunch@feddit.nl · 0 points (1 downvote) · edited · 10 months ago

    :grabs popcorn:

    Nothing more entertaining than employees standing up against management.