ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • Coreidan@lemmy.world · 5 months ago

    We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.
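    The "predict what you'll say next" objective can be illustrated with a toy model. This is only a minimal sketch: real LLMs use neural networks over subword tokens, not the bigram counts shown here, and the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word-prediction objective.
# Real LLMs learn these statistics with a neural network over
# billions of tokens; a bigram counter only sketches the idea.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`,
    # or None if the word was never seen with a follower.
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

    Note that the model has no notion of being right or wrong, only of which continuation was most frequent in its training data.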

    • platypus_plumba@lemmy.world · 5 months ago

      What is intelligence?

      Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligent. For example, a rock isn’t intelligent. I think everyone would agree with that.

      Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.

      An LLM will give you much better results in many areas currently used to evaluate human intelligence.

      For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have is their inputs and outputs? Maybe all we care about are the outputs, not the process.

      If there were an LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?

        • platypus_plumba@lemmy.world · 5 months ago

          Things we know so far:

          • Humans can train LLMs with new data, which means they can acquire knowledge.

          • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.

          • We know multi-modal models are possible, which means these models can acquire skills.

          • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

          • We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable by human intelligence.

          … What’s missing here in that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.

          • Coreidan@lemmy.world · 5 months ago

            Can an LLM learn to build a house and then actually do it?

            LLMs are proven to be wrong about a lot of things. So I would argue these aren’t “skills” and they aren’t capable of acting on those “skills” effectively.

            At least with human intelligence you can be wrong and quickly realize that you are wrong. LLMs have no clue whether they are right or not.

            There is a big difference between actual skill and just a predictive model based on statistics.

  • Buffalox@lemmy.world · 5 months ago

    “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”

    Wow, that sounds very much like a Phil Collins tune. Just add “Oh Lord” and people will probably say it’s deep! But it’s a ChatGPT answer to the question “What is a computer?”

  • grandma@sh.itjust.works · 5 months ago

    God I hate websites that autoplay unrelated videos and DON'T LET ME CLOSE THEM TO READ THE FUCKING ARTICLE

  • Pratai@lemmy.cafe · 5 months ago

    That shit should never have existed to begin with. At least not before it could be regulated/limited in function.

  • Sanctus@lemmy.world · 5 months ago

    It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.