ChatGPT has meltdown and starts sending alarming messages to users

AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

    • platypus_plumba@lemmy.world · 6 months ago

      Things we know so far:

      • Humans can train LLMs with new data, which means they can acquire knowledge (a fine-tuning sketch follows this list).

      • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t even dream of understanding.

      • We know multi-modal models are possible, which means these systems can acquire skills.

      • We have already seen these skills applied. If it weren’t possible to apply their outputs, we wouldn’t use them.

      • We have seen models learn and generate strategies that humans never conceived, and solve problems that human intelligence could not.

      … So what’s missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.
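      A minimal sketch of the first bullet, assuming the Hugging Face transformers and datasets libraries; “gpt2” and train.txt are placeholder choices, not anything specific to ChatGPT:

      ```python
      # Sketch: fine-tune a small causal LM on new text so its weights
      # "acquire" that knowledge. Model name and data file are placeholders.
      from datasets import load_dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling,
                                Trainer, TrainingArguments)

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      dataset = load_dataset("text", data_files={"train": "train.txt"})

      def tokenize(batch):
          return tokenizer(batch["text"], truncation=True, max_length=128)

      tokenized = dataset["train"].map(tokenize, batched=True,
                                       remove_columns=["text"])

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="out", num_train_epochs=1),
          train_dataset=tokenized,
          data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
      )
      trainer.train()  # afterwards the model's weights reflect the new text
      ```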

      • Coreidan@lemmy.world · 6 months ago

        Can an LLM learn to build a house and then actually do it?

        LLMs are demonstrably wrong about a lot of things, so I would argue these aren’t “skills”, and that they aren’t capable of acting on those “skills” effectively.

        At least with human intelligence you can be wrong and quickly realise that you are wrong. LLMs have no clue whether they are right or not.
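        One way to see this concretely, as a sketch assuming the transformers library with “gpt2” as a placeholder model: the network only exposes a probability distribution over next tokens; there is no separate “am I right?” signal.

        ```python
        # Sketch: inspect an LLM's next-token probabilities. The model
        # reports a confident-looking distribution whether or not the
        # top candidate is factually correct.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tokenizer("The capital of Australia is", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # scores for the next token
        probs = torch.softmax(logits, dim=-1)

        for idx in torch.topk(probs, 5).indices:
            print(repr(tokenizer.decode(int(idx))), f"{probs[idx].item():.3f}")
        # A wrong continuation (e.g. " Sydney") can outrank the right one,
        # and nothing in these numbers tells the model which is which.
        ```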

        There is a big difference between actual skill and just a predictive model based on statistics.
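        To make “a predictive model based on statistics” concrete, here is a toy bigram model in plain Python: it generates fluent-looking text purely from co-occurrence counts, with no notion of correctness. An LLM is this idea scaled up enormously, with a neural network in place of the count table.

        ```python
        # Toy sketch of a purely statistical text predictor: count which
        # word follows which, then generate by sampling those counts.
        import random
        from collections import defaultdict

        text = "the cat sat on the mat the cat ate the rat".split()

        follows = defaultdict(list)
        for prev, nxt in zip(text, text[1:]):
            follows[prev].append(nxt)

        word = "the"
        output = [word]
        for _ in range(8):
            # pick a statistically likely successor; fall back to any word
            word = random.choice(follows.get(word, text))
            output.append(word)

        print(" ".join(output))  # reads fluently, but nothing checks "rightness"
        ```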