• crystalmerchant@lemmy.world

    Of course they can’t. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and that’s what creates hallucinations.

  • chonglibloodsport@lemmy.world

    Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and your brain was fed only the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t, because you’d lack any epistemic tools for confirming your knowledge.

    That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.

    • Excrubulent@slrpnk.net

      First of all, I agree with your point that it is all hallucination.

      However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine who is trustworthy. In reality we are brains in vats; we just have a fairly common interface that makes consensus reality possible.

      The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.

      I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won’t get truth-judging machines until we get actually-thinking machines.

  • kaffiene@lemmy.world

    I’m 100% sure he can’t. Or at least, not with LLMs specifically. I’m not an expert, so feel free to ignore my opinion, but from what I’ve read, “hallucinations” are a feature of the way LLMs work.

  • 🇰 🔵 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net

    Here’s how you stop AI from hallucinating:

    Turn it off.

    Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn’t mean jack shit. Even a broken clock is right twice a day.

    “Only feed it accurate information.”

    Even that doesn’t work, because it just mixes and matches every element of its input to generate a new, novel output, which would inevitably be wrong.

  • JackbyDev@programming.dev

    That’s like saying you can’t be 100% sure you never have fake news at the top of search query results. It’s just a fact.

  • Buffalox@lemmy.world

    It’s kind of funny how AI has the exact same problems some humans have.
    I always thought AI wouldn’t have those kinds of problems, because it would be carefully fed accurate information.
    Instead, it’s trained on things like Facebook and the thing formerly known as Twitter.
    What an idiotic timeline we are in. LOL

    • treefrog@lemm.ee

      I thought the main issue was that AIs don’t really know how to say “I don’t know” or second-guess themselves, as that would take a much more robust architecture with multiple feedback loops. Like a brain.

      Anyway, LLMs aren’t the only AIs that do this, so being trained on Facebook data certainly isn’t the whole issue.

  • Blackmist@feddit.uk

    Seeing these systems just making shit up when they’re not sure on the answer is probably the closest they’ll ever come to human behaviour.

    We’ve invented the virtual politician.

    • iopq@lemmy.world

      Even people hallucinate. Under your definition, intelligence doesn’t exist.

      • Ultraviolet@lemmy.world

        “Hallucination” is an anthropomorphized term for what’s happening. The actual cause is much simpler: there’s no semantic distinction between true and false statements. Both are equally plausible as far as a language model is concerned, so long as the output is structured like an answer to the question being asked.

        • htrayl@lemmy.world

          That’s also pretty true for people, unfortunately. People are deeply incapable of differentiating fact from fiction.

          • kaffiene@lemmy.world

            No that’s not it at all. People know that they don’t know some things. LLMs do not.

      • heavy@sh.itjust.works

        No, really, if you understood how the language models work, you would understand it’s not really intelligence. We just tend to humanize it because that’s what our brains do.

        There are a lot of great articles that summarize how we got to this stage, and it’s pretty interesting. I’ll try to update this post with a link later.

        I think LLMs are useful (and fun) and have a place, but intelligence they are not.

        • iopq@lemmy.world

          I’m still waiting for a definition of intelligence that won’t suffer the same moving of goalposts the Turing Test did.

          • Barbarian@sh.itjust.works

            I’m happy with the Oxford definition: “the ability to acquire and apply knowledge and skills”.

            LLMs don’t have knowledge, as they don’t actually understand anything. They are algorithmic response generators that assign scores to tokens and spit out the highest-scoring token given all the previous tokens.

            If asked to answer 10*5, they can’t reason through the math. They can only recognize “10”, “*”, and “5” as a sequence of tokens that, in the training data, is usually followed by the token “50”. Thus “50” is the highest-scoring token, and is the answer the model will choose. Things get more interesting when you ask questions that aren’t in the training data. If it has nothing direct to copy from, it will regurgitate a sequence of tokens that sounds as close as possible to something in the training data: thus a hallucination.
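
            To make that concrete, here’s a toy sketch of greedy next-token selection. The scoring function is a made-up stand-in for a trained network’s weights; none of the numbers or names refer to any real model or API:

            ```python
            # Toy stand-in for a trained model's scoring function. A real LLM
            # derives these scores from billions of learned weights; the
            # numbers here are invented purely for illustration.
            def score_tokens(context: tuple[str, ...]) -> dict[str, float]:
                if context == ("10", "*", "5", "="):
                    # A pattern well covered by the "training data".
                    return {"50": 9.7, "15": 2.1, "500": 1.4}
                # Unfamiliar context: scores reflect surface similarity,
                # not arithmetic, which is where hallucinations creep in.
                return {"42": 1.0, "50": 0.9, "maybe": 0.5}

            def next_token(context: tuple[str, ...]) -> str:
                scores = score_tokens(context)
                # Spit out the highest-scoring token given the context.
                return max(scores, key=scores.get)

            print(next_token(("10", "*", "5", "=")))  # "50": pattern-matching, not math
            ```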

            • theherk@lemmy.world

              The human could be described in very similar terms. People think we’re magic or something, but we too are just weighted neural networks assembling outputs based strictly on training data built up through reinforcement. For the moment we are just much, much better at it, with massive models. Of course that is reductive, but many seem to forget that brains suffer similarly when outside their training data.

                • theherk@lemmy.world

                  I’m slightly confused. Which part needs an academic paper? I’ve made three admittedly reductive claims.

                  • Human brains are neural networks.
                  • Their outputs are based on training data built from reinforcement.
                  • We have a much more massive model than current artificial networks.

                  First, I’m not trying to make some really clever statement. I’m just saying there is a perspective from which describing the human brain can generally follow a similar description. Nevertheless, let’s look at the only three assertions I made. Given that the term “neural network” takes its name from the neurons that make up brains, I assume you don’t take issue with the first. On the second point, I don’t know if linking to scholarly research is helpful. Is it not well established that animals learn using reward circuitry, like the role of dopamine in neuromodulation? We also have… education, where we are fed information so that we retain it and can recount it down the road.

                  I guess maybe it is worth exploring the third, even though I really wasn’t intending to make a scholarly statement. Here is an article in Scientific American that puts the number of neural connections at around 100 trillion. Now, how that equates directly to model parameters is absolutely unclear, but even if you count glial cells instead, where the number can be as low as 40–130 billion according to “The search for true numbers of neurons and glial cells in the human brain: A review of 150 years of cell counting”, that count is of the same order of magnitude as current models’ parameter counts. So if your issue is that AI models are actually larger than the human brain’s, maybe there is something cogent there. But given that there is likely at least a 1000:1 ratio of neural connections to neurons, I just don’t think that is fair at all.
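
                  Back-of-envelope, using the figures above (the model parameter count is my own rough assumption, not a number from either source):

                  ```python
                  synapses = 100e12   # ~100 trillion connections (Scientific American figure)
                  glia_low, glia_high = 40e9, 130e9  # glial cell range from the cited review
                  params = 1e11       # assumed order of magnitude for current large models

                  print(glia_low / params, glia_high / params)  # 0.4 1.3 -> same order
                  print(synapses / params)                      # 1000.0 -> three orders larger
                  ```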

    • dch82@lemmy.zip

      Intelligence is whatever does the job and gets it done well.

  • AdrianTheFrog@lemmy.world

    They can’t. AI has hallucinations. Google has shown that AI can’t rely on external sources either.

    • FiniteBanjo@lemmy.today

      At least LLMs will. The only real fix we’ve seen is running the output through additional specialized LLMs to try to massage out the errors, but that just increases cost and scale for marginal gains.
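
      Roughly this shape, as a sketch; generate() and critique() here are hypothetical stand-ins for calls to two separately prompted models, not any real API:

      ```python
      def generate(prompt: str) -> str:
          # Toy "drafting" model: confidently emits a canned (wrong) answer.
          return "The Eiffel Tower is in Berlin."

      def critique(prompt: str, draft: str) -> tuple[bool, str]:
          # Toy "checker" model: fixes the one error it knows about.
          if "Berlin" in draft:
              return False, draft.replace("Berlin", "Paris")
          return True, draft

      def answer(prompt: str, max_rounds: int = 3) -> str:
          draft = generate(prompt)
          for _ in range(max_rounds):              # every round is another model call,
              ok, draft = critique(prompt, draft)  # which is where the extra cost comes in
              if ok:
                  break
          return draft  # a better-scrubbed guess, not a guarantee of truth

      print(answer("Where is the Eiffel Tower?"))
      ```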

  • StaySquared@lemmy.world

    I don’t know why they’re trying to shove AI down our throats. They need to take their time and allow it to evolve.

    • Snowclone@lemmy.world

      Because it’s all corporate, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you do that? Well, AI could turn the world upside down like the dot-com boom, so they dump tons of money into AI. So… is the AI done? Oh no no no, we’re at machine learning; actual AI is pretty far down the road. What, we’re firing the AI department heads and releasing this machine-learning software as 100% all-the-way-done AI?

      It’s the same reason Section 8 housing and low-cost housing don’t work under corporate capitalism. It’s profitable to take government money, and it’s profitable to run low-rent apartments. That’s not the problem; the problem is THEY NEED THE GROWTH NOW NOW NOW!!! If you own a condo building with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% increase over 5 years if they can invest in a 12% increase over 4 years. So no one ever invests in low-rent or Section 8 housing.

  • Deconceptualist@lemm.ee

    As others are saying, it’s 100% not possible, because LLMs are (as Google optimistically describes them) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

    “Hallucinations” is a total misnomer, because the text generation isn’t tied to reality in the first place; it’s just mathematically “what next word is most likely”.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
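
    A toy bigram model shows the idea. Real LLMs use deep networks over enormous corpora, but the output is still just a probability distribution over next words (the corpus below is invented for illustration):

    ```python
    from collections import Counter, defaultdict

    corpus = "the tower is in paris . the tower is tall .".split()

    # Count how often each word follows each other word.
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def most_likely_next(word: str) -> str:
        # Return the most frequent follower; at no point is any
        # notion of truth or meaning consulted.
        return followers[word].most_common(1)[0][0]

    print(most_likely_next("tower"))  # "is", because that's what the counts say
    ```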

    • _number8_@lemmy.world

      all we know about ourselves is what’s in our memories. the way normal writing or talking works is just picking whichever words sound best, in order