• Deconceptualist@lemm.ee
    2 months ago

    As others are saying, it’s 100% not possible, because LLMs are (as Google optimistically describes them) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for the filters that have been hand-coded in (which is of course human intelligence, not AI).

    “Hallucinations” is a total misnomer, because the text generation isn’t tied to reality in the first place; it’s just a mathematical guess at “what next word is most likely”.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
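
    To make “what next word is most likely” concrete, here’s a toy sketch in Python (the context string and the probabilities are invented for illustration; a real LLM computes a distribution over its whole vocabulary from billions of learned parameters, but the selection step is the same idea):

        # Toy next-word predictor. Note there is no notion of truth or
        # meaning anywhere in here, only a lookup of which continuation
        # is most probable.
        probs = {
            # hypothetical, hand-made distribution for a single context
            "The cat sat on the": {"mat": 0.55, "floor": 0.25, "moon": 0.01},
        }

        def next_word(context: str) -> str:
            candidates = probs[context]
            return max(candidates, key=candidates.get)

        print(next_word("The cat sat on the"))  # -> "mat" (likely, not "true")

    A real model scores every word in its vocabulary for any context, but the mechanism is still “pick a likely word”, never “check a fact”, which is exactly why the output can read fluently while being unmoored from reality.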

    • _number8_@lemmy.world
      2 months ago

      All we know about ourselves is what’s in our memories. The way normal writing or talking works is just picking whichever words sound best, one after the other.