Mind-reading AI can translate brainwaves into written text: A system that records the brain’s electrical activity through the scalp can turn thoughts into words with help from a large language model – but the results are far from perfect

  • nevemsenki@lemmy.world
    11 months ago

LLMs don’t do this, though. An LLM doesn’t look up past SAT questions it has seen and answer from them; it uses some process of “reasoning” to do it.

The “reasoning” in an LLM is literally the statistical probability of which word would follow which word. It has no real concept of what it talks about beyond the pre-built relationship matrices between words and language rules. That’s why LLMs confidently hallucinate obvious bullshit from time to time: to them there’s no difference in meaning between truthful text and absolute bonkers text, it’s just words that should probably follow each other.
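The “words that should probably follow each other” idea can be sketched as sampling from a conditional probability table. This is a toy illustration, nothing like a real transformer; the vocabulary and weights below are entirely made up:

```python
import random

# Toy table: for a context word, the possible next words and their
# learned weights. All entries here are invented for illustration.
next_word_weights = {
    "the": {"cat": 0.4, "dog": 0.35, "theorem": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def sample_next(context: str) -> str:
    """Sample the next word in proportion to its weight; no notion of
    truth is involved, only how often words followed each other."""
    candidates = next_word_weights[context]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next("the"))  # prints "cat", "dog", or "theorem"
```

A real model computes these probabilities with billions of parameters over a huge context instead of a lookup table, but the sampling step at the end is the same kind of weighted draw.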

    • Not_mikey@lemmy.world
      11 months ago

All inference is just statistical probability. Every answer you give outside of your direct experience is just you inferring what the answer might be. Even things we hold as verifiable truth that we haven’t experienced are just a guess that the person who told them to us isn’t lying or has some sort of proof for their statement.

Take a piece of knowledge like “Biden won the 2020 election.” You and I would probably agree this is the truth, but we can’t possibly “know” it’s the truth or connect it to some verifiable experience; we never counted every ballot or stood at every polling station. We “know” it’s the truth because more people, and more respectable people, told us it was, and our brains make a statistical guess that their answer is right based on their weight. Just like an LLM, other people will hallucinate or bullshit, come down on the other side of that guess, assert the opposite, and even make things up to go along with that story.

This, in essence, is what reasoning is: you weigh the possibilities of either side being correct and pick the one that has more weight. That’s why science, an epistemological application of reason, relies so heavily on statistics…
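That “weigh the sources and pick the heavier side” process can be sketched in a few lines. The sources and trust weights below are invented for illustration, not real data:

```python
# Each report is (claim, trust weight). Weights are made-up stand-ins
# for how much you trust each source.
reports = [
    ("Biden won", 0.9),  # e.g. certified count
    ("Biden won", 0.8),  # e.g. independent audit
    ("Trump won", 0.2),  # e.g. low-trust rumor
]

def weigh(reports):
    """Accumulate trust per claim and believe the best-supported one."""
    totals = {}
    for claim, trust in reports:
        totals[claim] = totals.get(claim, 0.0) + trust
    return max(totals, key=totals.get)

print(weigh(reports))  # prints "Biden won"
```

The point of the sketch is that the conclusion is a weighted guess, not a direct observation: flip the trust weights and the same procedure confidently outputs the opposite claim.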