• Devolution@lemmy.world · 2 months ago

    We fire humans to prop up AI. But AI is so not there yet that we need humans to double-check it.

    🙃

  • 12212012@z.org · 2 months ago

    AI doesn’t hallucinate. It’s a fancy marketing term for when AI confidently does something in error.

    The tech billionaires would have a much harder time getting the masses of people who don’t understand the technology interested if they didn’t use words like “hallucinate.”

    It’s a data center, not a psychiatric patient

    • GamingChairModel@lemmy.world · 2 months ago

      It’s a fancy marketing term for when AI confidently does something in error.

      How can the AI be confident?

      We anthropomorphize the behaviors of these technologies to analogize their outputs to phenomena observed in humans. In many cases, the analogy helps people decide how to respond both to the technology itself and to that class of error.

      Describing things in terms of “hallucinations” tells users that the output shouldn’t always be trusted, regardless of how “confident” the technology seems.

  • Sculptus Poe@lemmy.world · 2 months ago

    I’m not anti-AI at all, but their LLM definitely isn’t ready to sit at the top of a Google search as if it were real information. Of course, posting promoted results at the top of searches as if they were real results had already devalued them. At the very least, the LLM result needs to be opt-in, with caveats. I would probably opt in, but I’d like off to be the default.

    • Hello Hotel@lemmy.world · 2 months ago

      I’ve had many cases where the LLM “spoils” the answer. My field of work requires me to search for exact pieces of text written by humans; it will pull those pieces of text and put them front and center, surrounded by text it wrote that never gets read.

  • A_norny_mousse@feddit.org · 2 months ago

    Everybody must jump onto the AI train no matter how often it derails!

    So who profits?

    The unholy alliance of tech giants and government.

    Who loses?

    Everybody else. This is US tax money being thrown into a money-burning machine.

    • shalafi@lemmy.world · 2 months ago

      I’ve seen 100 shitty job postings for rating AI results. It’s rather complicated and pays pennies.