• MysticKetchup@lemmy.world · 7 months ago

    But simply knowing the right words to say in response to a moral conundrum isn’t the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don’t respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

    This brings about worries that an AI might just be “convincingly bullshitting” about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM’s moral evaluations even if and when that AI hallucinates “inaccurate or unhelpful moral explanations and advice.”

    Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. “If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice,” they write.

    Great, so the headline of the article directly feeds into the issue the scientists are warning about when it comes to public perception of AI morality.

    • gregorum@lemm.ee · 7 months ago

      Just another example of journalism ignoring the science and content of its own articles and going for clickbait headlines instead.

    • SharkAttak@kbin.social · 7 months ago

      I’ve yet to be convinced that all these AIs aren’t just very good chatbots: they can line up words (or pixels) in a realistic way, but I feel there’s no reasoning behind them.
      A lot of people, and not just commoners, see “AI” and think “sci-fi robot!”

      • Moobythegoldensock@lemm.ee · 7 months ago

        What you described is exactly what an LLM is. I’m piloting one for work, and sometimes it is useful, while other times it makes up random shit.

      • Good_morning@lemmynsfw.com · 7 months ago

        They aren’t even “very good.” I thought I would use one to generate a short story that used a few specific words. It used about half of the requested words; when asked, it said “this is embarrassing” and tried again. I eventually gave up retrying: it never got all of the words, and when asked which words it had omitted, it got that wrong too. It feels like the quality has gone downhill from when they were first introduced.

  • IninewCrow@lemmy.ca · 7 months ago (edited)

    The biggest problem with emerging AI is that we are absolutely terrible parents.

    Humanity has a child that is going to become an amazing prodigy, and instead of teaching them to be decent, open, honest, compassionate and helpful … we are raising an entity that is learning that making money and concentrating power is the motivation for everything in life.

    We are trailer trash parents who are raising a child that will grow up to become more powerful than we could ever be. Or at the very least become a monstrous pet that will be controlled by whoever has the most money and power.

    I wonder what could possibly go wrong.