• greenbit@lemmy.zip

      Fascist social media influencers are already pushing generated bodycam and surveillance videos to stoke xenophobia etc. A large enough mass of the population doesn’t know what’s real anymore, and that’s the goal.

  • jordanlund@lemmy.world

    I wish they had broken it out by AI. The article states:

    “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

    But I don’t see that anywhere in the linked PDF of the “full results”.

    This sort of study should also be re-done from time to time to track AI version numbers.

  • SaraTonin@lemmy.world

    There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias, even if only to the extent of “my people neutral, your people biased”. That’s less true for LLMs. There’s research showing that because LLMs present information authoritatively, people not only tend to trust them, but are actually less likely to check the sources an LLM provides than they would be with other ways of being presented with information.

    And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.

    • Axolotl_cpp@feddit.it

      I hear a lot of people say “let’s ask ChatGPT” like the AI is a god and knows everything 🙏. That’s a big problem, to be honest.

  • paraphrand@lemmy.world

    Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”

  • AnUnusualRelic@lemmy.world

    Yet the LLM seems to be what everyone is pushing, because it will supposedly get better. Haven’t we reached the limits of this model, and shouldn’t other types of engines be tried?

  • morrowind@lemmy.ml

    “Misrepresent” is a vague term. [Actual graph from the study]

    The main issue is the usual one… sources. AI is bad at sourcing without a proper pipeline. They note that Gemini is the worst, at 72%.

    Note that they’re not testing models with their own pipelines; they’re testing other people’s products. This is more indicative of the product design than of the actual models. A rough sketch of what I mean by a sourcing pipeline is below.
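
    To illustrate, here’s a minimal, made-up sketch of what a “proper pipeline” for sourcing could look like: retrieve passages first, then force the model to answer only from them and cite each one, so every claim traces back to a checkable URL. All names, URLs, and data here are hypothetical; this isn’t any product’s actual architecture.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Passage:
        source_url: str
        text: str

    def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
        # Toy keyword-overlap scoring, standing in for a real search/index step.
        scored = sorted(
            corpus,
            key=lambda p: sum(w in p.text.lower() for w in query.lower().split()),
            reverse=True,
        )
        return scored[:k]

    def build_grounded_prompt(query: str, passages: list[Passage]) -> str:
        # The model only sees numbered passages and must cite them by number,
        # so every claim in its answer can be traced to a checkable URL.
        numbered = "\n".join(
            f"[{i + 1}] ({p.source_url}) {p.text}" for i, p in enumerate(passages)
        )
        return (
            "Answer using ONLY the passages below. Cite a passage number "
            "like [1] after every claim.\n\n"
            f"{numbered}\n\nQuestion: {query}"
        )

    if __name__ == "__main__":
        corpus = [
            Passage("https://example.org/study", "The study found sourcing errors in many assistant responses."),
            Passage("https://example.org/gemini", "Gemini showed the most sourcing issues of the assistants tested."),
        ]
        hits = retrieve("which assistant had the most sourcing issues", corpus, k=2)
        print(build_grounded_prompt("Which assistant had the most sourcing issues?", hits))
        # A real product would then call an LLM with this prompt and verify
        # that each cited passage actually supports the sentence citing it.
    ```

    The point is that sourcing quality comes from this wrapper, not from the model itself, which is why testing products tells you more about the pipeline than the underlying LLM.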

    • Log in | Sign up@lemmy.world

      This graph clearly shows that AI is also shockingly bad at factual accuracy, and at telling a news story in a way that lets someone who didn’t already know about it understand the issues and context. I think you’re misrepresenting this graph as being mainly about sources, but here’s a better summary of the point you seem to be making:

      AI’s summaries don’t match their source data.

      So actually, the headline is pretty accurate in calling it misrepresentation.