• U7826391786239@lemmy.zip

    I don’t think it’s emphasized enough that AI isn’t just making up bogus citations to nonexistent books and articles; increasingly, actual articles and other sources are completely AI-generated too. So a reference to a source might be “real,” but the source itself is complete AI slop bullshit.

    https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

    https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

    The actual danger of it all should be apparent, especially in any field related to health science research.

    And of course these fake papers are then used to further train AI, causing factually wrong information to spread even more.
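
    To make that feedback loop concrete, here’s a toy simulation - every number in it is invented, and nothing comes from the linked studies - showing how the error rate can settle above the human baseline once each generation trains partly on the previous generation’s output:

    ```python
    # Toy model of the training feedback loop described above. All constants
    # are invented for illustration; nothing comes from the linked studies.
    human_error = 0.05      # fraction of wrong claims in human-written text
    synthetic_share = 0.30  # fraction of new training data that is AI-generated
    amplification = 1.5     # assume synthetic text restates errors confidently

    error = human_error
    for generation in range(1, 6):
        synthetic_error = min(1.0, error * amplification)
        # Each generation trains on a mix of human text and prior model output.
        error = (1 - synthetic_share) * human_error + synthetic_share * synthetic_error
        print(f"generation {generation}: error rate ~ {error:.3f}")
    ```

    Even with these mild made-up assumptions the rate never returns to the human baseline; crank up synthetic_share and it climbs higher still.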

    • tym@lemmy.world

      The movie Idiocracy was a prophecy that we were too arrogant to take seriously.

      now go away, I’m baitin

        • Cethin@lemmy.zip

          Yep. I don’t care if a president is smart. I care if they listen to the experts. I don’t want one who thinks they know everything, because no one can.

      • CheeseNoodle@lemmy.world

        When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.

      • Obinice@lemmy.world

        Wouldn’t it be batein?

        It’s important we get this right

        for the new national anthem

    • vacuumflower@lemmy.sdf.org

      It’s new quantities, but it’s an old mechanism. Humans have been making shit up for as long as they’ve been talking.

      In olden days it was resolved by trust and closed communities (hence the various mystery cults of Antiquity, the Freemasons in relatively recent times, or academia back when it was a bit more protected).

      Still doable and not a loss - after all, you are ultimately only talking to people anyway. One can build all the same systems on an F2F (friend-to-friend) basis.

        • vacuumflower@lemmy.sdf.org

          That part of the problem makes the rules of the game more similar to how they were before the Internet. It’s almost a return to normalcy.

      • U7826391786239@lemmy.zip

        I’m not understanding what you’re saying. “Still doable and not a loss”??

        Sounds like something AI would say.

  • Null User Object@lemmy.world

    > Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

    No, no, apparently not everyone, or this wouldn’t be a problem.
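
    Ironically, hallucinated citations are among the easier kinds of slop to catch mechanically. As a rough illustration (my own sketch, not anything from the article - it uses the public Open Library search API, and book_exists is a hypothetical helper):

    ```python
    # Illustrative sketch: check whether a cited book title turns up in the
    # public Open Library search API. "book_exists" is a hypothetical helper,
    # not part of any real library workflow.
    import json
    import urllib.parse
    import urllib.request

    def book_exists(title: str, author: str | None = None) -> bool:
        """Return True if Open Library finds at least one matching record."""
        params = {"title": title}
        if author:
            params["author"] = author
        url = "https://openlibrary.org/search.json?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("numFound", 0) > 0

    print(book_exists("The Hobbit", "Tolkien"))  # a real book: expect True
    ```

    A zero-hit search doesn’t prove a citation is fake (and a hit doesn’t prove the content is sound), but it’s a cheap first filter.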

    • FlashMobOfOne@lemmy.world

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • SleeplessCityLights@programming.dev

    I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year thinking it’s accurate.

    • hardcoreufo@lemmy.world

      Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine does. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

      • BarneyPiccolo@lemmy.today

        I usually skip the AI blurb because they’re so inaccurate, and dig through the listings for the info I’m researching. If I go back and look at the AI blurb afterward, I can tell where it took various little factoids, and occasionally it’ll repeat some opinion or speculation as fact.

      • SocialMediaRefugee@lemmy.world

        I’ve asked it for a solution to something and it gives me A. I tell it A doesn’t work so it says “Of course!” and gives me B. Then I tell it B doesn’t work and it gives me A…

      • ironhydroxide@sh.itjust.works

        Agreed. And the search engines returning AI-generated pages masquerading as websites with real information is precisely why I spun up a SearXNG instance. It actually helps a lot.
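
        For anyone curious, querying a self-hosted instance programmatically is simple too. A hedged sketch (it assumes the instance runs at localhost:8080 and has JSON output enabled in its settings, which it isn’t by default):

        ```python
        # Hypothetical usage sketch for a self-hosted SearXNG instance.
        # Assumes JSON output ("json" in search.formats) is enabled in settings.
        import json
        import urllib.parse
        import urllib.request

        def searx(query: str, base: str = "http://localhost:8080") -> list[str]:
            """Return result URLs from a SearXNG instance's JSON API."""
            url = base + "/search?" + urllib.parse.urlencode({"q": query, "format": "json"})
            with urllib.request.urlopen(url, timeout=10) as resp:
                return [r["url"] for r in json.load(resp).get("results", [])]

        for link in searx("AI literacy libraries")[:5]:
            print(link)
        ```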

        • hardcoreufo@lemmy.world

          I do too. It’s pretty good, but I feel it’s not as good as search engines used to be - though through no fault of its own. I just think garbage sites have paid for SEO and clog up results no matter what.

    • SocialMediaRefugee@lemmy.world

      I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. “Notice how it never says where it is taking place? Notice how they never give any specific names?” Fortunately she eventually agrees with me but I feel like I’m teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.

    • SocialMediaRefugee@lemmy.world

      Half the time the results I get from ChatGPT are pretty bad. If I ask for simple code it’s pretty good, but ask it about how something works? Nope. All I need to do is slightly rephrase the question and I can get a totally different answer.

    • markovs_gun@lemmy.world

      I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away from it thinking that it’s some kind of superintelligence, or that it can be trusted as a means of gaining knowledge without external verification. Do they just never consider the possibility that it might not be fully accurate, and never bother to test it?

      I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example, I don’t know if this is still the case, but if you ask ChatGPT who wrote various books of the Bible, it will give not only the traditional view but specifically the evangelical Christian view on most versions of these questions. That makes sense, because evangelical writers are extremely prolific, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark” when this view hasn’t been favored in academic biblical studies for over 100 years, however traditional it may be. Similarly, asking it questions about early Islamic history gets you the religious views of Ash’ari Sunni Muslims and not the general scholarly consensus.

      • Echo Dot@feddit.uk

        I mean, I’ve used AI to write my job-mandated end-of-year self-assessment report. I don’t care about this; it’s not like they’ll give me a pay rise, so I’m not putting effort into it.

        The AI says I’ve led a project related to Windows 11 updates. I haven’t, but it looks accurate and no one else will be able to tell it’s fake.

        So I guess the reason is that people use the AI to talk about subjects they can’t fact-check, so it looks accurate.

    • jtzl@lemmy.zip

      They’re really good.*

      *You just gotta know the material yourself so you can spot errors, and you gotta be very specific and take it one step at a time.

      Personally, I think the term “AI” is an extreme misnomer. I’ve taken to calling ChatGPT “next-token prediction.” The notion that it’s intelligent is absurd. Like, is a dictionary good at words now???
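
      “Next-token prediction” is easy to demystify with a toy. This sketch (a made-up bigram table, nowhere near a real transformer) does the same basic move an LLM does: look at the text so far, pick a likely next token, repeat:

      ```python
      # A toy "language model": a bigram table built from a made-up corpus.
      # Like an LLM (at a vastly smaller scale), it only ever predicts a
      # plausible next token given the tokens so far - no facts, no meaning.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count which token follows which.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def generate(start: str, length: int = 8) -> str:
          out = [start]
          for _ in range(length):
              counts = following.get(out[-1])
              if not counts:
                  break
              # Greedy decoding: always take the most frequent next token.
              out.append(counts.most_common(1)[0][0])
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the cat sat on the"
      ```

      There’s no model of truth anywhere in that loop, just frequencies - which is the point.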

    • cub Gucci@lemmy.today

      I’m not using LLMs often, but I haven’t seen a single clear example of hallucination in six months now. I’m inclined to believe the recursive-calls approach works.

  • B-TR3E@feddit.org

    No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…

  • zanzo@lemmy.world

    Librarian here: the good news is that many libraries are standing up AI-literacy programs to show people not only how to judge AI outputs but also how to get better results. If your local library isn’t doing this, ask them why not.

  • MountingSuspicion@reddthat.com

    I believe I got into a conversation on Lemmy where I argued that there should be a big persistent warning banner stuck on every single AI chat app - “the following information has no relation to reality” or some such. The other person kept insisting it wasn’t needed. I’m not saying it would stop all of these incidents, but it couldn’t hurt.

  • Seth Taylor@lemmy.world

    I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.

  • vacuumflower@lemmy.sdf.org

    This and many other new problems can be solved by applying reputation systems (like the ones banks use for your credit rating, or the ones employers share with each other) in yet another direction: “This customer is an asshole; allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books.” Easy.
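
    Something like this minimal sketch (every name, penalty factor, and time budget is invented; no real library runs anything like it):

    ```python
    # Rough sketch of the reputation idea above. All constants are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Patron:
        name: str
        score: float = 1.0                 # 1.0 = fully trusted
        history: list[str] = field(default_factory=list)

    def record_request(patron: Patron, title: str, exists: bool) -> None:
        """Update a patron's score based on whether the requested work is real."""
        patron.history.append(title)
        if exists:
            patron.score = min(1.0, patron.score + 0.05)
        else:
            patron.score *= 0.7            # invented penalty for a bogus request

    def minutes_allocated(patron: Patron, base_minutes: int = 30) -> int:
        """Allocate staff time in proportion to reputation, with a floor."""
        return max(5, int(base_minutes * patron.score))

    p = Patron("example patron")
    record_request(p, "a real title", exists=True)
    record_request(p, "a hallucinated title", exists=False)
    print(minutes_allocated(p))            # 21 minutes after one bogus request
    ```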

    Then they’ll talk with their friends about how libraries are all possessed by a conspiracy, much as similarly intelligent people talk about the Jewish plot to take over the world, flat earth, and such.

    • porcoesphino@mander.xyz

      It’s a fun problem trying to apply this to the whole internet. I’m slowly adding sites with obviously generated blogs to Kagi, but it’s getting worse.

  • Armand1@lemmy.world

    Good article with many links to other interesting articles. It acts as a good summary of the situation this year.

    I didn’t know about the MAHA thing, but I guess I’m not surprised. It’s hard to know how much is incompetence and idiocy, and how much is malice.

  • Petr Janda@gonzo.markets

    Good, people need to realise AI is not intelligent. It’s like a program that has memorised millions of books, some truth, some fiction, but it doesn’t really have the intellectual capacity to distinguish truth from fiction.

  • BilSabab@lemmy.world

    As if a huge chunk of the genre section wasn’t already as formulaic as if it were written by AI.