I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterwards is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year thinking its answers are accurate.
Idk how anyone searches the internet anymore. Search engines all turn up junk, so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine would. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.
I usually skip the AI blurb because it’s so inaccurate, and dig through the listings for the info I’m researching. If I go back and look at the AI blurb after that, I can tell where it took its various little factoids from, and occasionally it’ll repeat some opinion or speculation as fact.
Usually the blurb is pure opinion.
I pay for Kagi search. It’s amazing.
I do too. It’s pretty good, but I feel it’s not as good as search engines used to be, though through no fault of its own. I just think garbage sites have paid for SEO and clog up the results no matter what.
I’ve asked it for a solution to something and it gives me A. I tell it A doesn’t work so it says “Of course!” and gives me B. Then I tell it B doesn’t work and it gives me A…
I feel like I go through the whole alphabet of options before giving up and rtfming.
Agreed. And the search engines returning AI-generated pages masquerading as websites with real information is precisely why I spun up a searXNG instance. It actually helps a lot.
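For anyone curious, here’s a minimal sketch of querying a self-hosted instance over its JSON API. This assumes the instance runs at http://localhost:8888 and that “json” is enabled under search.formats in settings.yml; adjust both for your own setup.

```python
# Query a self-hosted SearXNG instance over its JSON API.
# Assumes the instance is at http://localhost:8888 and that "json"
# is enabled under search.formats in settings.yml.
import requests

def searx_query(query: str, base_url: str = "http://localhost:8888") -> list[dict]:
    resp = requests.get(
        f"{base_url}/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    # Each result carries at least a title, a url, and a content snippet.
    return resp.json().get("results", [])

for result in searx_query("example query")[:5]:
    print(result["title"], "->", result["url"])
```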
I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. “Notice how it never says where it is taking place? Notice how they never give any specific names?” Fortunately she eventually agrees with me but I feel like I’m teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.
Half the time the results I get from ChatGPT are pretty bad. If I ask for simple code it’s pretty good, but ask it how something works? Nope. All I need to do is slightly rephrase the question and I can get a totally different answer.
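It’s easy to check this yourself. Here’s a toy sketch of the rephrasing test, assuming the OpenAI Python SDK (v1) with an API key in OPENAI_API_KEY; the model name and the questions are just placeholders.

```python
# Send a few paraphrases of the same question and compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paraphrases = [
    "How does DNS resolution work?",
    "Explain what happens when a hostname is resolved to an IP address.",
    "Walk me through a DNS lookup step by step.",
]

for question in paraphrases:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, use whatever you have
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {resp.choices[0].message.content}\n{'-' * 40}")
```

If the model’s knowledge were stable, the three answers would agree on the substance; in practice they often don’t.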
I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away from it thinking that it’s some kind of super intelligence, or that it can be trusted as a means of gaining knowledge without external verification. Do they just not even consider the possibility that it might not be fully accurate, and never bother to test it? I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example, I don’t know if this is still the case, but if you ask ChatGPT questions about who wrote various books of the Bible, it will give not only the traditional view, but specifically the evangelical Christian view on most versions of these questions. This makes sense, because evangelical writers are extremely prolific, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark” when this view hasn’t been favored in academic biblical studies for over 100 years, however traditional it is. Similarly, asking it questions about early Islamic history gets you the religious views of Ash’ari Sunni Muslims, not the general scholarly consensus.
I mean, I’ve used AI to write my job-mandated end-of-year self-assessment report. I don’t care about it; it’s not like they’ll give me a pay rise, so I’m not putting effort into it.
The AI says I’ve led a project related to Windows 11 updates. I haven’t, but it looks accurate and no one else will be able to tell it’s fake.
So I guess the reason is that people are using the AI to talk about subjects they can’t fact-check themselves, so it looks accurate.
You just gotta know the material yourself so you can spot errors, and you gotta be very specific and take it one step at a time.
They’re really good.*
Personally, I think the term “AI” is an extreme misnomer. I’ve taken to calling ChatGPT “next-token prediction.” This notion that it’s intelligent is absurd. Like, is a dictionary good at words now???
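If “next-token prediction” sounds abstract, here’s the idea boiled down to a toy bigram model. Real LLMs use a neural network over long contexts instead of a lookup table, but the training objective is the same “guess the next word” game; this is an illustrative sketch, not how any production model is implemented.

```python
# Toy next-token predictor: a bigram model that only ever asks
# "given the last word, which word tends to come next?"
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample the next token in proportion to how often it followed.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```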
I don’t use LLMs often, but I haven’t had a single clear example of hallucination in six months now. I’m inclined to believe this recursive-calls stuff actually works.
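For reference, “recursive calls” here presumably means something like a self-check loop: get an answer, feed it back and ask the model to critique it, and repeat until nothing gets flagged. A rough sketch, assuming the OpenAI Python SDK (v1); the model name, prompts, and round limit are all arbitrary.

```python
# Self-check loop: answer, critique, revise, until the critique passes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
    answer = ask(question)
    for _ in range(max_rounds):
        critique = ask(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any factual errors in this answer, or reply OK if there are none."
        )
        if critique.strip().upper().startswith("OK"):
            break
        answer = ask(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Issues found: {critique}\nWrite a corrected answer."
        )
    return answer

print(answer_with_self_check("Who wrote the Gospel of Mark, per modern scholarship?"))
```

This doesn’t guarantee correctness, since the checker is the same fallible model, but it does catch a decent share of obvious slips.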