The article already notes that privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo.
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
It still creeps me out that people use LLMs as search engines nowadays.
That was the plan. That’s (I’m guessing) why the search results have slowly yet noticeably degraded since AI reached the consumer level.
They WANT you to use AI so they can cater the answers. (tin foil hat)
I really do believe that, though. Call me a conspiracy theorist, but damn it, it fits.
You mean Google.
All of them. I use DDG as my primary and even those results are worse.
In Google Search’s prime, DDG was still terrible and not a viable competitor even with the privacy advantage. Now both services are almost comparable, so it’s kind of a no-brainer to ditch Google.
And Bing, and search engines that use Google and Bing results (DDG, Ecosia).
They WANT you to use AI so they can ~~cater the answers~~ sell you ads and stop you from using the internet.
SEO has been fucking up searches since long before LLMs were a thing.
Thankfully Google is not the only search provider.
Most people don’t even know the difference between a URL bar and a search bar, or more precisely: most devices use a browser that deliberately obfuscates that difference.
When browsers overload the URL field to act as a search field, can you blame people for not knowing the difference? To users, it’s become a distinction without a difference.
They say that what’s tolerated is what’s encouraged. Browser software companies have encouraged people to be uninformed about the tool they are using. Easier to mess with them that way.
But they all suck, or rather the Internet kinda sucks these days. Google very much included in the sucking.
I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are, but when it comes to tech, they miss the forest for the trees, I think.
I use Kagi Assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
How often do you check the summaries? Real question: I’ve used similar tools, and their accuracy relative to what they’re citing has been hilariously bad. It’d be cool if there was a tool out there that was bucking the trend.
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy, though, because after every statement there are numbered annotations, and you can just mouse over them to read the text.
You can choose different models, and they differ in quality. The default one can be a bit hit-and-miss.
I can’t speak for the original poster, but I also use Kagi, and I sometimes use the AI assistant, mostly just for quick, simple questions to save time when I know most articles on the topic are gonna have a lot of filler, but it’s been reliable for other, more complex questions too. (I’d just rather not rely on it too heavily, since I know the cognitive debt effects of LLMs are quite real.)
It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG) so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.
I will say though, on more complex questions about very specific topics, such as a particular random programming library, or specific statistics you’d only find in a government PDF somewhere with an obscure name, it does tend to get it wrong. In my experience, it actually doesn’t hallucinate, as in, if you check the sources, the information will be there… just not actually answering that question. (e.g. if you ask it about a stat and it pulls up Reddit, but the stat is actually very obscure, it might accidentally pull a number from a comment about something entirely different from the stat you were looking for)
In my experience, DuckDuckGo’s assistant was extremely likely to do this, even on more well-known topics, at a much higher frequency. Same with Google’s Gemini summaries.
To be fair though, I think if you really, really use LLMs sparingly, with intention, and with an understanding of how well known the topic you’re searching for is, you can avoid most hallucinations.
@AmbitiousProcess @Warl0k3
I’ve used Kagi as my primary search engine for almost 2 years now, and it’s really good! I started using the Kagi assistant recently to explain complex concepts to me, and I like it. I love how it links me to sources. When I’m using an LLM tool, like Kagi’s assistant, I want to learn about the topic; I don’t use it for quick answers.
A lot of people are just against ‘AI’/LLMs, and I hate it too when it’s being shoved in my face. But consensual LLMs are just another tool that I utilize to learn about something.
I use Perplexity for my searches, and it really depends on how much I care about the subject. I heard a name and don’t know who they are? LLM summary is good enough to have an idea. Doing research or looking up technical info? I open the cited sources.
For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test. I use LLMs for some tasks, but I’ve yet to find one that’s very reliable for specifics.
What makes it creepy?
It just makes it ever more obvious to them how many people in their life are sheep who believe anything they read online, I assume? A false sense of confidence where one might have just said “I don’t know.”
So many people were already using TikTok or YouTube as Google Search. I think AI is arguably better than those.
edit: New business: take your ChatGPT question and turn it into a TikTok video. The Slop must go on.
The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, as it’s not actually capable of picking its own sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.
Genuinely, do you think the average person TikTok’ing their question is getting highly reputable sources? The average American has what, a 7th-grade reading level? I think the LLM might have a better idea at this point.
I prefer SearX.
Yeah, this is why polling is hard.
Online polls are much more likely to be answered by people who like to answer polls than by people who don’t. People who use DuckDuckGo are much more likely to be privacy-focused, knowledgeable enough to use a search engine other than the default, etc.
This is also an echo chamber (the Fediverse) discussing the results of a poll from another, similar echo chamber (DuckDuckGo). You won’t find nearly as many people on Lemmy or Mastodon who love AI as you will in most of the world. Still, I do get the impression that it’s a lot less popular than the AI companies want us to think.