The fascist social media influencers are already pushing generated bodycam and surveillance videos to stoke xenophobia, etc. A large enough mass of the population doesn't know what's real, and that's the goal.
This will sound far-fetched, but what would you think about holding internet-literacy talks with your (chronological-age) peers?
I wish they had broken it out by AI. The article states:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
But I don’t see that anywhere in the linked PDF of the “full results”.
This sort of study should also be re-done from time to time to track AI version numbers.
And they should also track which version of the models. Gemini 2.5 Flash is a completely different experience from 2.5 Pro.
There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias, even if only to the extent of “my people neutral, your people biased.” That's less true for LLMs. There's research showing that because LLMs present information authoritatively, people not only tend to trust them, but are actually less likely to check the sources the LLM provides than they would be with other ways of being presented with information.
And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.
I wonder if people trust ChatGPT more or less than an international celebrity who is also their best friend.
I hear a lot of people say “let's ask ChatGPT” like the AI is a god and knows everything 🙏. That's a big problem, to be honest.
Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”
Like the average American with an 8th grade reading comprehension.
Which is what they used for the training data.
So it’s about on par with humans, then.
Replace CEOs with AI.
Parrot is wrong almost half of the time. Who knew?
And then I wonder how frequently humans misinterpret the mistranslated news.
Humans do it often, but they don’t have billions of dollars funding their responses.
Worse: one third of adults actually believe the shit the AI produces.
Yet the LLM is what everyone seems to be pushing, because it will supposedly get better. Haven't we reached the limits of this model, and shouldn't other types of engines be tried?
“Misrepresent” is a vague term. Here's the actual graph from the study:

[graph from the study: share of responses with significant issues, broken out by category and assistant]
The main issue is the usual one… sources. AI is bad at sourcing without a proper pipeline. They note that Gemini is the worst, at 72%.
Note that they're not testing models with their own pipeline; they're testing other people's products. This is more indicative of the product design than of the actual models.
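For what it's worth, here's a minimal sketch (in Python, with made-up names like `Snippet` and `grounded_answer`) of what “a proper pipeline” for sourcing might look like: retrieve passages first, then only keep answer sentences that can be grounded in one of them. The word-overlap check is a toy stand-in for the real attribution/entailment checks a product would use.

```python
# Hypothetical sketch of a sourcing pipeline: ground model output in
# retrieved snippets instead of letting the model free-associate a citation.
from dataclasses import dataclass


@dataclass
class Snippet:
    source: str   # e.g. the article URL
    text: str     # the retrieved passage


def overlap_score(sentence: str, snippet: Snippet) -> float:
    """Crude word-overlap proxy for 'is this sentence supported?'."""
    s_words = set(sentence.lower().split())
    t_words = set(snippet.text.lower().split())
    if not s_words:
        return 0.0
    return len(s_words & t_words) / len(s_words)


def grounded_answer(model_sentences: list[str],
                    snippets: list[Snippet],
                    threshold: float = 0.5) -> list[str]:
    """Keep only sentences we can pin to a source, and attach that source."""
    if not snippets:
        return []
    kept = []
    for sentence in model_sentences:
        best = max(snippets, key=lambda sn: overlap_score(sentence, sn))
        if overlap_score(sentence, best) >= threshold:
            kept.append(f"{sentence} [{best.source}]")
        # Unsupported sentences are dropped rather than cited vaguely.
    return kept


if __name__ == "__main__":
    snippets = [Snippet("example.com/article",
                        "The council approved the budget on Tuesday.")]
    answer = ["The council approved the budget on Tuesday.",
              "Critics called the vote unprecedented."]  # not in any source
    print(grounded_answer(answer, snippets))
    # Only the first sentence survives, tagged with its source.
```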
This graph clearly shows that AI is also shockingly bad at factual accuracy, and at telling a news story in a way that lets someone who didn't already know about it understand the issues and context. I think you're misrepresenting this graph as being only about sources, but here's a better summary of the point you seem to be making:
AI’s summaries don’t match their source data.
So actually, the headline is pretty accurate in calling it misrepresentation.
Could be better, but still a huge step up from the hate rhetoric MAGAts get spoon-fed 24/7 from Fox & Friends.
So a lower percentage than readers and the mass media, then.