AI doesn’t hallucinate. “Hallucination” is a fancy marketing term for when an AI confidently produces erroneous output.
Tech billionaires would have a harder time getting masses of people who don’t understand the technology interested if they didn’t use words like “hallucinate.”
It’s a data center, not a psychiatric patient.
It’s also not intelligent; it’s a stochastic language model.
How can the AI be confident?
We anthropomorphize the behavior of these technologies to analogize their outputs to phenomena observed in humans. In many cases, the analogy helps people decide how to respond to the technology itself, and to that class of error.
Describing things in terms of “hallucinations” tells users that the output shouldn’t always be trusted, regardless of how “confident” the technology seems.
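The “stochastic language model” point above can be sketched in a few lines. This is a toy illustration, not any real model’s code: the vocabulary and scores are made up, and a real model works over thousands of tokens. It shows that the model’s apparent “confidence” is just the largest value in a probability distribution, and the output is sampled from that distribution rather than asserted as a belief.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores (invented for illustration).
vocab = ["Paris", "London", "Rome"]
logits = [4.0, 1.0, 0.5]
probs = softmax(logits)

# The model's "confidence" is just the highest probability here --
# a statistical artifact of the scores, with no notion of being right.
random.seed(0)
choice = random.choices(vocab, weights=probs)[0]
```

Whether the sampled token is factually correct is not represented anywhere in this process, which is why a high-probability output can still be wrong.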