Exactly. The whole point of these things is that they MUST provide you a solution. Any solution. It doesn't have to be accurate, it doesn't have to work, it can be completely made up, as long as it's a solution and as long as it's provided quickly. I've seen people feed stuff like "don't hallucinate" or "verify all of this online before proceeding" into their prompts, and it's not going to do any of that. It might TELL you it's doing that, but it won't.
Claude is notorious for guessing, not verifying, and providing the quickest possible solution. Unlike GPT, which will fluff all its solutions to essentially waste your time and eat up more tokens, Claude just wants your problem out the door so you can feed it another problem ASAP.
If you use Claude for anything in your daily work, you might as well have a Magic 8-Ball sitting on your desk. It's a hell of a lot cheaper and provides about the same quality.
I kind of like this, with some modification. It's a Magic 8-Ball of Stack Overflow answers. It'll try to find the one you need. If that one is too hard to find, or if it doesn't exist, it's just going to find the one that sounds good.
I love this idea. Oh shit, the load balancer isn't responding, time to shake the Magic Stack Overflow Ball™! The result is "signs point to power cycling the server".
Yeah, it gives you the answer it thinks you want based on your prompts.
I’d be interested to see what prompts they used to, uh, prompt this response.
I’m not attacking you but we really need to figure out how we use language to accurately describe what these programs are doing.
They are outputting a highly likely sequence of words, the kind of output that, in their training data, tends to follow input like yours.
They are fancy autocomplete.
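The "fancy autocomplete" point can be sketched with a toy bigram model: count which word follows which in some training text, then always emit a likely continuation. This is a deliberately simplified illustration (real LLMs are transformers over huge corpora, and the corpus here is made up), but the core loop is the same: given context, output a statistically likely next token, with no notion of whether it's true.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = "the server is down the server is slow the server is down".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("server"))  # "is" -- the only word that ever followed "server"
print(autocomplete("is"))      # "down" -- seen twice, vs. "slow" seen once
```

Note that `autocomplete("is")` returns "down" even when the server is actually slow; the model has no way to check, it only knows what was frequent. That's the whole critique in miniature.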
Oh, I know. My comment was more about how we tend to anthropomorphize this stuff and give these models traits they don’t possess.