I love how these models apologize like they mean it. It doesn’t mean it. It doesn’t feel bad, and it will do it again.
Apologies mean “I made a mistake and I learned from it so it won’t repeat.”
Sure, it claims it added more notes to its config, but if it ignored the rules before, what makes you think new rules are going to change anything?
Yeah, unfortunately plenty of humans don't know that either. But obviously LLMs don't understand anything. That's not how they work.
it is made to copy how humans write and speak
the AI was scored on how well it learned from humans to sound sorry
At best it might not make the same mistake again if that memory is in the current context. But more likely: It will not remember.
Although the latest Gemini in particular has much more room for "remembering" things. Still.
But “I made a mistake”? It is not self-aware in any way shape or form to the degree where “I made a mistake” carries any real meaning.
But… but… it generates text that seems like a human wrote it!
Therefore it must be a human!
… A whole lot of humans are failing a reverse turing test, just, fundamentally.
If anything, its context now includes the fact that it makes mistakes, plus details about them. The most likely output is to make the same mistakes again.