So this is important to remember about large language models: they're only ever working on "what would an answer to this question sound like?"
It certainly doesn't know it's "wrong"; it has no idea what right or wrong is. When you ask it to correct itself, it just goes back to its programming: what would an answer to this sound like? That's all it'll ever come up with.
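To make the point concrete, here's a toy sketch (not how any real LLM is built, just the idea scaled way down): a bigram model that only learns "which word tends to follow which." The corpus, the `continue_text` helper, and all the names here are invented for illustration. Notice there's nothing in it that could check whether a continuation is *true*, only whether it's *plausible*.

```python
import random

# Tiny "training corpus" -- just word sequences, no facts.
corpus = ("paris is the capital of france "
          "berlin is the capital of germany").split()

# Learn which words tend to follow which (a bigram table).
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def continue_text(word, length=5, seed=0):
    """Extend a prompt by repeatedly picking a plausible next word.

    There is no notion of correctness here -- only "what tends
    to come next." Asking it to 'try again' just re-samples.
    """
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(continue_text("paris"))
```

Run it twice with different seeds and you may get "capital of france" one time and "capital of germany" the next: both sound like answers, and the model has no way to prefer the true one.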