A rundown of the paper is here.
From the website, and the paper:
A new paper from China contends that LLMs secretly know when they cannot answer a question posed by the user, yet most of the time they feel compelled to produce some kind of answer anyway, rather than having the confidence to decide that no valid answer is available, whether because the user supplied too little information, because of the model's own limitations, or for other reasons.
The paper states:
‘[We] show that [LLMs] possess sufficient cognitive capabilities to recognize the flaws in these questions. However, they fail to exhibit appropriate abstention behavior, revealing a misalignment between their internal cognition and external response.’
The paper acknowledges some flaws in its research methodology, of course. But I like the idea of AI blithely expressing the same kind of idiot confidence many in the tech industry spout about their innovations when, deep down, they know they're blowing a lot of marketing smoke and hoping the technology can eventually catch up to the dream. (I'm looking at you, Juicero and Theranos, among many, many others.)
I don't fault them for dreaming big. But there is fault in refusing to say "I don't know," or "I don't have enough information to give you a complete answer." That kind of talk doesn't get venture capital funding.