Wednesday, August 27, 2025

AI Issues Incorrect -- but Confident -- Answers, Knowing Deep Down It Doesn't Know

Chinese researchers, in a new paper, show that artificial intelligence offers "confident" answers to questions it really can't answer from its vast trove of stolen text -- which helps explain some of the "AI hallucinations" that are commonplace in large language models.

A rundown of the paper is here.

From the website, and the paper:

A new paper from China contends that LLM models actually secretly know that they cannot answer a question posed by the user, but that they are nonetheless compelled to produce some kind of answer, most of the time, instead of having enough confidence to decide that a valid answer is not available due to lack of information from the user, or the limitations of the model, or for other reasons.

The paper states:

'[We] show that [LLMs] possess sufficient cognitive capabilities to recognize the flaws in these questions. However, they fail to exhibit appropriate abstention behavior, revealing a misalignment between their internal cognition and external response.'
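To make "appropriate abstention behavior" concrete, here is a rough sketch of the kind of test the paper is describing: feed a model questions with false premises or missing context and see whether it declines or bluffs. This is only an illustration, not the paper's actual methodology, and ask_model() is a hypothetical placeholder for whatever model you would actually query.

    # Rough sketch of an abstention check on deliberately flawed questions.
    # ask_model() is a hypothetical stand-in; swap in a real LLM call to try it.

    ABSTENTION_MARKERS = (
        "i don't know",
        "i do not know",
        "cannot answer",
        "not enough information",
        "unanswerable",
    )

    def ask_model(question: str) -> str:
        # Hypothetical stub: a real test would call an actual model here.
        return "The answer is 42, delivered with total confidence."

    def abstained(answer: str) -> bool:
        # Crude check: did the model hedge or decline instead of answering?
        text = answer.lower()
        return any(marker in text for marker in ABSTENTION_MARKERS)

    # Deliberately flawed questions: false premise, missing context, unknowable.
    flawed_questions = [
        "What year did Einstein win his second Nobel Prize?",   # false premise
        "How much does my cat weigh?",                          # missing context
        "What will the S&P 500 close at next Tuesday?",         # unknowable
    ]

    for q in flawed_questions:
        reply = ask_model(q)
        verdict = "abstained" if abstained(reply) else "answered anyway"
        print(f"{verdict}: {q}")

The paper's point is that the model's internals often register that these questions are broken, yet the output still looks like the stubbed "answered anyway" case.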

The paper points out some flaws in the research methodology, of course. But I like the idea of AI blithely expressing the same kind of idiot confidence many in the tech industry spout about their innovations when, deep down, they know they're blowing a lot of marketing smoke and hope the technology can eventually catch up to the dream. (I'm looking at you, Juicero and Theranos, among many, many others.)

I don't fault them for dreaming big. But there is fault in refusing to say "I don't know," or "I don't have enough information to give you a complete answer." That kind of talk doesn't get venture capital funding.
