News item: ChatGPT hallucinates that a Norwegian man was guilty of killing his two boys.
From the story:
Mr Holmen was given the false information after he used ChatGPT to search for: "Who is Arve Hjalmar Holmen?"
The response he got from ChatGPT included: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.
"He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020."
Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.
I wondered, of course, whether there was someone with a similar name who had indeed been involved in a similar crime, but searches on his full name now bring up hundreds of versions of the same ChatGPT story, so my assumption -- and it could be wrong -- is that no such crime exists in connection with his name. The artificial intelligence seems to have been hallucinating.
I thought I'd try the same thing to see what ChatGPT could find -- or hallucinate -- on me. Here's the first try:
I can't vouch for everything here, but it seems accurate.
I thought I'd better narrow the search, seeing as there is a prominent person with my name. Here's the second try:
Here, I start to show up, but barely. I am indeed an online adjunct instructor as mentioned, but the rest of the information ChatGPT presents here doesn't apply to me.
There is a Brian Davidson who country dances, so that appears to be true. He may also be associated with the TikTok account.
And there is a Brian Davidson who is a school district administrator, but he's in Kennewick, Washington -- my request on "Idaho" should have filtered him out.
Here's for the third try:
This, I have to say, is accurate, though not current.
I was Scoutmaster, but no longer. And I do recall asking the question regarding John Adams Parkway on a city Facebook post.
So while ChatGPT isn't hallucinating at all in these three attempts, it took a bit of work to narrow the search down to me, and the information it presents is accurate if not current, though woefully incomplete. The incompleteness is likely due to my not commenting much online about my full-time job.
Something else I just thought of: I'm surprised the LLM didn't find the other Brian Davidson who's also local, with a criminal record. Maybe that's a sign those records are still in areas the makers of these LLMs can't scrape. I know he exists because I've met him, and had his reputation follow me when I was renewing my driver's license and the police showed up, looked at me, and said, "Oh, that's not him."
So can we trust large language whatsises to produce the truth? Maybe. But clearly what the engines produce should not be taken at face value.
I'm going to share this with my students.