College of Computing News

AI Hallucinations Can’t Be Stopped — but These Techniques Can Limit Their Damage


It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. “They sound like politicians,” says Santosh Vempala, a theoretical computer scientist at the Georgia Institute of Technology in Atlanta. They tend to “make up stuff and be totally confident no matter what.”

Read the complete article published in Nature (https://www.nature.com/articles/d41586-025-00068-5) to explore how Vempala and others are working on ways to make hallucinations less frequent and less problematic.