
Lab Spotlight: Exploring What AI Can Teach Us About Human Cognition

Vijay Marupudi sees artificial intelligence (AI) as a gateway to better understanding the human mind. 

But rather than exploring whether AI can think like a human, he’s more interested in what AI can tell scientists about human cognition.

That’s one reason he became a Ph.D. student in human-centered computing after obtaining his bachelor’s in psychology and neuroscience.

“One of the things I find interesting is when we have a model that demonstrates behavior that we consider to be intelligent, it’s the first time in history we can compare a non-human form of intelligence to a human one,” Marupudi said. “What is the nature of that intelligence? Is it like humans? If so, what does that tell us about humans?”

Marupudi is a member of the Cognitive Architecture Lab at Georgia Tech. Professor Sashank Varma of the School of Interactive Computing directs the lab. He and his students focus part of their research on the cognitive similarities between humans and AI language and vision models.

“We’re interested in understanding human cognition from a computational perspective,” Varma said.

“We have projects about how people understand language, understand computational notions, understand concepts, and whether they learn or fail to learn. We ask whether computational models can help understand these processes.”

The most interesting pattern Varma and his students continue to find is that machine learning researchers design models to achieve state-of-the-art performance without intending to mimic human cognition, yet the models end up behaving in human-like ways anyway.

“We look at a lot of these models post hoc, and there’s no evidence from corresponding papers they were designed to be cognitively aligned,” Marupudi said. “But they happen to be, so that’s impressive.

“We’ve had papers that have looked at how large language models (LLMs) understand numbers,” Marupudi said. “The model showed an effect, a logarithmic curve, which isn’t necessary to understand numbers. Humans also show that same pattern. Why is that the case? Is there a specific advantage for all intelligence to represent numbers this way?”
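The kind of analysis Marupudi describes can be illustrated with a minimal, hypothetical sketch: take a model's representations of the numbers 1 through 100, reduce them to a one-dimensional "number line," and ask whether that line tracks n or log(n) more closely. The `get_embedding` function below is a placeholder for however one would extract a number's vector from a real language model; it is not the lab's code, and the random embeddings are stand-ins purely to make the sketch runnable.

```python
# Hypothetical sketch: testing whether a model's number embeddings are
# compressed logarithmically, as human number representations appear to be.
import numpy as np

def get_embedding(n: int) -> np.ndarray:
    """Placeholder: return the model's embedding for the number n."""
    rng = np.random.default_rng(n)          # stand-in for a real model call
    return rng.normal(size=64)

numbers = np.arange(1, 101)
embeddings = np.stack([get_embedding(n) for n in numbers])

# Project embeddings onto their first principal component and ask whether
# that 1-D "mental number line" correlates better with n or with log(n).
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
line = centered @ vt[0]

corr_linear = abs(np.corrcoef(line, numbers)[0, 1])
corr_log = abs(np.corrcoef(line, np.log(numbers))[0, 1])
print(f"|r| with n: {corr_linear:.2f}, |r| with log(n): {corr_log:.2f}")
# A stronger correlation with log(n) would suggest a compressive,
# human-like representation of numerical magnitude.
```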

Varma’s lab shone this summer at the Annual Meeting of the Cognitive Science Society (CogSci) in Rotterdam, the Netherlands. The lab presented six papers on topics ranging from language to mathematics to conceptual reasoning, the most Varma said his lab has ever had accepted at a single conference.

The papers demonstrate the range of perspectives the lab uses to approach cognitive alignment between artificial and human intelligence.

Professor Sashank Varma points to a screen as he discusses a presentation with students Avni Madhwesh (right), Vijay Marupudi, and Sarah Mathew from his Cognitive Architecture Lab at the Technology Square Research Building. Photos by Terence Rushin/College of Computing.

“We’re also working in the opposite direction, taking principles of learning discovered by cognitive scientists and neuroscientists and asking if we can use these to enhance the training and effectiveness of ML models,” Varma said. “How do you get models to learn a sequence of tasks without forgetting what they learned? Models often have difficulty with this. There are various insights from how the human brain functions that can be imported into ML research.”
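The difficulty Varma mentions is usually called catastrophic forgetting, and one widely used, brain-inspired remedy is rehearsal: replaying a small buffer of examples from earlier tasks while training on new ones. The toy sketch below illustrates that idea only; the model and tasks are placeholders, not the lab's experiments or methods.

```python
# Minimal sketch of rehearsal (experience replay) for learning tasks in
# sequence without completely forgetting earlier ones.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def make_task(seed: int):
    """Toy task: random inputs labeled by a random linear rule."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(256, 10, generator=g)
    w = torch.randn(10, generator=g)
    y = (x @ w > 0).long()
    return x, y

memory = []  # small buffer of examples from earlier tasks ("rehearsal")
for task_id in range(3):
    x, y = make_task(task_id)
    for step in range(200):
        xb, yb = x, y
        if memory:  # interleave replayed old examples with new-task data
            mx, my = memory[torch.randint(len(memory), (1,)).item()]
            xb, yb = torch.cat([xb, mx]), torch.cat([yb, my])
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    # keep a small sample of this task for future rehearsal
    memory.append((x[:32], y[:32]))
```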

Emergent Cognitive Abilities

Another phenomenon the lab studies is how models can perform new functions as their training data increases. Raj Shah, a computer science Ph.D. student advised by Varma, said there are tasks that smaller models cannot perform, but new abilities emerge when more training data is added.

“When we scale up from smaller models to something like Llama-3, which has over 400 billion parameters, these tasks become possible,” Shah said. “We call those tasks emergent abilities. We try to build different-sized models to pinpoint the exact spot where these abilities become visible.”

Shah said another interesting feature is that models tend to learn new tasks without being trained to do them. 

“They are learning it as a consequence of the training,” he said. “We can see elements of cognitive intelligence in language models as a function of scaling model sizes.” 
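In practice, pinpointing where an ability emerges can look something like the hypothetical sketch below: evaluate the same task across a family of model sizes and record the smallest scale whose accuracy clears a threshold. The `evaluate_task` function stands in for running a real benchmark, and the numbers are purely illustrative, not results from the lab.

```python
# Hypothetical sketch of locating an "emergence point" across model scales.
def evaluate_task(num_params: float) -> float:
    """Placeholder: return task accuracy for a model of the given size."""
    return 0.95 if num_params >= 70e9 else 0.12   # toy step-like behavior

model_sizes = [1e9, 8e9, 70e9, 405e9]   # parameter counts, e.g. a Llama-style family
threshold = 0.5                          # accuracy well above chance

emergence_point = next(
    (size for size in model_sizes if evaluate_task(size) >= threshold), None
)
print(f"Ability emerges at ~{emergence_point:.0e} parameters"
      if emergence_point else "Ability not yet emergent at these scales")
```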

Cognitive Development vs. Cognitive Alignment

Varma is exploring a new perspective on cognitive alignment that he said will be a key focus of his lab’s research for the next five years. 

“As models see more data and learn and improve, do their performance improvements match the developmental progressions observed in children’s thinking? That’s going beyond cognitive alignment to developmental alignment,” he said.

Varma thinks the lab’s work could inform human understanding of developmental disabilities like dyscalculia. 

“Once we have a model of typical number development in children, we can perturb the model to produce the patterns shown by children with mathematical learning disabilities,” Varma said. “The successful perturbations then become candidate mechanisms for developmental scientists to investigate.

“Our lab is uniquely positioned between developmental science and AI, and we ask these questions about whether models acquire competence the way humans do through development.”