Future of AI and Policy Among Key Topics at Inaugural IC Summit

The future of artificial intelligence (AI) was spotlighted last week as more than 120 academic and industry researchers participated in the Georgia Tech School of Interactive Computing’s (IC) inaugural Summit on Responsible Computing, AI, and Society.

With looming questions about AI's growing roles and consequences in nearly every facet of modern life, School of IC organizers felt the time was right to diverge from traditional conferences that focus on past work and published research.

“Presenting papers is about disseminating work that has already been completed. Who gets to be in the room is determined by whose paper gets accepted,” said Mark Riedl, School of IC professor.

“Instead, we wanted the summit talks to speculate on future directions and what challenges we as a community should be thinking about going forward.”

The two-day summit, held at Tech’s Global Learning Center, convened to discuss consequential questions like:

  • Is society ready to accept more responsibility as greater advancements in technologies like AI are made?
  • Should society stop to think about potential consequences before these advancements are implemented on its behalf, and what could those consequences be?
  • What policies should be enacted for these technologies to mitigate harms and augment societal benefits?

A highlight of the summit’s opening day was Meredith Ringel Morris's keynote address. As director of human-AI interaction research at Google DeepMind, she presented a possible future in which humans could use AI to create a digital afterlife.

Main photo: Meredith Ringel Morris, director of human-AI interaction research at Google DeepMind, gives the keynote talk Oct. 29 at the School of Interactive Computing's Summit on Responsible Computing, AI, and Society. Above: Christopher MacLellan, assistant professor in the School of IC, discusses AI with Jeff Bigham, a professor in the Carnegie Mellon University School of Computer Science, and Ari Schlesinger, assistant professor in the University of Georgia School of Computing. Below: Josiah Hester (left), associate professor in the School of IC, and Cindy Lin, assistant professor in the School of IC, discuss AI's future impact on sustainability. Photos by Terence Rushin/College of Computing.

In her remarks, Morris discussed AI clones — AI avatars of specific human beings with high autonomy and task-performing capabilities. Someone could leave such an agent behind as a memento for loved ones after their death, and future generations could access it to learn more about an ancestor.

On the other hand, such clones could prolong grief for loved ones by making it harder to move on from the loss of a family member.

These AI capabilities are in development and will soon be publicly available. As industry and academic researchers continue to develop them, the public needs to learn about their imminent impacts.

“There’s a lot that needs to be done in educating people,” Morris said. “It’s hard for well-intentioned and thoughtful system designers to anticipate all the harm. We must be prepared that some people are going to use AI in ways we don’t anticipate, and some of those ways are going to be undesirable. What legal and education structures can we create that will help?”

In addition to Morris’s keynote, the summit’s first day included 20 talks about future and emerging technologies in AI, sustainability, healthcare, and other fields. 

The second day featured eight talks on translating interventions and safeguards into policy.

Day-two speakers included: 

  • Orly Lobel, Warren Distinguished Professor of Law and director of the Center for Employment and Labor Policy at the University of San Diego. Lobel worked on President Obama’s policy team on innovation and labor market competition, and she advises the Federal Trade Commission (FTC). 
  • Sorelle Friedler, Shibulal Family Professor of Computer Science at Haverford College. She worked in the Office of Science and Technology Policy (OSTP) under the Biden-Harris Administration and helped draft the AI Bill of Rights. 
  • Jake Metcalf, researcher and program director for AI on the Ground at the think tank Data & Society. The organization produces reports on data science and equity for the US Government. 
  • Divyansh Kaushik, vice president of Beacon Global Strategies. Kaushik has testified before the U.S. Senate on AI research and development.

Kaushik earned a Ph.D. in machine learning from Carnegie Mellon University before beginning his career in policy. He highlighted the importance of policymakers fostering relationships with academic researchers.

“Policymakers think about what could go wrong,” Kaushik said. “Academia can offer evidence-based answers.”

The summit also hosted a doctoral consortium, which allowed advanced Ph.D. students to present their research to experts and receive feedback and mentoring.

“Being an interdisciplinary researcher is challenging,” said Shaowen Bardzell, School of IC chair.

“We wanted the next generation to be in the room listening to the experts share their visions and also to provide our own experiences when possible on how to navigate the challenges and rewards of doing work at the intersection of AI, healthcare, sustainability, and policy.”