CHI 2023: Examining Boundaries of AI 'Sensing' to Understand Office Workers’ Performance, Wellbeing
New research findings show that social acceptability and selective sharing of AI results in the workplace are key to future implementation
Commercial monitoring tools are being introduced in offices alongside newer modes of work – video meetings, remote collaboration, digital-first workflows – as a way for employers to better understand the performance of their workforces.
Researchers at Georgia Tech and Northeastern University conducted a study with information workers to learn their perspectives on being monitored and having their information collected through passive-sensing-enabled artificial intelligence (PSAI), in which computing devices unobtrusively detect and record user behaviors. That information can then be used to train machine learning models that infer worker performance and wellbeing.
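For readers unfamiliar with the approach, the sketch below illustrates in broad strokes what a PSAI-style pipeline could look like: passively sensed signals are assembled into features and fed to a model that predicts a wellbeing score. Every feature name, the synthetic data, and the model choice here are hypothetical illustrations, not the instruments or methods used in the study.

```python
# Illustrative sketch only: a hypothetical passive-sensing AI (PSAI) pipeline.
# Feature names and synthetic data are invented for illustration; they are not
# the features, models, or code used in the CHI 2023 study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_workers = 200

# Hypothetical passively sensed signals, one row per worker.
features = np.column_stack([
    rng.normal(6.5, 1.0, n_workers),   # avg_sleep_hours (e.g., wearable)
    rng.normal(5.0, 2.0, n_workers),   # daily_meeting_hours (calendar)
    rng.normal(80, 30, n_workers),     # keyboard_active_minutes (device logs)
    rng.normal(40, 15, n_workers),     # commute_minutes (phone location)
])

# Synthetic "self-reported wellbeing" target, purely for demonstration.
wellbeing = (
    0.6 * features[:, 0]           # more sleep -> higher wellbeing
    - 0.3 * features[:, 1]         # more meetings -> lower wellbeing
    + rng.normal(0, 1, n_workers)  # unexplained variation
)

X_train, X_test, y_train, y_test = train_test_split(
    features, wellbeing, test_size=0.25, random_state=0
)

# A generic regressor stands in for whatever inference model a PSAI tool might use.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"MAE on held-out workers: {mean_absolute_error(y_test, preds):.2f}")
```

The questions the study raises sit around exactly this kind of pipeline: which of those signals are acceptable to collect in the first place, and who gets to see the predictions.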
“We wanted to take a closer look at how workers perceive passive-sensing AI in order to make this technology work for the workers, as opposed to making them work for the technology,” said Vedant Das Swain, lead researcher and a Ph.D. candidate in computer science at Georgia Tech.
He says there is an organizational need, for employer and employee alike, to get better insights.
“One of the underlying subtexts of the research is that there are these asymmetries at work because the employee doesn’t have as much power as the employer. And if these technologies keep progressing as they are, this gap is going to widen because the employer will just keep getting more and more worker information.”
Researchers found that some technologies used for personal activities – fitness trackers and webcams, for example – may not translate well to work life if they are implemented without considering new norms of work. Technologies can now “breach physical boundaries,” as Das Swain puts it, and using a webcam for work while at home might require extra setup, such as closing doors and blurring on-screen backgrounds. Workers also want careful consideration of the context in which devices can gain information.
Monitoring worker activity on work devices is appropriate in many cases, but work-related apps on personal devices might be a tougher sell.
The research results fall in two primary categories:
- Appropriateness – Understanding which data are socially acceptable for passive-sensing AI to collect, and under which circumstances it is acceptable to infer worker performance and wellbeing.
- Distribution – Determining what worker data to share with other stakeholders, when to share it, and through which methods.
On the question of appropriateness, Das Swain says that people generally don’t want to feel dehumanized by algorithms. His team’s work takes that idea further by examining the mental models different workers use to decide when using PSAI is appropriate.
“Different workers have different ideas of what’s insightful,” he said. “For example, if I don’t talk to my supervisor about my personal life, why should this machine be sensing that type of information? The alternative viewpoint is that I already know what I’m doing at work, so give me more data. I could use sleep and commute data to infer how those activities might affect my work.”
Das Swain says there is no one-size-fits-all solution.
“And it’s not just about privacy, it’s about utility,” he said. “People find utility in different things. Some want more precise information in a work context, and some might want the holistic view of the data, in both cases to find insights for themselves.”
The second category of results – distribution – is no less tricky. Worker information is personal in nature, but collaboration and performance measurement at work necessitate sharing some of that information.
The researchers found that participants strongly felt that if a machine predicted something related to performance or wellbeing, they should first have time to make changes and provide context, such as a worker on paternity leave who needs to adjust project deadlines.
“Only at a later point, if at all, can the data be escalated to someone else to help as the situation requires,” said Das Swain. “That was very clear in the study.”
One red flag, so to speak, for Das Swain as a researcher is that these technologies don’t give users any control over, or insight into, the newer types of personal data being collected and stored at work.
With algorithmic uncertainty now at the forefront of many conversations, Das Swain views these results from the Georgia Tech and Northeastern group as tangible guideposts for regulators and companies making decisions around public and commercial deployment of AI sensing tech for information workers.
The published results will be presented at the ACM CHI Conference on Human Factors in Computing Systems, taking place April 23–28 in Hamburg, Germany. The academic paper, “Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing,” is co-authored by Das Swain, Lan Gao, William Wood, Srikruthi C. Matli, Gregory Abowd, and Munmun De Choudhury. The work is funded in part by Cisco.