Student aims to reduce bias in automated surveillance
Saranya Dadi presents at RIT’s Undergraduate Research Symposium
Saranya Dadi, a second-year computer science student at RIT, is conducting research to make machine learning for automated surveillance systems fairer. She's not just focused on making these systems faster or more efficient; she also wants to make sure machine learning is ethical.
“Just because a computer doesn’t have feelings and emotions, doesn’t mean there’s no bias,” said Dadi, who is originally from India. “It is crucial that these systems ensure effective performance and accurately safeguard the interests of organizations, without compromising individual privacy.”
As Dadi points out, computers are increasingly being used to make decisions that affect human lives—from determining who receives a loan to how patients are treated. These machine learning algorithms are trained on massive data sets that may not appear biased or racist on the surface, but can reflect longstanding social, cultural, and institutional biases.
RIT will showcase a variety of research projects undertaken by students and faculty mentors over the 2020-21 academic year during the annual Undergraduate Research Symposium, held virtually July 29 through Aug. 4.
For example, as part of her research literature survey, Dadi reviewed a proposed automated surveillance system that identifies people by body shape. However, that system was trained on body shape data gathered only from people in Western nations.
“The biggest issue is considering accountability and autonomy when dealing with machine learning systems, as well as bias in the data that is used to train them,” said Dadi. “We, as human beings, need to be in control of these powerful systems.”
Dadi is presenting the preliminary results of her work at the symposium, which features presentations from dozens of student researchers.
For the project, she has been conducting a literature review to better understand the current landscape of machine learning ethics and how it's being applied in automated surveillance systems around the world. She is investigating the ethical ramifications of biased results, examining applications of automated surveillance and their implications for privacy.
She is also recommending possible solutions. Based on her readings and ethical codes for computing professionals, she has begun to explore a six-step framework aimed at reducing bias in machine learning systems. During her time at RIT, Dadi said she looks forward to expanding on her research and taking courses in data management, data mining, and AI.
Dadi became interested in this research area after recent movements against racial injustice and after taking an ethical perspective course at RIT that examined a case in which an insurance company was unintentionally using a healthcare algorithm that favored white patients over Black patients. As someone who has lived in many different areas around the world and has always had an interest in international relations (even completing an international relations immersion at RIT), she began thinking about her major in a new way.
“I wondered what my role as a computer science student is and how I can make the world a little better with what I do,” said Dadi. “I want to make sure computer scientists understand the impact we have and consider ethics and who our work affects.”
Dadi is advised by Rajendra Raj, professor of computer science.