Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

Professor Evan Selinger receives honors for research on ethical development and implementation of AI

Carlos Ortiz

Evan Selinger, professor in RIT’s Department of Philosophy, is an expert in the ethics of artificial intelligence.

Many conversations about artificial intelligence (AI) center on how the technology will impact the future and shape our world. In addition to weighing possible future benefits, it’s crucial for experts to consider regulatory policy that can help mitigate the potential negative legal, ethical, and social impacts AI can have on the people who create and use it.

Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled. Through a humanities lens, Selinger asks two questions: How can AI cause harm, and what can governments and the companies creating AI programs do to address and manage that harm? Answering them, he explained, requires an interdisciplinary approach.

“AI ethics goes beyond technical fixes. Philosophers and other humanities experts are uniquely skilled to address the nuanced principles, value conflicts, and power dynamics. These skills aren’t just crucial for addressing current issues. We desperately need them to foster anticipatory governance,” said Selinger.

One example that illustrates how philosophy and humanities experts can help guide these new, rapidly growing technologies is Selinger’s work collaborating with the Institute for Defense Analyses on its Defense Advanced Research Projects Agency-funded AI projects.

“One of the skills I bring to the table is identifying core ethical issues in emerging technologies that haven’t been built or used by the public. Once we know what could go wrong, we can take preventative steps to limit risk, including changing how the technology is designed,” said Selinger.

Taking these preventative steps and regularly reassessing which risks need addressing is part of the ongoing pursuit of responsible AI. Selinger explains that there is no step-by-step formula for good governance; in fact, there is a lively debate among experts over how to even define what responsible AI entails.

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues of developing AI, such as the human toll of testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a forthcoming book from MIT Press, Selinger and his co-authors, Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, offer concrete suggestions for conducting responsible AI audits while also considering civil liberties objections.

In February, Selinger, Leong, and Fox Cahn were honored for this work at the Future of Privacy Forum’s 14th Annual Privacy Papers for Policymakers Awards. The group was also selected to present its ideas to representatives of the United States Federal Trade Commission. In both cases, the authors spoke directly to policymakers who can shape future laws governing AI.

At RIT, Selinger is making sure his students are informed about the ongoing industry conversations on AI ethics and responsible AI. In the fall, he’ll be teaching a course called AI Ethics. It will give students a deeper understanding of what AI ethics is about, why it matters, and what real-world constraints can get in the way of pursuing it.

“RIT students are going to be future tech leaders. Now is the time to help them think about what goals their companies should have and the costs of minimizing ethical concerns. Beyond social costs, downplaying ethics can negatively impact corporate culture and recruitment,” said Selinger. “To attract top talent, you need to consider whether your company aligns with their interests and hopes for the future.”

To see more articles by and about Selinger and his scholarship relating to AI, go to Selinger’s website.
