Lucia Rafanelli publishes article on how AI can perpetuate injustice


March 14, 2022


In her latest article in SAGE Journals, “Justice, injustice, and artificial intelligence: Lessons from political theory and philosophy,” Lucia Rafanelli, assistant professor of political science and international affairs, explores the vocabularies and frameworks that political scientists and philosophers have developed around the concept of justice and their usefulness for understanding artificial intelligence.

Using political theory and philosophy as her foundation, Rafanelli illustrates the concepts of institutional discrimination, structural injustice, and epistemic injustice, showing how artificial intelligence can perpetuate injustice through real-world examples: a biased AI resume screener, an AI photo sorter, and AI facial recognition software. Because all artificial intelligence is “written by humans and trained on human data,” Rafanelli writes that, contrary to popular belief, artificial intelligence is an extension of human power rather than a replacement for it. Its use is “far from representing a decision to take power out of human hands” and raises questions about the role justice should play in that use. As a result, she posits that “it is our responsibility as consumers, programmers, and researchers to ensure these questions don't go unanswered.”

To view more of Professor Rafanelli’s research on justice, read the full SAGE Journals article or listen to the podcast on her most recent book, Promoting Justice Across Borders: The Ethics of Reform Intervention.