Episode 77 of The Hearing is now available. We, along with the rest of the legal industry, have talked at length about the impact of artificial intelligence (AI) on our profession. This time, however, our concern is not the efficiencies of machine learning and automation, but the potential for injustice and the need for vigilance in the face of seismic developments.
AI is already being used extensively in areas like recruitment, policing and the courts, often on the assumption that machines, unlike humans, are objective and neutral. But, as this episode discusses, computer algorithms can easily perpetuate and amplify human biases.
Becky Annison and our three guests, each of whom is working to understand and fight algorithmic injustice, discuss the causes as well as the cumulative and damaging effects of coded bias. The computer scientist and digital activist Joy Buolamwini describes this phenomenon as the "exclusion overhead": the cost of systems that exclude and discriminate against those with experiential differences, the people left outside the code-writing room.
Contributors
- Sandra Wachter, Associate Professor & Senior Research Fellow, Oxford Internet Institute, University of Oxford.
- Kristian Lum, Research Assistant Professor, Department of Computer & Information Science, University of Pennsylvania.
- Ivana Bartoletti, Technical Director – Privacy, Deloitte; Visiting Policy Fellow, University of Oxford.