
AI Bias

Explore the ways that bias can occur in AI systems.

Introduction

The emergence of AI has brought many ethical issues to the forefront. Researchers in the humanities, social sciences, sciences, and engineering have focused their attention on issues such as bias in AI systems, copyright and intellectual property concerns, environmental impacts, labor market effects, and the application of AI systems in warfare, criminal justice, employment decisions, and other contexts that involve the potential for state violence or extreme imbalances of power.

The ethical stakes of how we choose to implement AI are high: can we imagine a future where AI systems serve as tools to build a better world and support healing from systemic injustice, or are we doomed to create systems that reproduce or even amplify existing inequalities and biases?

This tutorial builds on peer-reviewed research on bias in AI systems. It gives you a chance to try out the same methods researchers use to detect and quantify bias, and to design your own tests and inquiries.

AI Bias

Dr. Joy Buolamwini is an engineer, artist, and researcher whose work is rooted in AI facial recognition. While building facial recognition systems, Dr. Buolamwini found that she could not test her own systems unless she donned a white mask. This video explains how that experience offers a profound insight into the ways bias can be reflected in AI systems.

Buolamwini’s work uncovered significant racial and gender bias in commercial facial recognition systems. She found that these systems are less accurate for people with darker skin tones, especially darker-skinned women, than for lighter-skinned individuals, and that these disparities stem from the unrepresentative and biased training data used to create those systems.
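To see how a researcher can quantify this kind of disparity, here is a minimal sketch of a disaggregated accuracy audit in Python: instead of reporting one overall accuracy number, you compute accuracy separately for each demographic group and compare. The rows and group labels below are hypothetical placeholders, not Buolamwini's data or code.

```python
# A minimal sketch of a disaggregated accuracy audit, in the spirit of the
# evaluations used in this research (not Buolamwini's actual code). Every
# row below is a hypothetical placeholder; a real audit would use one row
# per face in a labeled benchmark dataset.
from collections import defaultdict

# Each record: (predicted gender, true gender, demographic group)
results = [
    ("male",   "male",   "lighter-skinned men"),
    ("female", "female", "lighter-skinned women"),
    ("female", "female", "darker-skinned women"),
    ("male",   "female", "darker-skinned women"),  # a misclassification
]

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for predicted, actual, group in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.0%} accurate")
```

Comparing the per-group numbers, rather than a single overall score, is what makes disparities like the ones Buolamwini reported visible at all.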

Facial recognition systems use training data to “learn” what a human face looks like. If the faces in the training set don’t look like you, the facial recognition system may have trouble recognizing your face.

This problem is not limited to facial recognition: all AI models rely on training data during development, a process known as machine learning.
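To make the connection between training data and model behavior concrete, the following toy sketch trains a simple classifier on data where one group supplies 95% of the examples. It assumes Python with numpy and scikit-learn installed; the group names, sample counts, and point distributions are invented for illustration, and the points are synthetic rather than real face data.

```python
# A toy sketch of machine learning on unrepresentative training data.
# Assumptions: numpy and scikit-learn are installed; the groups, shifts,
# and sample counts are invented, and the points are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two classes of 2-D points; `shift` moves this group's distribution."""
    X0 = rng.normal(loc=[0 + shift, 0], scale=0.5, size=(n, 2))  # class 0
    X1 = rng.normal(loc=[1 + shift, 1], scale=0.5, size=(n, 2))  # class 1
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Training set: group A supplies 95% of the examples, group B only 5%.
Xa, ya = make_group(950, shift=0.0)  # well-represented group
Xb, yb = make_group(50, shift=2.0)   # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Run as written, the model tends to score far better on group A than on group B, because the decision boundary it learns is fit almost entirely to group A's data, loosely mirroring the kind of disparity Buolamwini measured in commercial systems.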

People who create AI systems rely on training data to develop the predictive models that power those systems. The responses or results an AI system produces reflect both the data used to create it and the decisions made by its designers and developers. As Buolamwini argues, having diversity in both the training data and the development team is one way to work against the potential for bias in these systems.
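The data-diversity half of that advice can be sketched in code. If a group is under-represented in the training set, a developer can collect more examples from that group or, as a stopgap, reweight the existing examples so each group contributes equally to training. The sketch below reuses the invented toy setup and assumptions (numpy, scikit-learn, synthetic data) from above; it illustrates the idea and is not a complete fairness fix.

```python
# A toy sketch of one data-side mitigation: reweighting an under-represented
# group so each group contributes equally during training. Same invented
# setup and assumptions (numpy, scikit-learn, synthetic data) as above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    X0 = rng.normal(loc=[0 + shift, 0], scale=0.5, size=(n, 2))
    X1 = rng.normal(loc=[1 + shift, 1], scale=0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

Xa, ya = make_group(950, shift=0.0)  # 95% of the training data
Xb, yb = make_group(50, shift=2.0)   # 5% of the training data
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

# Give each group the same total weight, regardless of its sample count.
weights = np.concatenate([np.full(len(ya), 1 / len(ya)),
                          np.full(len(yb), 1 / len(yb))])
balanced = LogisticRegression().fit(X, y, sample_weight=weights)

# The accuracy gap between the groups should narrow compared to the
# unweighted model above (though reweighting alone is not a full fix).
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", accuracy_score(y_test, balanced.predict(X_test)))
```

Reweighting only addresses the statistical side of the problem; as Buolamwini's argument makes clear, diverse development teams are needed to notice which groups and questions the data is missing in the first place.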