Academics and civil rights campaigners have raised considerable alarm at the adoption of machine learning tools in areas such as law enforcement, healthcare delivery, and recruitment, given mounting evidence that AI systems can replicate and amplify existing inequalities. In one widely cited test, the ACLU used Amazon’s facial recognition software to compare every member of the US Senate and House against a database of criminal mugshots; the system disproportionately misidentified Black and Latino legislators as matches.

Practitioners adjust training data, labels, model training procedures, scoring systems, and other parts of the machine learning pipeline in an effort to iron out these biases. The theoretical assumption is that such adjustments come at a cost, rendering a system less accurate.
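To make those kinds of adjustments concrete, here is a minimal sketch of two common mitigation points named above: reweighting training examples and setting per-group decision thresholds. The synthetic data, the cell-balancing reweighting scheme, and the equal-positive-rate threshold rule are all illustrative assumptions, not the methods used in the study discussed here.

```python
# A minimal sketch (illustrative assumptions, not the CMU team's method)
# of two common bias mitigations: reweighting training data and
# adjusting per-group decision thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a binary group attribute g.
n = 2000
g = rng.integers(0, 2, n)                       # protected group membership
X = rng.normal(size=(n, 3)) + g[:, None] * 0.5  # group-correlated features
y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

# 1) Pre-processing: reweight examples so each (group, label) cell
#    carries equal total weight -- a simple "reweighing" scheme.
weights = np.ones(n)
for gv in (0, 1):
    for yv in (0, 1):
        mask = (g == gv) & (y == yv)
        weights[mask] = n / (4 * mask.sum())

model = LogisticRegression().fit(X, y, sample_weight=weights)
scores = model.predict_proba(X)[:, 1]

# 2) Post-processing: pick a per-group score threshold so the positive
#    prediction rate is roughly equal across groups.
target_rate = (scores > 0.5).mean()
thresholds = {gv: np.quantile(scores[g == gv], 1 - target_rate) for gv in (0, 1)}
preds = np.array([scores[i] > thresholds[g[i]] for i in range(n)])

print("positive rate, group 0:", preds[g == 0].mean())
print("positive rate, group 1:", preds[g == 1].mean())
print("accuracy:", (preds == y).mean())
```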

A team of Carnegie Mellon researchers hopes to dispel that assumption: they put it to the test and found the supposed trade-off negligible in practice.
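Testing that assumption amounts to measuring two quantities side by side: how much accuracy changes when a fairness adjustment is applied, and how much the disparity between groups shrinks. The helper below is a hypothetical sketch of such a comparison; the function names, the true-positive-rate gap metric, and the toy inputs are assumptions for illustration, not the researchers’ actual evaluation code.

```python
# Hypothetical helpers for quantifying the supposed trade-off: compare
# accuracy and a simple disparity metric (the gap in true positive
# rates between groups) before and after a fairness adjustment.
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Absolute gap in true positive rate between groups 0 and 1."""
    rates = []
    for gv in (0, 1):
        positives = (group == gv) & (y_true == 1)
        rates.append(y_pred[positives].mean())
    return abs(rates[0] - rates[1])

def trade_off(y_true, base_pred, fair_pred, group):
    """Accuracy change vs. disparity reduction from a fairness adjustment."""
    acc_change = (fair_pred == y_true).mean() - (base_pred == y_true).mean()
    gap_before = tpr_gap(y_true, base_pred, group)
    gap_after = tpr_gap(y_true, fair_pred, group)
    return {"accuracy_change": acc_change,
            "disparity_reduction": gap_before - gap_after}

# Toy example: a near-zero accuracy_change alongside a positive
# disparity_reduction is what a "negligible trade-off" looks like.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
base   = np.array([1, 0, 1, 1, 0, 0, 0, 0])
fair   = np.array([1, 0, 1, 0, 0, 1, 0, 0])
print(trade_off(y_true, base, fair, group))
```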

“You can actually get both [accuracy and fairness]. You don’t have to sacrifice accuracy to...