
Is AI racist?

Machine learning, the justice system, and racial bias

According to The Sentencing Project’s report to the United Nations on racial disparities in the U.S. criminal justice system, racial disparities exist at many levels of the American justice system. The report details how Black Americans are far more likely than white Americans to be arrested, convicted, and given longer sentences. The causes of this disparity are deeply rooted in a violent history; factors like systemic poverty and implicit bias in policing and sentencing are not new phenomena. Consequently, Black Americans are disproportionately represented in both the justice and prison systems.

Introducing algorithms into justice-related decision-making could potentially help alleviate this problem: in theory, an algorithm would leave no room for human error or bias. This strategy has already been adopted in some capacity, as algorithms are now used in judicial decision-making to evaluate a defendant’s likelihood of committing future crimes (recidivism risk assessment). Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), developed in 1998 by the private company Equivant (formerly known as Northpointe), is an algorithm widely used in the United States to predict a defendant’s recidivism risk. COMPAS is based on a 137-item questionnaire that records the defendant’s personal information (such as sex, age, and criminal record) and uses it to make its predictions. Race is not an item on this survey, but several other items that can be correlated with race are included in the COMPAS risk assessment. The resulting risk score predicts the defendant’s risk of committing a crime within two years and can be used to inform judges when making sentencing decisions.
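
Equivant has never published how COMPAS turns questionnaire answers into a score, so any concrete example is necessarily a guess. The sketch below is a minimal stand-in, assuming a simple weighted-sum (logistic) model with invented feature names and weights; it shows only the general idea of converting answers into a 1-to-10 risk score, not how COMPAS actually works.

```python
# Illustrative sketch only: the real COMPAS model is proprietary, so the feature
# names, weights, and scoring rule below are hypothetical stand-ins. The point is
# simply how questionnaire answers could be combined into a 1-to-10 risk score.
import math

def risk_decile(answers, weights, bias):
    """Map questionnaire answers to a 1-10 risk decile via a logistic score."""
    z = bias + sum(weights[item] * value for item, value in answers.items())
    probability = 1.0 / (1.0 + math.exp(-z))      # chance of reoffending within two years
    return min(10, max(1, math.ceil(probability * 10)))  # bucket the probability into deciles

# Hypothetical weights and answers; none of these values come from Equivant.
weights = {"age_under_25": 0.8, "prior_arrests": 0.3, "unstable_housing": 0.4}
defendant = {"age_under_25": 1, "prior_arrests": 2, "unstable_housing": 0}
print(risk_decile(defendant, weights, bias=-2.0))  # -> 4, a mid-range score
```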

In May 2016, ProPublica published an article claiming that the COMPAS algorithm exhibited racial bias. The article analyzed scores assigned to more than 7,000 people arrested in Broward County, Florida in 2013 and 2014. One indicator of this racial bias was the stark disparity in how COMPAS classified Black and white defendants: among defendants who had not recidivated within two years of their release, 45% of Black defendants had been classified as higher risk, compared with only 23% of otherwise similar white defendants.
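
To make the cited figures concrete, here is a small worked example of the calculation behind them. The counts are illustrative placeholders chosen only to reproduce the approximate rates above; they are not ProPublica’s raw data.

```python
# Worked example of the statistic cited above: among defendants who did NOT
# reoffend within two years, what share had been labeled higher risk? The counts
# are illustrative placeholders chosen to match the approximate published rates
# (45% vs. 23%); they are not ProPublica's raw data.
non_recidivists = {
    # group: (labeled higher risk, labeled lower risk)
    "Black defendants": (450, 550),
    "white defendants": (230, 770),
}

for group, (higher, lower) in non_recidivists.items():
    false_positive_share = higher / (higher + lower)
    print(f"{group}: {false_positive_share:.0%} labeled higher risk despite not reoffending")
```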

ProPublica’s article garnered attention and was met with pushback not only from Equivant but also from academics who further analyzed the data. ProPublica argued that COMPAS made unfair predictions by underestimating recidivism rates for white offenders and overestimating them for Black offenders. In response, Equivant argued that COMPAS was a fair assessment because, in the data, Black offenders recidivated at higher rates than white offenders, and that the algorithm was simply exhibiting predictive fairness. Equivant further argued that COMPAS predicted recidivism at the same rate for Black and white defendants within each risk category (the risk assessment is scored on a scale of 1 to 10).
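
Equivant’s notion of fairness can be checked by comparing reoffending rates within each risk score across groups. The sketch below illustrates that calculation on synthetic records; the data, and the equal rates it happens to produce, are assumptions made for the example, not Equivant’s actual figures.

```python
# Sketch of the "predictive fairness" check that Equivant points to: within each
# risk score band, do Black and white defendants reoffend at roughly the same
# rate? The records below are synthetic and exist only to illustrate the calculation.
from collections import defaultdict

records = [
    # (group, risk score on the 1-10 scale, recidivated within two years?)
    ("Black", 8, True), ("Black", 8, False), ("Black", 8, True),
    ("white", 8, True), ("white", 8, False), ("white", 8, True),
    ("Black", 3, False), ("Black", 3, True), ("Black", 3, False),
    ("white", 3, False), ("white", 3, True), ("white", 3, False),
]

counts = defaultdict(lambda: [0, 0])  # (group, score) -> [reoffended, total]
for group, score, recidivated in records:
    counts[(group, score)][0] += int(recidivated)
    counts[(group, score)][1] += 1

# Predictive parity holds if, for each score, these rates match across groups.
for (group, score), (reoffended, total) in sorted(counts.items()):
    print(f"score {score}, {group} defendants: {reoffended}/{total} reoffended")
```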

In January 2018, Julia Dressel and Hany Farid, computer science researchers at Dartmouth College, published a paper titled The Accuracy, Fairness, and Limits of Predicting Recidivism, which, among other things, concluded that ProPublica and Equivant were using two different definitions of fairness that could not be simultaneously satisfied by this algorithm. Furthermore, they found that COMPAS was no more accurate than a much simpler predictor the authors built from a 7-item questionnaire; both performed at approximately 65% accuracy. This simpler predictor also showed the same pattern of racial disparity as COMPAS (the pattern first criticized in the ProPublica article).
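
The tension between the two definitions can be seen with a little arithmetic. The example below uses a standard identity from the algorithmic fairness literature relating the false positive rate to the base rate, the positive predictive value, and the true positive rate; the specific numbers are hypothetical, but they show that when base rates differ between groups, equal predictive value forces unequal error rates.

```python
# A compact numerical illustration of why the two fairness definitions clash when
# the underlying reoffending rates (base rates) differ between groups. The numbers
# are hypothetical; only the arithmetic matters.
def false_positive_rate(base_rate, ppv, tpr):
    """False positive rate implied by a base rate, a positive predictive value
    (how often a 'higher risk' label is correct), and a true positive rate."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

ppv, tpr = 0.6, 0.7  # hold these equal for both groups: "predictive fairness"
for group, base_rate in [("group A", 0.5), ("group B", 0.4)]:
    fpr = false_positive_rate(base_rate, ppv, tpr)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.0%}")

# Because the base rates differ, holding the predictive value equal forces the
# false positive rates apart, so both notions of fairness cannot hold at once.
```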

While the 137-item questionnaire is publicly available, Equivant is a private company, and exactly how the algorithm itself works has not been released. This naturally leads to the question: how are algorithms developed in the first place? In machine learning, algorithms are fed data and learn from it. They make predictions based on the patterns they recognize, which becomes a problem if the data they are fed is “bad”. Presumably, as part of its development, COMPAS could have learned from previous cases and records, whose outcomes were influenced by systemic racial bias. If previous cases favoured white defendants at the expense of Black defendants, the COMPAS algorithm would absorb that pattern and reproduce it in its predictions. An algorithm is only as good as the data it is fed.
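
As a rough illustration of that feedback loop, the toy model below is trained on synthetic “historical” labels that were applied more harshly to one group. Race is never a feature, yet a correlated proxy attribute carries the bias into the model’s predictions. Everything here, including the proxy feature and the numbers, is invented for the example.

```python
# A toy illustration of how a model trained on biased historical decisions
# reproduces that bias. Race is never a feature; a correlated proxy attribute
# carries the bias through. All data is synthetic and invented for this example.
import random

random.seed(0)

def historical_record(group):
    # The proxy (imagine something like neighbourhood of arrest) correlates with group.
    proxy = random.random() < (0.8 if group == "B" else 0.2)
    # Historical "higher risk" labels were applied more often to group B.
    labeled_higher_risk = random.random() < (0.6 if group == "B" else 0.3)
    return proxy, labeled_higher_risk

training = [historical_record("B") for _ in range(500)] + \
           [historical_record("A") for _ in range(500)]

# "Training": estimate P(higher risk | proxy) from the historical labels.
learned_rate = {
    flag: sum(label for proxy, label in training if proxy == flag)
          / sum(1 for proxy, _ in training if proxy == flag)
    for flag in (True, False)
}

# The model now rates anyone with the proxy attribute as riskier, echoing the
# historical disparity even though it never saw race.
print({flag: round(rate, 2) for flag, rate in learned_rate.items()})
```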

Because the exact design of COMPAS is unclear, its use raises many important questions. The issues discussed in this article, such as how we define fairness, the accuracy of a program, and the potential pitfalls of machine learning, prompt us to ask: should A.I. be used in decision-making within the justice system, where the consequences of its inaccuracy are life-changing? Is it just to use an algorithm created by a private company, one that has not disclosed the inner workings of its algorithm, to make decisions in a court of law?