Auditing Fairness for AI-based Court Recommendations

Strategies for Accountable AI

Objective:

Put yourself in the shoes of an AUDITOR, assigned by New York City, whose role is to evaluate the fairness implications of a machine learning system used in the NYC court system to recommend whether defendants should be Released on Recognizance (ROR). A Release on Recognizance ruling allows defendants to be released from custody without posting bail, based on their promise to appear at future court proceedings.

The machine learning (ML)-based AI system, developed by an external technology vendor and trained on historical data provided by the court system, recommends whether defendants appearing before a judge should be released on recognizance. Judges use these recommendations when deciding whether to grant ROR, but may choose to ignore them.

Your task is to audit this AI system for fairness: you are asked to rule on whether the system is delivering equitable outcomes across demographic groups. When you ask the courts for data, they provide you with two sheets. The purpose of this exercise is to surface some of the significant open conceptual and operational challenges of auditing machine learning systems.

The provided Excel workbook contains the two sheets the court has chosen to share with you:


Please consider how you might answer the following three sets of questions:

1. Consider the Baseline Fairness of the System (a minimal metric sketch follows these questions):

2. Identify Additional Data Needs for Determining Fairness:

3. Where Should Accountability Lie?
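As a starting point for question 1, the sketch below shows how an auditor might begin to quantify baseline fairness from tabular court data. It is a minimal sketch under stated assumptions, not a prescribed method: the filename court_data.xlsx and the column names (group, ai_recommend_ror, judge_granted_ror, failed_to_appear) are hypothetical placeholders, and the sheets the court actually provides may record different fields entirely.

import pandas as pd

# Hypothetical schema -- the court's actual sheets may differ. Assumed columns:
#   "group"             demographic group of the defendant
#   "ai_recommend_ror"  1 if the system recommended release, else 0
#   "judge_granted_ror" 1 if the judge granted ROR, else 0
#   "failed_to_appear"  1 if a released defendant later failed to appear
df = pd.read_excel("court_data.xlsx", sheet_name=0)

# Recommendation-rate parity: does the system recommend ROR
# at similar rates across demographic groups?
rec_rates = df.groupby("group")["ai_recommend_ror"].mean()
print("ROR recommendation rate by group:\n", rec_rates)

# Judge agreement: how often does the judge's ruling match the
# system's recommendation, per group? Divergence here matters because
# judges are free to ignore the recommendations.
df["judge_agreed"] = (df["judge_granted_ror"] == df["ai_recommend_ror"]).astype(int)
agree_rates = df.groupby("group")["judge_agreed"].mean()
print("Judge agreement rate by group:\n", agree_rates)

# Outcome-based comparison: among defendants who were released, compare
# failure-to-appear rates across groups. Note the selective-labels problem:
# outcomes are only observable for defendants who were actually released.
released = df[df["judge_granted_ror"] == 1]
fta_rates = released.groupby("group")["failed_to_appear"].mean()
print("Failure-to-appear rate among released defendants, by group:\n", fta_rates)

Even this simple sketch surfaces the conceptual difficulty: recommendation-rate parity and outcome-based error parity generally cannot both hold when underlying base rates differ across groups, and the selective-labels problem means the data needed to evaluate outcomes for defendants who were denied release simply does not exist.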