Algorithms could help improve judicial decisions, say MIT economists
AI algorithms could help fix systemic biases in court decisions, the study suggests.
Decision makers, such as doctors, judges, and managers, make consequential choices based on predictions of unknown outcomes. Do they make systematic prediction mistakes based on the available information? If so, in what ways are their predictions systematically biased?
Now, a new paper published by Oxford University Press in the Quarterly Journal of Economics, entitled “Identifying Prediction Mistakes in Observational Data,” found that replacing certain judicial decision-making functions with algorithms could improve outcomes for defendants by eliminating some of the systematic biases of judges.
Decision-makers, said the authors, led by MIT economics professor Ashesh Rambachan, make choices based on predictions of unknown outcomes. Judges, in particular, decide whether to grant bail to defendants and how to sentence those convicted.
The paper found that the decisions of at least 32% of judges in New York City are inconsistent with defendants’ actual ability to post a specified bail amount and their risk of failing to appear for trial. The research indicates that when both a defendant’s race and age are considered, the average judge makes systematic prediction mistakes on about 30% of the defendants assigned to them.
When both a defendant’s race and whether they were charged with a felony are considered, the average judge makes systematic prediction mistakes on 24% of the defendants assigned to them.