Cristopher Moore
Many courts use risk assessment algorithms to advise judges whether to release a defendant pretrial, and if so under what conditions. These algorithms are highly controversial, and have been criticized for perpetuating historical biases. On the other hand, they can remind judges that most defendants can be safely released, and help us think about how much risk—and risk of what—could justify detention. But they can only play this role if they are transparent, and if judges know what their outputs mean. Vague labels like “high risk” are not enough. We audited a widely used risk assessment algorithm for accuracy and fairness using a dataset of fifteen thousand defendants in Albuquerque, New Mexico. By digging deeper than previous studies, we learned that most crime is not pretrial crime, that rearrest for high-level felonies is very rare, and that most people who “fail to appear” in court miss only one hearing. We also audited proposed state laws, treating them as algorithms, and showed that they would detain many people unnecessarily while preventing only a small fraction of crime. We close with some reasons that computer scientists should engage in studies like this, and how doing so can broaden your view of both algorithms and human systems.