
Cal. Litig., May 2024, Volume 37, Issue 1

WHY BLACK BOX AI EVIDENCE SHOULD NOT BE ALLOWED IN CRIMINAL CASES

Written by Hon. Abraham C. Meltzer*

I. BLACK BOX AI

Artificial Intelligence (AI) systems make mistakes. This fact is not sufficiently acknowledged: machine-learning algorithms are designed in a way that makes errors inherent, so AI systems will err a certain percentage of the time. How, then, should lawyers and judges think about AI-generated evidence, which parties increasingly seek to present in court?

AI-generated evidence might arise from, for example: identifications made by facial recognition software; recidivism risk-prediction algorithms used in determining a defendant’s pretrial release status and in sentencing; medical diagnoses made by AI trained to interpret X-rays and MRIs; and automated underwriting programs used to approve or deny bank loans. Such evidence may well be both valid and relevant. But we cannot assume that all AI evidence is true, because sometimes it is not.
