Litigation
Cal. Litig. MAY 2024, VOLUME 37, ISSUE 1
Contents
- 2023 Year-end Report On the Federal Judiciary
- A Day Without a Court Reporter
- AI - Use With Caution
- Editor's Foreword: No Waiting: Litigation Is Here!
- From the Section Chair: Our 2024 Hall of Fame Inductions, Including Our Trial Lawyer Hall of Fame 30th Anniversary Reception and Event
- Interview With United States District Judge Troy L. Nunley
- Navigating the New Legal Landscape For Child Sexual Abuse Civil Litigation In State and Federal Court
- PAST SECTION CHAIRS & EDITORS-IN-CHIEF
- Reporting Another Lawyer's Professional Misconduct: Implications For California Lawyers
- SECTION OFFICERS & EDITORIAL BOARD
- Table of Contents
- What Will Artificial Intelligence Mean For Litigation?
- Whither Chevron? The Past, Present, and Possible Futures of Judicial Deference
- Working: Conversations With Essential Workers Behind the Scenes In the Court System
- Why Black Box AI Evidence Should Not Be Allowed In Criminal Cases
WHY BLACK BOX AI EVIDENCE SHOULD NOT BE ALLOWED IN CRIMINAL CASES
Written by Hon. Abraham C. Meltzer*
I. BLACK BOX AI
Artificial Intelligence (AI) systems make mistakes. This fact is not sufficiently acknowledged: AI systems will make errors a certain percentage of the time. That is inherent in how machine-learning algorithms are designed. How, then, should lawyers and judges think about AI-generated evidence, which parties increasingly seek to present in court?
AI-generated evidence might arise from, for example: identifications made by facial recognition software; recidivism risk-prediction algorithms used in determining a defendant’s pretrial release status and in sentencing; medical diagnoses made by AI trained to interpret X-rays and MRIs; and automated underwriting programs used to approve or deny bank loans. Such evidence may well be both valid and relevant. But we cannot assume that all AI evidence is true, because sometimes it is not.