The Age of Quantitative Legal Prediction
By Daniel Martin Katz, Assistant Professor of Law, Michigan State University – College of Law
and Michael J. Bommarito II, Co-Founder @ Quantitative Legal Solutions, LLP
Do I have a case? What is our likely exposure? How much is this matter going to cost? What will happen if we leave this particular provision out of the contract? How can we best staff this particular legal matter? Is this a relevant document? These questions are core to the practice of law, and each calls for some sort of forecast: a prediction.
Every day, lawyers and law firms provide predictions to clients regarding the impact of their choices in business planning and transactional structures, as well as their prospects and costs in litigation. Generating informed answers to these questions is the value by which many lawyers are measured, and these answers are typically based on heuristic predictions leveraging prior experience and domain knowledge.
Just as in political and sports contexts, the precision and accuracy of these expert predictions vary widely. In some cases, experts reliably forecast correct outcomes; in others, experts fail to beat naïve random models, especially when adjusted for cost. To better understand this phenomenon, we need to understand how expert predictions are generated. What classes of actors are being considered? What rules or models guide the understanding of these actors' interactions? What measurable data, if any, is being used to drive the rules and models? Could the quality of these predictions be improved by drawing models and data from similar questions?
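To make the comparison above concrete, here is a minimal sketch of how one might test whether an expert beats a naïve random baseline. All of the numbers are hypothetical assumptions chosen purely for illustration: a 60% base rate of favorable case outcomes, an "expert" who predicts correctly 70% of the time, and a coin-flip baseline.

```python
import random

random.seed(0)

N = 1000

# Hypothetical simulated binary case outcomes (True = favorable),
# with an assumed 60% base rate of favorable results.
outcomes = [random.random() < 0.6 for _ in range(N)]

# A hypothetical expert who predicts correctly 70% of the time.
expert = [o if random.random() < 0.7 else not o for o in outcomes]

# A naive random baseline: a coin flip for each case.
baseline = [random.random() < 0.5 for _ in range(N)]

expert_acc = sum(e == o for e, o in zip(expert, outcomes)) / N
baseline_acc = sum(b == o for b, o in zip(baseline, outcomes)) / N

print(f"expert accuracy:   {expert_acc:.2f}")
print(f"baseline accuracy: {baseline_acc:.2f}")
```

Against real data, the same comparison would substitute historical case outcomes for the simulated ones, and the interesting question becomes how far above the baseline a given expert, or a quantitative model, actually sits.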