Privacy Law

Fault Without Proof: Comparative Responsibility for Statutory Privacy Violations

By: Derek Song, 3L, UCLA School of Law[i]

Congratulations to Derek Song for winning the 2026 Inaugural Privacy Law Writing Competition!

Introduction

When facial recognition systems misidentify thousands of customers, both the vendor who built the untested system and the retailer who relied on the vendor’s accuracy claims face identical statutory damages of $1,000 to $5,000 per person.[ii] Privacy doctrine often borrows statutory damages from consumer protection to avoid quantifying dignitary harm.[iii] But in the privacy context, statutory damages operate bluntly, treating reckless and diligent defendants alike. Courts should preserve statutory damages while adopting comparative fault apportionment from products liability, anticipating the allocation challenges that increasingly complex AI systems will pose. This framework protects victims while aligning liability with culpability, allocating responsibility based on informational asymmetry, actual control over processing, profit from the violation, and inducement of reliance.
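
To put the bluntness in numbers, consider a back-of-the-envelope sketch. The figures are illustrative assumptions only: a hypothetical 10,000-person deployment and a hypothetical 70/30 fault split, not facts drawn from any cited case.

```python
# Illustrative only: the deployment size and the 70/30 split are assumptions.
CUSTOMERS = 10_000
NEGLIGENT_DAMAGES = 1_000  # BIPA damages per negligent violation ($)
RECKLESS_DAMAGES = 5_000   # BIPA damages per intentional/reckless violation ($)

# Under joint and several liability, each defendant faces the full range:
low = CUSTOMERS * NEGLIGENT_DAMAGES   # $10,000,000
high = CUSTOMERS * RECKLESS_DAMAGES   # $50,000,000
print(f"Aggregate exposure: ${low:,} to ${high:,}")

# Comparative apportionment leaves the victims' total recovery intact but
# divides responsibility among defendants by fault (hypothetical 70% vendor):
vendor_share = 0.70
print(f"Vendor share: ${low * vendor_share:,.0f} to ${high * vendor_share:,.0f}")
```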

Why Privacy Law Should Learn Comparative Fault from Torts

Products liability has already solved the accountability problem facing privacy law.

Under strict products liability, sellers of defective products are liable regardless of negligence.[iv] Because consumers cannot detect manufacturing defects, sellers are better positioned to prevent harm.[v] Anyone in the distribution chain faces full liability to the plaintiff.[vi]

Yet comparative responsibility is recognized even in strict liability regimes.[vii] When a manufacturer knowingly sells a defective product to a retailer, contribution claims allocate responsibility according to fault.[viii] The manufacturer who knew of the defect bears greater fault than the retailer who sold a sealed package. This fault-based apportionment operates among jointly liable defendants without reducing the victim’s recovery.

Privacy law has been slow to adopt this evolution because it relies on a binary accountability structure drawn from European data protection regulation.[ix] The GDPR distinguishes “data controllers,” who determine processing purposes, from “data processors,” who act on controllers’ instructions.[x] Similarly, the California Consumer Privacy Act (CCPA) defines “businesses” as entities that determine the purposes and means of processing, while treating “service providers” as derivative actors bound to process data only on the business’s instructions.[xi] This role-based architecture favors categorical classification over graduated fault, leaving little conceptual space for comparative apportionment when multiple defendants contribute to the same privacy harm.

This binary structure will prove inadequate as AI deployments increasingly involve genuinely shared responsibility. Deployers who make privacy promises to users should continue to face strict or quasi-strict liability under state regimes, preserving consumer protection without requiring proof of actual harm. But where both vendors and deployers contribute to privacy harms, courts should adopt apportionment to reflect fault, maintaining fairness and substantive accountability.[xii] A bifurcated approach modeled on products liability would address the complexity of AI-driven business operations while calibrating deterrence correctly.

The Four-Factor Test

Courts should apportion privacy liability among defendants using the following four factors. These factors synthesize products liability’s comparative responsibility framework with privacy law’s control- and reliance-based accountability principles.

Factor One: Informational Asymmetry

Modern data transactions are defined by informational asymmetry, leaving downstream actors and consumers without access to the operational facts that shape privacy risk.[xiii] Vendors who design AI systems possess superior knowledge of confidential research and development.[xiv] Where vendors knew or should have known of privacy risks that deployers could not reasonably discover, greater fault should be apportioned to the vendor. Conversely, where harm flows from a deployer’s operational choices, particularly the use of a model outside disclosed limitations, accountability and fault should shift toward the deployer.[xv] Most cases fall along this spectrum. Assessment turns on relative access to training data sources, model constraints, documented failure modes, and privacy-relevant deployment conditions.

Factor Two: Control Over Processing

The control over processing factor asks whether the defendant retained meaningful authority over the system behavior that produced the privacy violation. Unlike informational asymmetry, which emphasizes actual or constructive knowledge, this factor focuses on the defendant’s practical ability to prevent, modify, or constrain system behavior. 

In AI systems, control is often divided. Vendors may lock in architectural features, define default data flows, or retain authority over model updates and retraining.[xvi] By contrast, deployers may control deployment context, data inputs, downstream integration, and operational parameters.[xvii] Though control may overlap, it is rarely exercised to the same degree. 

A party that could have altered system behavior but failed to do so is more culpable than one constrained to operate within imposed technical limits.[xviii] Vendors must not foreclose meaningful downstream mitigation by deployers. Conversely, fault shifts toward deployers who expand privacy risk by disregarding vendor disclosures in areas within their discretionary control.[xix]

Factor Three: Profit from Violation

Profit from violation addresses the economic benefit derived from risky processing. This factor allocates fault in proportion to the extent a defendant profited from the violation, including through cost avoidance, revenue generation, or the externalization of privacy risk.[xx] In the AI context, this analysis considers: (1) vendor licensing fees; (2) deployer business cost savings; (3) direct data monetization; and (4) methods of externalizing privacy risk. Profit functions as a proxy for defendant incentive, not as a punitive measure. The inquiry is relative profitability: which party derived greater economic benefit from the processing that caused harm? A larger profit margin or direct monetization of personal data warrants greater fault.
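
A minimal sketch of this relative-profitability inquiry follows. Every benefit figure, and the two-party breakdown itself, is a hypothetical assumption for illustration; the point is only that fault under this factor tracks each party’s share of the total economic benefit.

```python
# All benefit figures are invented for illustration; the factor treats profit
# as a proxy for incentive, not as a punitive measure.
benefits = {
    "vendor":   {"licensing_fees": 400_000, "data_monetization": 250_000},
    "deployer": {"cost_savings": 300_000, "data_monetization": 50_000},
}

# Fault under this factor tracks each party's share of the total benefit
# derived from the processing that caused the harm.
totals = {party: sum(items.values()) for party, items in benefits.items()}
grand_total = sum(totals.values())
shares = {party: total / grand_total for party, total in totals.items()}
print(shares)  # {'vendor': 0.65, 'deployer': 0.35}
```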

Factor Four: Inducement of Reliance

Inducement of reliance allocates fault based on whose representations caused users to disclose personal data. Under consumer protection standards, representations are material when they are likely to affect user choice.[xxi] Deployers’ direct, consumer-facing assurances—such as privacy policies, marketing claims, or consent flows—induce greater reliance than backend technical specifications communicated in business-to-business settings.[xxii] Where a deployer makes privacy representations that exceed or contradict vendor documentation, fault should shift toward the deployer for breaches of those assurances. But if deployers merely transmit vendor claims without independent verification, responsibility may be shared in proportion to the vendor’s role in shaping the misleading representation.[xxiii]

When Apportionment Becomes Necessary

Current privacy enforcement typically targets a single defendant. But AI systems increasingly rely on distributed architectures that complicate attribution of responsibility and require regulators and courts to allocate accountability among multiple independent actors.[xxiv]

Three hypothetical scenarios are illustrative.

Scenario one involves professional AI co-development: a law firm deploys an AI research tool built by a vendor using confidential client materials. The vendor designs model architecture, training pipelines, and data retention defaults; the firm selects matter types, uploads documents, and governs attorney use. Both qualify as CCPA “businesses” because each determines core processing decisions.[xxv] Informational asymmetry favors the vendor, which controls system design and training risks. Control is shared but weighted toward the vendor’s infrastructure. Profit is split between vendor licensing fees and firm productivity gains. Inducement favors the firm, whose confidentiality assurances caused client disclosures. With two factors favoring the vendor, one split, and one favoring the firm, accountability should fall mostly on the vendor.

Scenario two involves platform integration: a social media company deploys a third-party facial recognition tool to auto-tag users. The vendor improves its model using facial data, while the platform deploys results to increase engagement. Both act as joint controllers because each determines processing purposes.[xxvi] Informational asymmetry and architectural control favor the vendor, while profit and inducement favor the platform, which monetizes user trust through advertising. With the factors split evenly, comparative fault supports equal responsibility.

Scenario three involves algorithmic hiring: an employer deploys a vendor’s AI screening tool marketed as age-neutral but never validated for disparate impact. The vendor controls model design and training, while the employer defines applicant pools and hiring criteria. Where age discrimination results, fault tracks the vendor’s knowledge and misrepresentation, but inducement and deployment decisions remain with the employer. Even where the factors split evenly, greater weight may attach to the vendor’s informational asymmetry and misrepresentation as the source of undisclosed, system-level risk.[xxvii]
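
To make these tallies concrete, the sketch below scores the first two scenarios under an equal weighting of the four factors. The equal weighting is an illustrative assumption, not a rule the framework prescribes, and the 0.5/0.5 entries simply mark factors the analyses above treat as shared.

```python
# Factor order: informational asymmetry, control, profit, inducement.
# 1 = factor favors fault for that party; 0.5/0.5 = genuinely shared factor.
# Equal weighting is an illustrative assumption, not a doctrinal rule.
scenarios = {
    "co-development": {"vendor": [1, 1, 0.5, 0], "deployer": [0, 0, 0.5, 1]},
    "platform":       {"vendor": [1, 1, 0, 0],   "deployer": [0, 0, 1, 1]},
}

for name, parties in scenarios.items():
    total = sum(sum(scores) for scores in parties.values())
    shares = {party: sum(scores) / total for party, scores in parties.items()}
    print(name, shares)
# co-development -> vendor 0.625, deployer 0.375 (mostly on the vendor)
# platform       -> vendor 0.5,   deployer 0.5   (equal responsibility)
```

A court would, of course, weigh the factors qualitatively; the arithmetic only shows how the shared and split factor conclusions in scenarios one and two translate into majority-vendor and equal apportionments.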

Conclusion

Comparative fault operationalizes human oversight by assigning liability to the actors who designed, controlled, and profited from privacy risk. As AI distributes decision-making across firms, accountability must follow actual control rather than formal roles.


[i] Derek is a 3L at UCLA School of Law focused on media, entertainment, and technology. He is a Berklee College of Music alumnus and previously worked as a music producer and audio engineer. Following graduation, he will join White & Case in the Bay Area, where he plans to practice IP litigation and privacy law.

[ii] Illinois Biometric Information Privacy Act, 740 ILL. COMP. STAT. ANN. 14/20 (West 2024) (providing statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation and applying to any “private entity”). See, e.g., Patel v. Facebook, Inc., 932 F.3d 1264, 1274–75 (9th Cir. 2019); Cothron v. White Castle Sys., Inc., 2023 IL 128004, ¶¶ 23–24.

[iii] See Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. REV. 793, 816–22 (2022) (discussing the difficulty of quantifying dignitary privacy harms); see also, e.g., Telephone Consumer Protection Act of 1991, 47 U.S.C. § 227(b)(3); California Invasion of Privacy Act, Cal. Penal Code §§ 631, 632.7; California Consumer Privacy Act, Cal. Civ. Code § 1798.150(a).

[iv] See RESTATEMENT (SECOND) OF TORTS § 402A (AM. L. INST. 1965); Greenman v. Yuba Power Prods., Inc., 59 Cal. 2d 57, 62–63 (1963).

[v] See Escola v. Coca Cola Bottling Co., 24 Cal. 2d 453, 462–68 (1944) (Traynor, J., concurring); see also Guido Calabresi, The Costs of Accidents 135–73 (1970).

[vi] See Vandermark v. Ford Motor Co., 61 Cal. 2d 256, 262–63 (1964).

[vii] See RESTATEMENT (THIRD) OF TORTS: APPORTIONMENT OF LIABILITY §§ 1–3 (AM. L. INST. 2000). 

[viii] See Daly v. Gen. Motors Corp., 20 Cal. 3d 725 (1978); see also Am. Motorcycle Ass’n v. Superior Ct., 20 Cal. 3d 578 (1978).

[ix] See Article 29 Working Party, Opinion 1/2010 on the Concepts of “Controller” and “Processor” (Feb. 16, 2010); see also Orla Lynskey, The Foundations of EU Data Protection Law (Oxford Univ. Press 2015).

[x] See GDPR arts. 4(7)–(8), 28.

[xi] See Cal. Civ. Code § 1798.140(d), (ag), (j).

[xii] Cf. Ari Ezra Waldman, Privacy Law’s False Promise, 97 WASH. U. L. REV. 773, 825–31 (2020).

[xiii] See Alicia Solow-Niederman, Beyond the Privacy Torts: Reinvigorating a Common Law Approach for Data Breaches, 127 YALE L.J. F. 614, 628–31 (2018) (arguing that consumers lack practical capacity to prevent or assess data-security failures and that common law can respond to trust-based informational relationships); see also Elisa Jillson, Hey, Alexa! What Are You Doing with My Data? (FTC Blog, June 13, 2023).

[xiv] See Ryan Calo & Danielle K. Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 EMORY L.J. 797, 833–35 (2021); Anastasiya Kiseleva et al., Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations, 5 FRONTIERS IN ARTIFICIAL INTELLIGENCE 879603, 8–9 (2022), https://doi.org/10.3389/frai.2022.879603; Marius Busuioc, Accountable Artificial Intelligence: Holding Algorithms to Account, 81 PUB. ADMIN. REV. 825, 828–31 (2021).

[xv] See In re Everalbum, Inc., FTC File No. 192-3172, Decision and Order (Jan. 11, 2021); cf. Joshua A. Kroll et al., Accountable Algorithms, 165 U. PA. L. REV. 633, 634–36 (2017).

[xvi] See Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) 35–36 (2023).

[xvii] Id.

[xviii] See RESTATEMENT (THIRD) OF TORTS: APPORTIONMENT OF LIABILITY § 8 (AM. L. INST. 2000); see also In re InMarket Media, LLC, FTC File No. C-4803, Complaint ¶¶ 20–24 (Apr. 29, 2024).

[xix] See FTC Policy Statement on Deception, appended to Cliffdale Assocs., Inc., 103 F.T.C. 110, 165–66 (1984).

[xx] See, e.g., FTC v. Ring LLC, compl. ¶¶ 13–15, 48–52, No. 1:23-cv-01549 (D.D.C. filed May 31, 2023) (alleging that Ring deprioritized data security and privacy safeguards to gain a market advantage, thereby avoiding security related costs and externalizing privacy risks to consumers); cf. RESTATEMENT (THIRD) OF RESTITUTION AND UNJUST ENRICHMENT § 51 (AM. L. INST. 2011).

[xxi] See Cliffdale, 103 F.T.C. at 182–83; see also In re Everalbum, Inc., compl. ¶¶ 9–10, 18–26.

[xxii] See, e.g., Prepared Statement of the Fed. Trade Comm’n, Improving Sports Safety: A Multifaceted Approach 3–4 (Mar. 13, 2014) (explaining that consumer-facing representations are evaluated based on their impression on users and materiality); see also FTC v. Wyndham Worldwide Corp., 799 F.3d 236, 244–46 (3d Cir. 2015) (recognizing that public representations about data security shape consumer expectations and create liability).

[xxiii] See Cliffdale, 103 F.T.C. at 175–76.

[xxiv] See generally Ian Brown, Allocating Accountability in AI Supply Chains (Ada Lovelace Inst. 2023), https://www.adalovelaceinstitute.org/resource/ai-supply-chains/ (explaining how AI systems are built through complex supply chains involving multiple actors and overlapping responsibilities).

[xxv] See Cal. Civ. Code § 1798.140(d).

[xxvi] See Patel, 932 F.3d at 1272–74 (treating Facebook as responsible for facial recognition processing it designed and deployed); cf. GDPR art. 26 (recognizing joint controllership where multiple entities determine processing purposes).

[xxvii] See Frank Pasquale, Data-Informed Duties in AI Development, 119 COLUM. L. REV. 1917, 1919–20, 1928–29, 1938–40 (2019) (arguing courts should translate regulatory guidance into AI data standards of care, and warning that AI can be used to deflect accountability in the absence of clear attribution rules).

