Privacy Law
Is California Leading the Way on AI or Just Causing Chaos?
By Susan Rohol[1]
By the end of its 2023-24 session, the California Legislature passed, and Governor Gavin Newsom signed into law, 17 artificial intelligence (“AI”) bills. While California may have won the race for the most U.S. state AI bills enacted into law, it is not the first jurisdiction to attempt to tackle these issues. The European Union’s (“EU”) AI Act and Colorado’s AI Act both beat California to the punch and are arguably far more sweeping and comprehensive in their approach to AI regulation and risk assessment.
The variety in approaches raises several questions: How will this phase in the development of AI legislation play out? Will other U.S. states follow California and pass legislation that is sector- and/or use-case-specific (i.e., targeted at digital replicas, sexually explicit deepfakes, use of AI in the health care sector, etc.)? Or will we see more U.S. states follow Europe and Colorado? Will U.S. states follow the trend we have seen with privacy laws, where jurisdictions actively compete to demonstrate that they are passing ever more restrictive regulations? Or will there be concern that this approach will hamper AI innovation? We have already seen Texas introduce legislation that is arguably broader and more sweeping than anything California, Colorado, or Europe has passed into law; will other states follow suit? How will the new administration affect the federal response to AI regulation, and how will states respond? The one thing we can expect in 2025 is many AI bills, but in the meantime, here is a quick overview of the new laws in this space.
THE EU AND COLORADO’S COMPREHENSIVE APPROACHES
The EU and Colorado have each enacted a single comprehensive AI bill of general applicability that covers almost all AI systems, focusing on classifying those systems based on the risk they pose to consumers and imposing obligations on the developers and deployers of those systems based on that risk categorization.
The EU AI Act
Arguably the most comprehensive approach to AI regulation is found in the European Union. The EU AI Act[2] was initially proposed in 2021 (prior to the introduction of generative AI systems) and was formally adopted by the European Parliament on March 13, 2024. Its primary goal is to ensure that AI systems are safe and transparent. The European Commission found that the General Data Protection Regulation (“GDPR”)[3] did not adequately account for the changing technological landscape AI creates and the evolving dangers it poses, such as bias in systems or impacts on critical infrastructure.
The EU AI Act takes a risk-based approach to regulating the entire AI life cycle, from development to deployment, of AI systems that operate in the EU or provide services to users in the EU, and it applies irrespective of the industry in which the AI system primarily operates. AI systems are classified based on the risk they generate, and each tier corresponds to certain obligations. For instance, the EU AI Act outright bans certain “prohibited AI systems”: practices considered harmful or posing an unacceptable risk to people’s safety, livelihoods, or rights (e.g., systems deploying subliminal or deceptive techniques or social scoring).[4]

The EU AI Act focuses primarily on “high-risk systems.”[5] It identifies eight types of systems that are deemed high-risk,[6] including biometric identification systems, critical infrastructure systems, and systems that determine access or admission to educational or vocational training programs or that evaluate employment or creditworthiness. A system that would otherwise be high-risk is potentially exempt where it is narrowly used, improves a previously completed human activity, or involves decision-making that does not replace or influence a human assessment.[7] High-risk systems also include those used as a safety component of a product; such systems must undergo a third-party conformity assessment before they can be placed on the market and are subject to EU health and safety harmonization legislation.[8] Finally, an AI system is always considered high-risk if it performs profiling of natural persons.
While high-risk systems are permitted, they face a wide range of obligations before they may be developed and deployed in the EU, including registration in an EU database prior to product release.[9] The breadth of the exemptions available to high-risk systems will likely lead many companies to classify their systems as falling outside the high-risk category in order to avoid registration and these other onerous obligations.
The EU AI Act also addresses General Purpose AI Models (“GPAI”): AI systems that can perform a wide array of generally applicable functions, such as image and speech recognition, audio and video generation, or pattern detection, and that can be integrated into a variety of downstream systems, such as large generative AI models. The Act places obligations on these models regardless of how they are placed on the market.[10]
The EU AI Act gives natural and legal persons the right to lodge a complaint with a market surveillance authority, to receive an explanation of individual decision-making, and to report instances of non-compliance. The EU AI Office will supervise implementation and enforcement alongside national authorities. Penalties for non-compliance range from €35 million or 7% of worldwide annual turnover down to €15 million or 3% of worldwide annual turnover, depending on the size of the violator and whether the system is classified as a GPAI.
The Colorado AI Act
Colorado followed in the footsteps of the EU, enacting its own law governing AI use based on risk.[11] Colorado’s law predominantly regulates high-risk AI systems, which it defines as “any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”[12] The Act creates duties for developers and deployers of high-risk AI systems to implement a risk management policy and conduct an impact assessment, using reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
It also creates transparency requirements for any consumer-facing AI system, not just those that are high-risk. For instance, deployers and developers must make disclosures informing users that they are interacting with an AI system (unless that would be obvious to a reasonable person), and they must notify users when a high-risk system is deployed to make a consequential decision about them. Deployers and developers must also notify the Colorado Attorney General within 90 days if a deployed AI system has caused, or is reasonably likely to have caused, algorithmic discrimination. Violations of the Colorado AI Act constitute unfair trade practices, and punishments can include fines or injunctive relief.[13] Though the two laws are not identical (the EU focuses more on risk management, while Colorado favors transparency and consumer rights), Colorado has similarly adopted the approach of one comprehensive piece of AI legislation.
THE CALIFORNIA APPROACH
California, on the other hand, has opted to legislate in a more piecemeal manner. Rather than adopting one comprehensive bill, it has created a patchwork of legislation, with each bill aimed at a different sector or identified issue. Seventeen bills on AI issues were enacted into law in the most recent legislative session, though many more were introduced. These new laws cover a wide breadth of matters,[14] ranging from establishing a uniform definition of AI in California law[15] to protecting election integrity by combating the use of AI to spread misinformation[16] and legislating against deepfakes.[17] Several of these laws could, if adopted more broadly by other states, significantly impact the privacy landscape, including California’s approach to digital replicas and training data disclosures.
Important to the privacy community is AB 1008,[18] a relatively short bill that packs a punch. This law amends the California Consumer Privacy Act (“CCPA”) to clarify that personal information “can exist in various formats, including . . . artificial intelligence systems that are capable of outputting personal information.” This addition means that any company utilizing AI must be aware that its AI system could generate information that California would consider subject to the protection of the CCPA (i.e., access, deletion, correction, and opt-out rights).
AB 2013[19] will also likely interest the privacy community. It imposes a transparency requirement on developers[20] of generative AI systems before a system is made available to Californians, requiring certain disclosures regarding the underlying datasets used to train the generative AI system. These disclosures, which must be posted on the developer’s website, must include: the sources or owners of the datasets; a description of the types of data within the datasets; and whether any of the datasets include data protected by intellectual property rights, were purchased or licensed, or include any personal information, amongst many other things. With its emphasis on data transparency, a core privacy principle, this law has significant implications and seems a likely candidate to be replicated by other states.
Another bill of note is AB 2602,[21] which applies to any contract “between an individual and any other person for the performance of personal or professional services” to ensure performers can control the use of their own digital replicas.[22] Though this bill is especially significant in California given its large entertainment industry, its effects are more far-reaching, especially as other states could imitate its contents to cover companies seeking to use digital replicas, such as in advertising, automated customer service bots, video games, or even something as mundane as corporate training videos.
California also enacted AB 3030[23] and SB 1120,[24] which require members of the healthcare community using generative AI to provide a disclaimer to patients, while also imposing numerous requirements, such as the fair and equitable application of AI systems by healthcare service providers or insurers. These bills are part of a larger legislative trend: Utah has taken a similar approach and imposed transparency obligations on companies using generative AI, particularly in regulated industries like medicine.[25] This may indicate that an industry-specific approach to legislating will become popular as a way to better moderate how AI is used in higher-risk or regulated industries such as financial services, health care, and housing.
Notably, one heavily lobbied bill that would have adopted an approach much closer to the EU and Colorado models did not make it past Governor Newsom. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047)[26] would have imposed safety measures on large AI models to mitigate potential “critical harms,” such as the creation of biological or chemical weapons or a large-scale cyberattack against critical infrastructure. Unlike the rest of California’s approach, this bill was not designed with a particular industry in mind and was, instead, more sweeping. SB 1047 proved controversial, and while Governor Newsom agreed with state legislators that California cannot wait for a major catastrophe to occur before taking action to protect the public, he found that SB 1047, as drafted, would have been ineffective.[27] Because protecting against large-scale harms remains a priority, the Governor has indicated that he will push for a similar bill in the next legislative session.
LOOKING TO THE FUTURE
One thing is certain: AI is not an issue that will disappear anytime soon, and we should expect more legislation on this topic in California, in other U.S. states, and around the globe. Each of the new California laws, taken individually, seems clear and workable. But it remains to be seen whether this volume of laws will address many of the critical issues that concern the public. With such a decentralized approach, California will undoubtedly need to continue legislating and regulating to fill the gaps not yet covered by this patchwork of laws. The evolving landscape will also need to account for how these existing AI-related efforts are meant to work together and how compliance with them all is possible. Is such a system too fragmented and variable? Or will it turn out to lead the charge in a new way of legislating AI? Time will tell.
Endnotes
1. With great thanks to Daniel Alvarez, Stefan Ducich, and Alexandra Barczak for their contributions to this article.
2. Regulation (EU) 2024/1689, 2024 O.J. (L 1689) [hereinafter the EU AI Act].
3. 2016 O.J. (L 119) 33.
4. See Ch. II, Art. 5 of the EU AI Act.
5. See Ch. III, Sections 2-5 for a comprehensive discussion of how the EU AI Act classifies and regulates high-risk systems.
6. The eight types are: non-prohibited remote biometric identification, biometric categorization, and emotion recognition systems; critical infrastructure; education and vocational training systems used to determine access or admission to institutions or to assess learning outcomes; systems used to evaluate individuals in employment and worker management; essential public or private services, including systems used to assess eligibility or creditworthiness; systems permitted for use by law enforcement; migration, asylum, and border-control management; and administration of justice and democratic processes, including voting. See Annex III of the EU AI Act.
7. See Ch. III, Section 1, Art. 6 of the EU AI Act (“[A]n AI system . . . shall not be considered to be high risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making . . . . [This] shall apply where . . . the AI system is intended to perform a narrow procedural task . . . is intended to improve the result of a previously completed human activity . . . is intended to detect decision-making patterns or deviations . . . and is not intended to replace or influence the previously completed human assessment. . . .”).
8. See Ch. III, Section 1, Art. 6 of the EU AI Act.
9. These include establishing a risk-management system; training, validating, and testing data sets; keeping technological documentation up to date; deploying recordkeeping through automatic logging; ensuring transparency and human oversight; developing systems that achieve an appropriate level of accuracy, robustness, and cybersecurity; and conducting a fundamental rights impact assessment prior to deployment.
10. See Ch. V of the EU AI Act for a complete discussion of GPAI classifications and obligations. Requirements involve preparing and keeping up to date technical documents which must be provided to downstream providers integrating GPAI into their systems, performing fundamental rights impact assessments, implementing risk and quality management to assess and mitigate systemic risk, maintaining transparency in data used to train the model, informing individuals when they are interacting with AI, and ensuring AI generated output is marked and detectable as artificially generated.
11. S.B. 24-205, 74th Gen. Assemb., Reg. Sess. (Colo. 2024).
12. Id. A consequential decision is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, education enrollment or opportunity, employment or employment opportunity, financial or lending service, an essential government service, healthcare services, housing, insurance, or legal services.
13. The Colorado Attorney General has exclusive enforcement authority, and there is no private right of action.
14. Note that two of the 17 bills apply only to the California government, so they are not discussed here.
15. A.B. 2885, 2023-24 Sess. (Cal. 2024) (defining AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”)
16. See A.B. 2355, 2023-24 Sess. (Cal. 2024) (requiring political action committees to disclose whether political advertisements have been generated or substantially altered by AI); A.B. 2655, 2023-24 Sess. (Cal. 2024) (requiring large online platforms, during specified periods leading up to elections, to (1) block materially deceptive content related to CA elections, (2) label certain additional content as inauthentic or fake, and (3) develop procedures for CA residents to report content that has not been otherwise blocked or labeled); A.B. 2839, 2023-24 Sess. (Cal. 2024) (prohibiting knowingly distributing, with malice, materially deceptive content within 120 days before an election in CA and, in certain circumstances, 60 days after an election). This last bill was found to be unconstitutional in October 2024 due to a lack of narrow tailoring.
17. See A.B. 1831, 2023-24 Sess. (Cal. 2024) (expanding the scope of existing child pornography laws to include matter that is digitally altered or generated by AI systems); S.B. 926, 2023-24 Sess. (Cal. 2024) (criminalizing the creation and distribution of deepfake pornography that reasonably depicts another person); S.B. 981, 2023-24 Sess. (Cal. 2024) (requiring social media platforms to establish channels for users to report sexually explicit digital replicas and temporarily blocking such material while the platform determines if permanent removal is required).
18. A.B. 1008, 2023-24 Sess. (Cal. 2024).
19. A.B. 2013, 2023-24 Sess. (Cal. 2024).
20. Id. A developer is any “person, partnership, state or local government agency, or corporation that designs, codes, produces, or substantially modifies an artificial intelligence system or service for use by members of the public.”
21. A.B. 2602, 2023-24 Sess. (Cal. 2024).
22. It invalidates contracts that (1) allow for the creation of a digital replica to perform work that could have been done by the performer, (2) fail to specifically describe the intended users of the digital replica, and (3) were negotiated without legal and/or union representation.
23. A.B. 3030, 2023-24 Sess. (Cal. 2024).
24. S.B. 1120, 2023-24 Sess. (Cal. 2024).
25. S.B. 149 Artificial Intelligence Amendments, §§ 13-2-12(1)(a)-(c), 13-2-12(5) (Utah 2024).
26. S.B. 1047, 2023-24 Sess. (Cal. 2024).
27. Governor Gavin Newsom, SB 1047 Veto Message, Office of the Governor (Sept. 29, 2024), https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf. The Governor cited the following reasons for his veto: (1) the bill focused only on the most expensive and largest-scale models and could give the public a false sense of security about controlling AI; (2) it did not take into account whether an AI system was deployed in high-risk environments, involved critical decision-making, or used sensitive data; and (3) it was not informed by an empirical trajectory analysis of AI systems and capabilities.