Privacy Law
CCPA and the EU AI ACT
June 2024
By Danielle Ocampo
Steve Millendorf is a member of the California Lawyerâs Associationâs (CLA) Privacy Law executive committee and is a partner in the Technology Transactions, Cybersecurity, and Privacy practice at Foley & Lardner LLP.
In April, Danielle Ocampo, a member of the CLAâs Law Section, interviewed Steve Millendorf to gain a deeper understanding of how California is approaching and implementing the EU AI Act.
How do the principles outlined in the EU AI Act align or conflict with existing regulations in California, such as the California Consumer Privacy Act?
There are no AI-specific regulations in California yet. However, the EU AI Act and existing California regulations like the CCPA are fundamentally different laws. The AI Act seeks to regulate the output and use of certain AI systems, whereas the CCPA seeks to regulate collection and use of personal information. The EU AI Act complements the GDPR. The GDPR covers what happens to personal information and is more focused on privacy rights. The EU AI Act focuses on the use of artificial intelligence and the use of AI systems and more about what AI does and the impact AI can have on society, regardless of whether the system uses personal information or not. To compare, the CCPA does not specifically target AI systems but the CCPA did grant authority to the California Privacy Protection Agency (âAgencyâ) to issue regulations in relation to automated decision-making technologies. On this basis, the Agency is working on draft Automated Decision-Making Technology (ADMT) regulations (which are expected to be finalized later this year). Specifically, the ADMT regulations would apply to any âtechnology that makes decisions or that a person relies upon to make a decisionâ AI can be ADMT, but not all AI is ADMT. These ADMT regulations would apply if personal information is used for automated decision making which significantly impacts an individual or if âextensive profilingâ is performed.
In theory, an AI system not using personal data or any user information may still fall within the scope of the EU AI Act. For example, an AI system may process (both in training and in operation) data that is not considered personal information, either under the GDPR or under the CCPA, which have similar definitions of personal information (although the GDPR uses the term âpersonal dataâ instead of âpersonal informationâ). The EU AI Act would regulate that training and/or operational use, and potentially require things like risk analysis, depending on the categorization of risk applied. AI systems that are used to manipulate or exploit individuals are banned, regardless of where they get their training data or data processed. The data doesnât have to be personal information and can be completely anonymized, yet still fall within the scope of the EU AI Act. Either way, such AI use is banned. The CCPA would have nothing to do with this scenario because it does not involve personal information.
But if the same AI system did process personal information, both laws would presumably be in play. The CCPA would give individuals the opportunity to opt-out of the automated decision making.. Under the CCPA, businesses have an obligation to notify how their personal data is being used and the purposes of processing. Presumably, that would include some disclosure at the time of collection of the business using consumer data for AI training purposes. If the data is used for just training and no decision is made on consumersâ personal data, then there is no opportunity to opt-out of such processing because the opt-out right applies to ADMT. However, under a privacy law like GDPR, a business needs a lawful basis for the use of personal information in the training data such as a legitimate interest, which I am not quite sure a business would be able to easily establish A business would likely have to get consent under this regime. But CCPA and U.S. state laws do not really stop companies from using training data because these laws are all talking about the outputs. The EU AI Act, in this case, would prohibit use of the AI system in its entirety. These laws are therefore very different in what theyâre trying to do.
There is a slight overlap when it comes to automated decision making, but those regulations are still in pretty early draft stages from the CPPA, and it’s hard to determine where things will fall out or if there are going to be some conflicts. Iâd like to think that the CPPA is at least looking at the EU AI Act [and guidance issued by EU authorities] and making sure that there are no conflicts between EU and California requirements, but we could have said the same thing about the GDPR, so we cannot take anything for granted. Californiaâs proposed AI laws and the EU AI act have different focuses where California is looking at AI through a privacy lens instead of an AI lens. So, who knows where this will wind up. And I do think that the EU AI Act and the CCPA have somewhat different standards. The EU AI Act requires human oversight for what they call âhigh risk AI systems.â Thereâs a list, but I donât think this is a complete list of these technologies. Itâs also up to some of the individual member states to figure out which AI technology is in what risk bucket. The CCPA, at least in the current regulations, says the consumer would have the right to opt-out of automated decision making, except for some specifically defined use cases like security, regardless of the risk posed by the functionality of that AI system. For the EU AI Act, the human oversight kicks in for a certain threshold AI system.
One of the biggest challenges that arises with AI in general is how to handle data subject rights in an AI system, particularly in the right to delete/right to be forgotten. I personally think this is a little bit of a red herring because, assuming the AI system is not used for identification, in a general system, the personal information is not in the AI model. Most of these non-identifying AI systems take training data in and develop a system of weights and relationships between data elements, but the actual data is not stored there. So thereâs not necessarily anything to delete, even though personal information may have been involved in the training process.
That said, weâve seen some manufactured cases that exist where the AI model was able to generate the exact information as it was trained on. For example, in one case, an AI system was asked to create a song about shooting a sheriff in Chicago, and it came out with the song âI Shot the Sheriffâ nearly word for word. If you ask the AI system a very limited question or inputs with very limited outputs, there is a good chance that it is going to spit out someoneâs personal data even though that personal data is not there. The AI is technically generating it, though not necessarily storing the data, but does this give rise to someoneâs right to delete? How can such data be deleted? Does this mean the AI should not ever generate it? I donât know. We have no jurisprudence to figure that out yet or guidance from the regulations.
If it is expected that people can delete their information from this type of AI system, I don’t think we even know how to do it. Youâd have to retrain the whole model because itâs not like that one set of training data can be pulled out. An AI model is the culmination of all the training data given. Knowing where your data is located is an understandable problem under the GDPR and CCPA and similar laws: we just need to know where our muffins are, but we do not necessarily know how to fix that problem. With AI, weâre basically talking about un-baking a cake here. We canât do it. We canât pull out the eggs from a cake thatâs already baked.
Given the differences in regulatory frameworks between the EU and California, what challenges might global tech companies face when operating in both regions simultaneously?
There is no comparable regulatory AI framework in the U.S. There are some limits on automated decision making in the CCPA, but as I described, that is really a different focus than the EU AI Act. In California and the U.S., we are focused on the individual and impact on the individual from making decisions on biased AI. If there will ever be an AI law in the U.S. even close to the EU AI Act, I expect this law would be broader based.
In terms of compliance, there will likely be the same challenges that exist between the CCPA and the GDPR, or any other state law for that matter. Compliance with one privacy law may not mean compliance with the other. Companies need to decide whatâs the best way for them to operate whether they just want a one-size-fits-all model and take the most stringent requirements of each law they are subject to and come up with a single scheme. However, the company may be missing out on opportunities to use AI in this case that might be legal in one jurisdiction and a problem in another, and avoid the AI altogether. Alternatively, a company could have different compliance programs and approaches for different jurisdictions, which is a pain to administer and may mean they have to spin up different instances of the AI model that are not allowed to talk to each other. Maintaining two AI systems also increases the compliance costs such as requiring two compliance customer service representatives.
Regardless of whether youâre operating in both CA and the EU simultaneously, some of the documentation that exists today under GDPR and CCPA, particularly with policies and procedures, are going to have to change in the world of AI. Even the privacy documentation will have to be updated to include the new uses of personal information with AI, some of which may be in accordance with AI regulations. While we had some use cases and previous policies and procedures, you can imagine a company with multiple AI systems and itâs possible that you can do some activity and not others with the AI systems. How do you make this analysis? None of this is addressed in any CCPA or GDPR documentation. So, the handling of personal information for training data or for actual model use cases and verifying the outputs of the model use cases are not described in any policies or procedures today. I think this is true generally for the AI but will be more required now under the risk analysis and impact assessments and transparency requirements under the EU AI Act regulations which we donât have today. We can expect that some of the regulations will probably somewhat discuss at least how to conduct either a new or updated risk analysis for AI and what documentation is needed to show your compliance because this AI is a new use and brings a whole new set of risks. I expect there will be overlap with personal information in most cases for the uses of AI.
In what ways do you anticipate the EU AI Act influencing the development and deployment of AI technologies in California, especially within the tech hub of Silicon Valley?
Thatâs a really good question. I think the Act may influence some companies to do a little course correction. This certainly happened even with GDPR: Silicon Valley companies looked at how they were using personal information and tried to justify use by having a legitimate interest. I anticipate that tech companies will try to justify the AI system by squeezing the AI use into one of the allowable buckets, even if they are seemingly trying to fit a square peg into a round hole. On the other hand, it may also encourage companies in Silicon Valley to start developing AI models now that they know the rules of the road, at least the rules of the EU. I suspect some of them may have had some great ideas for products but were afraid of the regulatory challenges, especially between the time the EU AI Act was introduced back in 2021 until it was finally approved this year.
The other way I expect that it will influence companies in SV is regarding their contracting processes. Some companies will need to amend their contracts, but I suspect that weâll see a bunch try to read in broader things than the EU AI Act was clearly meant to regulate and restrict. I still see stuff like that now almost 6 years after the GDPR and 4+ years after the CCPA. I also think that this may influence companies to do quite a bit more analysis than they may have otherwise done, including for things like bias and discrimination. While CCPA and GDPR both have restrictions on using personal information for bias or discrimination (and prohibit discriminating against someone for exercising their data subject rights), the EU AI Act brings it to a whole new level in requiring testing outcomes and robustness.
On the EU AI Actâs Enforcement, if European regulators have experience with companies trying to fit their data collection and uses under lawful purposes under the GDPR, do you think they will be more critical of how companies fit AI systems into the Actâs risk buckets? If so, will more guidance be provided?
Yes, European enforcement will slap down on risk categorization and application based on what theyâve seen with the GDPR. Traditionally, EU regulators have had smaller enforcement actions, but the bigger enforcement actions gain more attention. The EU AI Act provides for some guidance Board, but coming up with general guidance that everyone understands exactly what the Act is without some creative colorable argument is almost impossible. Unless the use is something very specific and a company is exactly on point, companies will try to fit the square peg in the round hole as we saw with GDPR. But we will have companies that would have legitimately thought they were on the right side or something, and the EU regulators will come out and say this was wrong, at least not as applied to the companyâs AI use.
On the positive side again, weâll see some companies who might have been waiting, either waiting to develop or waiting to release something with AI, who at least now have some guidance. Many start-ups just want to be given some rules because the uncertainty of regulations is hurting innovation. They may try to push the boundaries, but this is better than ignoring (or not being made aware of) any obligations they may have had before, and at least now they know where the guardrails are.
How does the EU AI Act’s cybersecurity regulations compare with the CCPA r egulations?
I think they are similar, but different. The CCPA cybersecurity regulations, which again, are still in draft form and subject to change, focus on the idea of independent cybersecurity audits on the access and use of personal information. For example, the definition of âInformation systemâ in § 7120 of the proposed regulations are the resources organized for the processing of information, including personal information. Does this include the AI system itself? Itâs unclear but § 7120 is all about completing a cybersecurity audit as a business that processes personal information that presents a significant risk.
I think the EU AI Act requirements are a bit broader, focusing on the cybersecurity aspects (and risks) of the model rather than the personal information, including using the model for malicious purposes. This requires a look at systems that may not otherwise be looked at (like where the model is running rather than where the personal information is stored), as well as things that could corrupt the model above and beyond the typical cybersecurity audit. For example, since the AI model may be constantly ârelearningâ from input data through prompts, a new threat vector may be the prompting itself, rather than the typical hacking/penetration. Again, I think in this sense the California regulations and the Act are complementary to each other but are not the same. Also consider that things like cybersecurity in AI includes malicious attempts to alter the modelâs use, behavior, and performance. Integrity of personal data is relatively easy to detect, but detecting malicious alterations from those that are just part of proper relearning may prove to be a huge challenge.
This is also an area where it is similar: While clearly different types of reviews, both the EU AI Act and the proposed CCPA cybersecurity regulations have a risk threshold. If there isnât a âsignificant riskâ to personal information in CCPA or if the AI system isnât a âhigh riskâ system in the EU AI Act, you donât have the obligations to perform cybersecurity audits. There are also likely to be different standards. In fact, the EU AI Act requires the development of some cybersecurity standards. Non-AI specific cybersecurity standards, such as in ISO/IEC 27001 and 27002, have existed for years. But these canât be used âout of the boxâ for AI. ISO/IEC 27090 will come out with some specific mitigations and controls, but these are not quite there yet. One of my colleagues on the ABA Information Security Committee is part of the US delegation for the ISO standards, and I hope to get an update actually soon on where that is at. But I do think it goes to show that if the ISO/IEC thinks that the existing standards donât quite fit, itâs not only different from what is required under the CCPA, which again focuses on personal information, but itâs going to be a challenge for those trying to implement these standards today.
What about the use of cybersecurity? How do the EU AI Act and CCPA compare?
In terms of the usage of cybersecurity, the use of defensive cybersecurity has been out for a while. On the other hand, the use of AI in offensive cybersecurity is a new beast. For example, threat actors (TAs) are creating ChatGPT variants to help generate malware. TAs are using AI to fingerprint a system to accelerate the process of trying to see what version number of an application software is used or ports opened to find potential points of vulnerability. Now there is a more automated system to get the AI to respond for different results with a particular version number to respond back to a web server. What used to take days and weeks for a hacker, now takes them minutes when employing AI for the same. This is going to be a new regime in the battle of the AIs that will both learn from each other.
If we have the battles of the AI, how would Europe respond and how would California respond?
This is an interesting question. At least in Californiaâs proposed regulations, thereâs an exception for automated decision making and nothing for AI in general, but the regulations create an exception for ADMT use for AI in cybersecurity. The EU hasnât really touched on the use of cybersecurity for these purposes. I think there’s a good chance that at least on the defensive version of cybersecurity functions may be considered high-risk or may not. I think it depends on the capability of the model. As currently written, AI used for cybersecurity that is strictly defensive like adjust firewall settings, blocks ports, and does reactive things. Under whatâs currently in the risk buckets of the EU AI Act, this very well may be low risk. However, if the AI starts hacking back to stop the offensive cybersecurity from attacking and trying to disable it, then this defensive-turned-offensive AI may probably be a high-risk category because now the AI will be affecting some other computer system and someone elseâs rights. What if the AI identifies it wrong and itâs actually a legitimate entry into the system? Thatâs where I think problems will arise: What are the chances the AI will get it wrong in identifying someone as a hacker who is not really a hacker? This harm may justify putting in this AI in the high risk back and require testing and risk analysis.
Given that CA was a leader in the U.S. pushing out a law that models the GDPR, do you anticipate that CA will lead the way with AI legislation in the U.S.? If so, how soon do you anticipate that will be and/or what will that look like? If not, why not?
Yes, I think California will try to lead the way. There are already bills proposed to regulate AI; however, the challenge will be similar to CCPAâs formulation because the legislature doesnât actually know what the AI does. California will need to do a better job at taking recommendations from industry experts at the very early stages. This makes sense given the Silicon Valley influence and the types of companies and products that will have better AI personal assistance. We are not that far off from Bicentennial Man or C-3PO!