by: Max Moyer
As Artificial Intelligence (AI) becomes omnipresent, reshaping business practices across sectors such as healthcare, finance, and communication, the interplay between AI and privacy law takes on heightened significance, demanding careful consideration and prompt adaptation. While many hail the public release of generative AI as a generational shift in technology, others fear the perils that come with it.
The White House AI Bill of Rights
The rise in businesses seeking to harness the power of AI is already accelerating the push for federal data privacy laws. At an AI forum, Senate Majority Leader Charles E. Schumer (D-N.Y.) said he believes companies should be held to a duty-of-care standard requiring them to take steps to mitigate risk. Earlier in the year, the White House released its Blueprint for an AI Bill of Rights. The blueprint highlights five core principles the administration believes must guide the development and use of automated systems: protection from unsafe or ineffective systems; protection from algorithmic discrimination through equitable design and use; built-in protections against abusive data practices; notice and explanation of the use of automated systems; and human alternatives. The data privacy principle hinges on developers and deployers of automated systems seeking informed consent. Further, the most sensitive data should be used only when necessary and should be protected by ethical review and use prohibitions.
California Consumer Privacy Act: Proposed Automated Decisionmaking Regulations
On November 27, 2023, the California Privacy Protection Agency (CPPA) announced a proposed framework for regulating “automated decisionmaking [sic] technology.” The proposal would cover any system that “processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking. Automated decisionmaking technology includes profiling.” The proposal would also require businesses to provide consumers with notice of their right to opt out, along with a plain-language explanation of the purpose behind the business’s use of AI systems.
European Union AI Act
Meanwhile, the European Union (EU) has been negotiating its AI Act. The bill proposes regulating AI systems based on their capacity to cause harm, sorting them as currently written into four categories of risk: unacceptable, high, limited, and minimal/none. The EU entered the final phase of the legislative process as negotiations among the EU Commission, Council, and Parliament continued, focusing on foundation models, the sanctions regime, access to source code, governance, and ultimately entry into force, with final discussions expected to take place on December 1, 2023. The act has also been discussed as a catalyst for updating the GDPR. On December 8, 2023, EU officials reached a historic deal on the AI Act.
As the use of AI rises, calls for regulation of automated systems grow, and with them comes a renewed focus on data privacy. Governments worldwide are seeking to strike the right balance between harnessing the benefits of AI and safeguarding individual privacy.