Business Law
Summary of developments related to Artificial Intelligence, taken from Chapter 3A of the July 2025 update to Internet Law and Practice in California – Courtesy of CEB
April 2025
AI Copyrightability
In Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc. (D Del, Feb. 11, 2025, No. 1:20-cv-613-SB) 2025 US Dist Lexis 24296, the court revised its prior opinion and granted Thomson Reuters's motions for partial summary judgment on direct copyright infringement as to 2,243 West headnotes and on fair use. Ross Intelligence used West's headnotes to train an AI legal research system intended to compete with Westlaw. After comparing Ross's AI training data with West's headnotes, the court held as a matter of law that substantial similarity was so obvious that no reasonable jury could find otherwise. 2025 US Dist Lexis 24296, at *18. On fair use, Ross, like other AI developers, converted West's headnotes into numerical data about the relationships among legal words to feed into its AI, and the output did not resemble the headnotes. Because its copying occurred at an intermediate step, Ross argued that its use was transformative, which is a factor in the fair use analysis. The court, however, distinguished Ross's use from other intermediate-copying cases: in those cases, the user copied computer code at an intermediate step to reverse engineer the unprotectable elements of computer programs, so intermediate copying was necessary to achieve the copier's ultimate purpose. 2025 US Dist Lexis 24296, at *20–25. Here, by contrast, Ross's copying was not necessary to achieve its new purpose. 2025 US Dist Lexis 24296, at *25. After rejecting Ross's arguments, the court found that the most important fair use factors weighed in Thomson Reuters's favor; it was clear that Ross intended to compete commercially with Westlaw. 2025 US Dist Lexis 24296, at *27–28. Accordingly, the court granted Thomson Reuters's motion for partial summary judgment on fair use. The case will next go to a jury to determine whether Ross infringed the remaining West headnotes and the Key Number System.
For the remaining headnotes, the only factual issue left on liability is whether some of those copyrights have expired or were untimely created. 2025 US Dist Lexis 24296, at *30. See §3A.8.
In Concord Music Group v Anthropic PBC (ND Cal, Mar. 25, 2025, No. 5:24-cv-03811-EKL) 2025 US Dist Lexis 55639, several music publishers alleged that Anthropic's generative AI tool, Claude, relies on large language models (LLMs) that were built using millions of musical compositions, including the publishers' copyrighted works. The music publishers' complaint, filed in the federal district court in Nashville, Tennessee and later transferred to the Northern District of California, claims that Claude can be prompted to produce nearly identical lyrics to songs such as Gloria Gaynor's "I Will Survive" and the Rolling Stones' "You Can't Always Get What You Want." On March 25, 2025, Judge Lee granted Anthropic's motions to dismiss the music publishers' claims for contributory infringement, vicarious infringement, and DMCA violations. The publishers' only remaining claims are for direct copyright infringement, but they will have a chance to amend their complaint. See §3A.8.
In Doe v Github, Inc. (ND Cal 2023) 672 F Supp 3d 837, the plaintiffs' claims include that defendants intentionally designed their generative AI program, Copilot, to remove copyright management information from any licensed code it outputs, in violation of §1202(b) of the Digital Millennium Copyright Act. On January 22, 2024, due to the Ninth Circuit's identicality requirement under the DMCA, and because the plaintiffs alleged that Copilot's output is often a modification of their original works, the court dismissed plaintiffs' DMCA claims. Doe v Github, Inc. (ND Cal, Jan. 22, 2024, No. 22-cv-06823-JST) 2024 US Dist Lexis 11068. On September 27, 2024, the district court granted plaintiffs' motion to certify an interlocutory appeal on the question of whether Sections 1202(b)(1) and (b)(3) of the DMCA impose an identicality requirement. Doe v Github, Inc. (ND Cal, Sept. 27, 2024, No. 22-cv-06823-JST) 2024 US Dist Lexis 175951. See §3A.8.
In Thaler v Perlmutter (DC Cir, Mar. 18, 2025, No. 23-5233) 2025 US App Lexis 6294, the D.C. Circuit Court of Appeals held that AI cannot be an author for purposes of U.S. copyright registration. The plaintiff in that case, Stephen Thaler, owns a computer system that he claims generated a piece of visual art of its own accord. He sought to register the work for a copyright, listing the computer system as the author and explaining that the copyright should transfer to him as the owner of the machine. The Copyright Office denied his application on the ground that the work lacked human authorship. In Thaler v Perlmutter (D DC 2023) 687 F Supp 3d 467, the court granted the cross-motion for summary judgment filed by the defendants (the United States Copyright Office and its director, Shira Perlmutter). The D.C. Circuit has now affirmed. See §3A.9.
Released in January 2025, the second part of the Copyright Office's Report on Copyright and Artificial Intelligence addresses the copyrightability of works created using generative AI. The Report affirms the fundamental principle of Thaler v Perlmutter, supra: that copyright protection requires human authorship. The Report explores three kinds of human contribution to AI-generated outputs and outlines the following principles: (1) providing prompts to an AI system does not make the user the author of the resulting output; (2) when users input their own copyrightable work into an AI system and the work remains perceptible in the output, they may claim copyright in that portion of the output; and (3) users who substantially modify or arrange AI-generated material may be able to claim copyright in the modified material or arrangement as a whole, even if the individual AI-generated elements are not protectable. See §3A.9.
AI Legislation
Beginning January 1, 2026, under AB 2013 (2024), developers of generative artificial intelligence systems must publicly post on their websites a "high-level summary" of the data used to train their systems. CC §3111. To comply with the law, the developer's high-level summary must contain at least the 12 specific pieces of information outlined in CC §3111(a)(1)–(12), including descriptions of the training data points, copyrights, licenses, synthetic data, and any personal information included in the datasets.
Also beginning January 1, 2026, the California AI Transparency Act (Bus & P C §§22757.1–22757.6) applies to "covered providers," meaning any person that creates a generative AI system capable of generating text, images, video, or audio content, has over 1,000,000 monthly users, and is publicly accessible within California. A covered provider must offer its users a free "AI detection tool" that allows users to detect whether content was generated or manipulated by the covered provider's generative AI system. Additionally, a covered provider must give the users of its generative AI system the ability to stamp their creations with an optional mark or "manifest disclosure" that clearly and conspicuously identifies AI-generated content, as well as a mandatory watermark or "latent disclosure" with metadata that is detectable by the provider's AI detection tool. A covered provider that violates the Act is liable for a $5,000 civil penalty per violation, with each day of violation deemed a separate violation, plus reasonable attorney's fees and costs.
On September 17, 2024, California Governor Gavin Newsom signed three landmark bills to crack down on election-related deepfakes ahead of the 2024 general election. Two of the three bills, AB 2839 (2024) and AB 2655 (2024), were enjoined by the Eastern District of California. Kohls v Bonta (ED Cal, Oct. 2, 2024, No. 2:24-cv-02527 JAM-CKD) 2024 US Dist Lexis 179933. The third bill, AB 2355 (2024), went into effect in September 2024 and mandates disclaimers for AI-generated political advertisements created by political committees, as defined in the Govt C. See §3A.19.