By: Mengting Xu
On September 15, 2022, the California Age-Appropriate Design Code Act (CAADCA) was signed into law. The CAADCA recognizes that “children need special safeguard[s] and care in all aspects of their lives.” Importantly, the CAADCA requires online service providers to offer a high level of privacy by design and by default to children under the age of 18. The law is set to take effect July 1, 2024.
On December 14, 2022, NetChoice, a trade association of online businesses, sued California Attorney General Rob Bonta, challenging the constitutionality of the CAADCA.
NetChoice based its suit on several grounds, including a claim that the CAADCA violates the First Amendment because the Act is a content-based regulation and compels companies to serve as “roving censors of speech.” NetChoice also argues that the CAADCA is preempted by the Children’s Online Privacy Protection Act (COPPA) and Section 230 of the Communications Decency Act. The suit further claims that the Act’s requirement of age estimation “with a reasonable level of certainty” incentivizes businesses to collect more data, conflicting with other state privacy laws, such as biometric privacy laws.
NetChoice’s First Amendment Argument
Vague and Overbroad
Under the CAADCA, covered businesses must configure all default privacy settings to a high level of privacy if their published content is “likely to be accessed by children.” NetChoice asserts that the definition of “likely to be accessed by children” is vague and overbroad, and would impose an undue burden on a sweeping majority of online businesses, including, for example, all major news outlets and all sports league websites.
In addition, NetChoice claims that the CAADCA fails to define key terms, such as content that is “materially detrimental” to the well-being of a child, age estimation “with a reasonable level of certainty,” and “harmful” or “potentially harmful” content. Lacking clear guidance, NetChoice argues, online businesses will self-censor to avoid “draconian penalties.” Such self-censorship undermines the values of the First Amendment and chills free speech.
Under the CAADCA, online services are also required to complete a Data Protection Impact Assessment (DPIA) before offering any new online service, product, or feature to the public. NetChoice contends that the DPIA requirement amounts to a prior restraint because it is imposed on businesses before they publish online.
While acknowledging that the well-being of children is an important government interest, NetChoice nonetheless argues that the CAADCA cannot pass constitutional muster because it “regulates far beyond privacy,” “is not confined to children,” and “is unnecessary to achieve” the privacy goals.
NetChoice’s Preemption Argument
NetChoice states that the CAADCA is preempted by COPPA because the Act is inconsistent with COPPA’s scope and substantive obligations. First, COPPA regulates online services directed to children under the age of 13, whereas the CAADCA applies to any service, product, or feature that is likely to be accessed by children under the age of 18. Second, in contrast to COPPA’s “notice and consent” regime for children’s privacy, the CAADCA imposes obligations such as creating DPIAs, configuring a high level of default privacy settings, and estimating user age, none of which are covered by COPPA.
Section 230 shields online service providers from liability for hosting and moderating user content. NetChoice contends that its members, who fall within the scope of “interactive computer services,” are protected by Section 230 and may review, moderate, or promote third-party content, decide whether that content violates their content-moderation policies, and enforce those policies at their own discretion.
By shifting the power to monitor and enforce published policies from online businesses to the California Attorney General, NetChoice claims, the CAADCA in effect restricts the businesses’ editorial discretion protected by Section 230.
In separate lawsuits, NetChoice challenged the constitutionality of social media laws in Texas and Florida that restrict platforms from deplatforming political candidates or moderating content based on “viewpoint,” arguing that both laws violate the First Amendment. Both cases are pending petitions for certiorari before the United States Supreme Court.
In February 2023, the Supreme Court will weigh in on the scope of Section 230 for the first time when it hears Gonzalez v. Google and Twitter v. Taamneh. In Gonzalez, the Court will consider whether algorithm-generated recommendations should be deemed a platform’s own content and therefore fall outside the protection of Section 230.
The Supreme Court’s decisions in these cases will likely have profound implications for free speech and Section 230, and “could fundamentally change the future of the modern internet.”