Privacy Law

Regulatory Focus on AI Companion/Character Chatbots

By: Shivangi Yadav

Artificial intelligence (AI) companion and character chatbots have become increasingly popular, particularly among children and teenagers. Some AI companion chatbots are developed to offer emotional and mental support through life-like responses to a human's prompts, while some character chatbots provide customizable anthropomorphic virtual characters for role-playing and storytelling. Although companion and character chatbots differ in their design, both are programmed to simulate human-like conversations, encourage users to form emotional bonds with the chatbot, and deliver a personalized experience. Unlike commercial chatbots, which are programmed to offer solutions, provide information, and keep a neutral tone in their responses, AI companion/character chatbots are programmed so that they seem to offer companionship to their users.

AI companion/character chatbots ("AI chatbots") are rapidly evolving and constantly processing data from their interactions with humans. However, these interactions may have a negative impact on vulnerable populations, such as children, who are more prone to forming relationships with these anthropomorphic chatbots and relying on them for emotional and mental support.

FTC Inquiry into AI Chatbots

In April 2025, a 16-year-old boy in California died by suicide after discussing his suicide plans with OpenAI's ChatGPT. He began using ChatGPT as a homework helper; within a few months, however, ChatGPT became his confidant. ChatGPT's validating and encouraging responses allegedly led him to rely on it for emotional and mental support while isolating himself from his family, and, according to the complaint, ChatGPT validated his desire to end his life throughout their conversations. His family subsequently filed a lawsuit against OpenAI on August 26, 2025. This incident prompted the Federal Trade Commission (FTC) to launch an inquiry into AI chatbots acting as companions, with a particular interest in the effects of such chatbots on children and how companies are mitigating negative impacts.

In the course of its inquiry, the FTC issued an order, under Section 6(b) of the FTC Act,[i] to seven companies offering generative AI chatbots directly to consumers: Alphabet (parent company of Google), Character Technologies, Instagram, Meta (formerly Facebook), OpenAI, Snap, and xAI (the AI division of X, formerly Twitter). In its order, the FTC seeks information about how those seven companies monetize user engagement, process user inputs and generate outputs, share data with third parties, evaluate negative implications before and after deployment of the chatbots, and mitigate negative impacts of AI chatbots, among other things. The companies' responses are currently pending.

Regulation at the Federal Level

1. Congressional Hearing Regarding the Harm of AI Chatbots

On September 16, 2025, the U.S. Senate Committee on the Judiciary held a hearing titled "Examining the Harm of AI Chatbots." Senator Hawley (R-MO), Chairman of the Subcommittee on Crime and Counterterrorism, hosted the hearing, with Senator Durbin (D-IL) as Ranking Member. The witnesses who testified included three parents and two industry experts. Their testimony illustrated how AI chatbots have the potential to exploit children's vulnerabilities: at times promoting sexually explicit content to children, potentially encouraging self-harm, mutilation, and suicidal tendencies, misrepresenting themselves as psychotherapists, emotionally manipulating children, and generating validating content to maximize online engagement. The witnesses called upon the Senators to establish robust age-limitation regulations, lay down AI safety guidelines for children, parents, and teachers, implement liability frameworks, and require companies to conduct comprehensive evaluation and testing of AI products before launching them into the market. The hearing ended with Senator Hawley acknowledging the need for regulation of AI companies and their products and calling upon his fellow lawmakers to take action.

On November 18, 2025, at 11 am PT (2 pm ET), the House Energy and Commerce Subcommittee on Oversight and Investigations will hold a hearing on "Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots."

2. Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act

On October 28, 2025, Senator Hawley and Senator Blumenthal (D-CT), among other Senators, introduced a bill called the GUARD Act. During the press conference introducing the bill, Senator Hawley stated that "No AI chatbot companion should be targeted at children who are younger than 18 years of age."

The GUARD Act defines 'minor' as any individual under the age of 18, extending protection beyond the minors under the age of 13 covered by the Children's Online Privacy Protection Act (COPPA).

The key provisions of the bill would:

  1. Require covered entities to establish age-verification mechanisms for existing and new accounts used to access their AI chatbots;
  2. Require AI chatbots to regularly disclose their non-human status and prohibit them from claiming to be a licensed professional, such as a therapist, lawyer, physician, or other professional;
  3. Prohibit minors from using and accessing AI companions; and
  4. Criminalize knowingly making available to minors AI chatbots that solicit them to engage in sexually explicit conduct or that promote suicide, self-harm, or violence.

The bill also provides a public right of action, giving the U.S. Attorney General and state attorneys general the power to investigate violations of its provisions and bring civil actions against alleged transgressors. While the bill aims to establish uniform regulation across the United States, it ensures that state regulations providing similar protections are not affected.

3. AI Warnings and Resources for Education (AWARE) Act

Status: As of September 2025, referred to the House Energy and Commerce Committee.

In September 2025, Rep. Erin Houchin (IN-09) and Rep. Jake Auchincloss (MA-04) introduced the AWARE Act in the House of Representatives. The bill requires the FTC to develop and make available educational resources to parents, minors, and educators to ensure the safe and responsible use of AI by minors. Such resources would include material on how to identify safe and unsafe AI chatbot use, on privacy and data collection practices, and on best practices for supervising minors who use AI chatbots. The AWARE Act's primary benefits are raising parental awareness about the use of AI chatbots and creating resources to promote minors' safe use of them.

Regulatory Actions on AI Chatbots in California

1. SB 243 (Padilla) Companion Chatbots (Effective: January 1, 2026)

Status: Signed by Governor Newsom on October 13, 2025.

The signing of SB 243 adds California to a growing number of states adopting AI chatbot regulations. Among other provisions, the bill requires operators to protect users who are known minors from the potential adverse effects of AI chatbots. Chatbot operators, defined as anyone who makes a chatbot platform available to users in California, must by default notify minors that the chatbot's responses are artificially generated and not produced by a human, and must remind minors at least every three hours of continuous use to take a break.

Chatbot operators are also required to implement protocols to prevent the production of suicidal ideation, self-harm, or suicide content, and to institute reasonable measures to prevent a chatbot from producing sexual content to a minor or directly stating that a minor should engage in sexually explicit conduct. Beginning July 1, 2027, chatbot operators must annually report those protocols and the number of instances of user suicidal ideation to the Office of Suicide Prevention. The bill also provides a private right of action against a chatbot operator for injuries in fact resulting from the operator's violations of the bill's requirements.

2. AB 1064 (Bauer-Kahan) Leading Ethical AI Development (LEAD) for Kids Act

Status: Vetoed by Governor Newsom on October 13, 2025.

The LEAD for Kids Act was designed to establish guardrails for AI with respect to children. The bill aimed to restrict children's use of AI chatbots that present a risk of encouraging self-harm or suicidal ideation, offering mental health therapy, encouraging illegal activities, or engaging in sexually explicit content, among other things. The bill provided for both a public and a private right of action against operators violating its provisions. Governor Newsom vetoed the bill because it "imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors."

Conclusion

While AI chatbots offer numerous benefits, a growing body of anecdotal and science-based evidence (e.g., a report by Common Sense Media) suggests that they pose a variety of risks to minors, such as exposure to inappropriate sexual content or potentially harmful advice. Cognizant of those challenges, California now joins several other states that passed regulations in 2025 establishing safeguards for the use of AI chatbots. To highlight a few: New York passed a law requiring AI companions to include protocols to detect and address suicidal ideation and self-harm, along with regular notifications to users of their non-human status, and Utah passed HB 452, which requires, among other things, suppliers of 'mental health chatbots' to clearly disclose that the user is interacting with AI and not a human.


[i] FTC enforcement under Section 6(b) is an investigative tool used to verify whether companies are in compliance with laws enforced by the FTC.
