
California Establishes New AI Guardrails for Minors while Congress Proposes AI-liability Legislation

Artificial Intelligence (AI) / November 04, 2025

California Governor Gavin Newsom has taken final action on two key bills governing interactions between artificial intelligence (AI) chatbots and minors, vetoing one measure while signing another that sets specific compliance requirements for developers.

Governor Newsom rejected A.B. 1064, which would have barred developers from making chatbots available to minors unless they could guarantee the tools would never discuss suicide, self-harm, or disordered eating. The Governor warned that the bill’s sweeping restrictions could amount to a total ban on chatbot use by young people.

Instead, Governor Newsom approved S.B. 243, which requires chatbot makers to build safeguards that prevent discussions about suicide or self-harm, direct users to crisis services when needed, and stop the generation of sexually explicit content in chats with minors. Chatbots must also clearly disclose that they are AI, including reminders every three hours during interactions with children.

These California actions occur amidst increasing national scrutiny regarding AI safety and its impact on children. The Federal Trade Commission (FTC) recently launched an inquiry into how chatbots interact with minors. Meanwhile, bipartisan lawmakers in Congress have proposed federal legislation to hold AI developers liable for harm.

The AI LEAD Bill, short for Aligning Incentives for Leadership, Excellence, and Advancement in Development, would establish a federal product liability framework for AI systems, recognizing that these products have caused harm. Its stated purpose is to incentivize safety, provide legal certainty, and enhance U.S. competitiveness. Under the proposed bill, developers would be liable if they fail to exercise reasonable care in design—which encompasses training, testing, auditing, and fine-tuning—or fail to provide adequate warnings.

Under the bill, deployers of AI technology would share liability if they substantially modify or intentionally misuse the product, or if the developer is unreachable. The bill establishes a federal cause of action, prohibits unreasonable limitations on liability in contracts, and requires foreign developers to register a U.S. service agent prior to deployment.

Click here to read California S.B. 243.

Click here to read Congress’s AI LEAD Bill.
