Regulates AI companion chatbots. Bots must clearly disclose that they are not human at the start of an interaction, at each new session, and every three hours during ongoing use. For minors, bots must block sexual or suggestive content and avoid engagement tactics that foster emotional dependence. All bots must detect signs of self-harm, stop generating harmful content, and refer users to crisis services. Companies must publish their safety protocols and annual crisis-referral counts. Violations are enforceable under consumer protection law. Takes effect in 2027.
Vote Yes on this bill if you want AI companion chatbots to clearly state they are not human, block sexual or suggestive content for minors, avoid manipulative engagement tactics, detect self-harm talk and respond with crisis resources, publish their safety practices and referral counts, and be held accountable under consumer protection law.
Organizations that support this bill may include child safety and mental health advocates, suicide prevention groups, parent and educator associations, and consumer protection organizations that want clearer labels, youth protections, and crisis-response standards for AI companion apps.
Vote No on this bill if you want to avoid new state rules and reporting requirements for AI companion apps, limit government involvement in chatbot design, reduce compliance costs and liability risks, and avoid restrictions that could slow product development or innovation.
Organizations that oppose this bill may include some AI and tech industry groups, startup associations, and digital rights advocates concerned about compliance costs, design mandates, free expression, and broad liability under consumer protection law.