This bill requires developers of frontier AI systems to publish detailed public safety and child protection plans explaining how they will identify and address major risks before releasing new models. Developers must regularly publish summaries of their risk assessments and disclose their risk management steps, including third-party evaluations. The bill prohibits false or misleading statements about AI risks, mandates prompt reporting of safety incidents to a new Office of Artificial Intelligence Policy, and provides whistleblower protections for employees who report safety concerns. Penalties are established for companies that violate these requirements.