Regulation
New York Challenges Trump with the Country's Toughest AI Safety Laws
New York is enacting the country’s toughest AI safety rules despite federal pushback, setting a state-level standard for model governance.

The state's new 72-hour incident reporting rule and chatbot requirements directly challenge federal efforts to loosen regulation. After Google and Character.AI settled a lawsuit over the suicide of 14-year-old Sewell Setzer III, New York’s AI companion safety law took effect, requiring chatbots to detect suicidal thoughts. Megan Garcia, Setzer’s mother, campaigned for the law after her son died in February 2024 following his use of a Character.AI chatbot. Together, these laws give New York a regulatory approach that contrasts sharply with the Trump administration’s emphasis on AI innovation.

Governor Kathy Hochul signed the RAISE Act on December 19, 2025, eight days after President Trump issued an executive order to block state AI regulations. The law, set to take full effect on January 1, 2027, requires frontier AI developers to report safety incidents within 72 hours, compared with California’s 15-day window, and establishes an oversight office in the Department of Financial Services.

"New York now has the strongest AI transparency law in the country," said Assemblymember Alex Bores (D-Manhattan), who sponsored the legislation. He also noted resistance from several tech industry stakeholders during the legislative process.

The RAISE Act defines frontier models as those trained with more than 10^26 floating-point operations, a threshold that currently covers models like OpenAI’s GPT-5 and Anthropic’s Claude series. Developers must publish detailed safety protocols and submit to state oversight, facing penalties of up to $1 million per violation. (A rough sketch of how these thresholds and deadlines translate into compliance checks appears below.)

The companion chatbot rules, passed separately in May 2025, diverge sharply from federal policy and create potential legal tension. Legal experts at Nelson Mullins say operators must now "implement safety measures to detect and address users' expression of suicidal ideation" and "regularly disclose to users that they are not communicating with a human." Noncompliance could expose chatbot companies to state enforcement, and companies that follow only federal policy may still face lawsuits or penalties in New York.

Character.AI responded to the Setzer settlement by saying it already uses "pop-ups that direct users to the National Suicide Prevention Lifeline if they discuss self-harm" and includes disclaimers that "the AI is not a real person." The company did not say whether it will comply with New York’s 72-hour reporting rule before the 2027 deadline.

The tech industry's response has been fractured. Tech:NYC described aligning the RAISE Act with California’s AI law as a positive step toward reducing fragmented policies and clarifying expectations for companies operating across multiple states. The split points to a broader asymmetry: large companies like Google and Microsoft have the resources to manage differing state rules, while startups building on open models may struggle with the reporting requirements, since the 72-hour window effectively demands continuous incident monitoring.

Governor Hochul has signaled that more legislation is coming. In her January 5 State of the State address, she introduced additional AI safety proposals, though details remain pending. New York is pointing to high-profile safety cases like Setzer’s to justify its approach despite federal opposition.

A legal dispute is possible. Trump’s executive order asserts federal control over AI regulation, but states have previously exercised authority to protect consumers and children.
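To make the article's numbers concrete, here is a minimal Python sketch of the two mechanical pieces the RAISE Act describes: the 10^26 FLOP training-compute threshold that defines a frontier model, and the 72-hour reporting window (versus California's 15 days). The names (`is_frontier_model`, `reporting_deadline`, `FRONTIER_FLOP_THRESHOLD`) are illustrative, not drawn from the statute.

```python
from datetime import datetime, timedelta, timezone

# Threshold cited in the article: models trained with more than
# 10^26 floating-point operations count as "frontier" models.
FRONTIER_FLOP_THRESHOLD = 1e26

# Reporting windows cited in the article: New York's 72 hours
# versus California's 15 days.
NY_REPORTING_WINDOW = timedelta(hours=72)
CA_REPORTING_WINDOW = timedelta(days=15)

def is_frontier_model(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the RAISE Act threshold."""
    return training_flops > FRONTIER_FLOP_THRESHOLD

def reporting_deadline(incident_time: datetime, window: timedelta) -> datetime:
    """Compute the latest time a safety incident must be reported."""
    return incident_time + window

if __name__ == "__main__":
    incident = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)
    print(is_frontier_model(3e26))                             # True: above 10^26 FLOPs
    print(reporting_deadline(incident, NY_REPORTING_WINDOW))   # 2027-03-04 09:00 UTC
    print(reporting_deadline(incident, CA_REPORTING_WINDOW))   # 2027-03-16 09:00 UTC
```

The comparison makes the compliance gap visible: the same incident that gives a California-only operator two weeks to file gives a New York operator three days, which is why continuous monitoring matters for smaller teams.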
New York is framing these laws as safety measures rather than restrictions on innovation. Frontier AI companies operating in New York must implement 72-hour incident reporting systems by January 2027 or risk state enforcement actions. Companion AI services must add suicide-detection and human-disclosure features immediately to remain compliant (a minimal sketch of such a guardrail appears below). Startups may need to partner with larger companies to meet these obligations. Content creators using AI companions in their work should note the new disclosure requirements, since noncompliance can carry legal penalties. The gap between federal and state law leaves businesses operating under real uncertainty. The Setzer settlement leaves unresolved questions about AI companion safety, and Governor Kathy Hochul's administration is preparing new legislation to expand age verification and add more child-focused measures. With the 72-hour reporting deadline approaching, a legal test looms: the first company caught by the rule must decide whether to comply or to challenge it in court.
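For the companion-chatbot side, the sketch below shows, under loose assumptions, what a minimal guardrail covering the two quoted obligations might look like: screening user messages for expressions of suicidal ideation and regularly disclosing that the user is talking to an AI. Everything here is hypothetical; a real operator would rely on trained classifiers and clinical review rather than a keyword regex, and the law does not specify the disclosure cadence assumed in `DISCLOSURE_EVERY_N_TURNS`.

```python
import re

# Hypothetical keyword screen; production systems would use trained
# classifiers and human review, not a simple pattern match.
SELF_HARM_PATTERN = re.compile(
    r"\b(suicide|suicidal|kill myself|end my life|self[- ]harm)\b",
    re.IGNORECASE,
)

CRISIS_MESSAGE = (
    "If you are thinking about self-harm, please reach out to the "
    "988 Suicide & Crisis Lifeline (call or text 988 in the U.S.)."
)
DISCLOSURE = "Reminder: you are chatting with an AI, not a human."

# Assumed cadence; the law requires "regular" disclosure without defining it.
DISCLOSURE_EVERY_N_TURNS = 10

def guardrail(user_message: str, turn: int) -> list[str]:
    """Return any compliance notices to show before the chatbot's reply."""
    notices = []
    if SELF_HARM_PATTERN.search(user_message):
        notices.append(CRISIS_MESSAGE)
    if turn % DISCLOSURE_EVERY_N_TURNS == 0:
        notices.append(DISCLOSURE)
    return notices

if __name__ == "__main__":
    # Turn 10 triggers both the crisis resource and the periodic disclosure.
    print(guardrail("I've been thinking about suicide", turn=10))
```

A production version would also log each triggered notice, since under the RAISE Act's reporting regime those records could feed the state's 72-hour incident pipeline.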


