New York Governor Signs RAISE Act to Regulate AI Safety

What the Law Does
New York Governor Kathy Hochul has signed the Responsible AI Safety and Education (RAISE) Act, positioning New York at the forefront of state-level AI regulation in the US. The law requires major AI developers to disclose the safety measures they use and, in the event of an accidental death or serious injury involving their systems, to report the incident to the government within 72 hours.
A new office within the New York State Department of Financial Services (DFS) will oversee the new regulations. The office will be responsible not only for monitoring compliance but also for issuing regulations, assessing fees, and producing annual reports on AI safety practices.
Background and Legislative Negotiations
The RAISE Act passed the New York State Legislature earlier in the year, but the final wording was the product of negotiations between the legislature and the governor’s office. Hochul initially proposed amendments that would have aligned the law more closely with California’s existing state AI safety law. Ultimately, a compromise preserved much of the original bill while incorporating provisions meant to balance innovation with oversight.
One of the key negotiated points is the developers' obligation to report incidents within 72 hours, a markedly tighter deadline than the 15-day reporting window California allows for similar situations. The law also requires companies to disclose potential threats rather than waiting for definitive proof of harm.
What Counts as a Safety Incident
The RAISE Act defines “critical harm” as incidents resulting in death or serious injury, or in combined property and financial damage of at least one billion dollars, caused or materially enabled by AI systems. That sets a high threshold, but it keeps developers answerable for AI failures that are both costly and seriously harmful.
The law authorizes enforcement actions, including civil penalties, against companies that fail to file proper safety reports or submit false ones. A repeat violation can carry a penalty of $3 million.
Why This Matters
With federal AI regulation stalled, New York and California are the first states to draft rules for governing advanced AI systems. Hochul called the RAISE Act a "nation-leading standard" for safety and transparency. Supporters argue that the law protects the public from unexpected harms while giving developers clearer expectations in return.
Critics of the law, including some industry associations and tech lobbyists, warn that strict state laws could produce a patchwork of rules that makes compliance harder for companies operating nationally and internationally. Others worry that excessive regulation could stifle innovation or simply drive firms to states with friendlier regulatory environments.
What’s Next
The RAISE Act will take effect gradually, giving companies time to stand up the required safety reporting systems and management processes. The DFS office established by the law will receive companies' safety protocol submissions and help shape future regulations. As New York begins enforcing the law, observers expect the debate over how to govern AI technologies responsibly to continue at both the state and federal levels.