OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort appears to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which would set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models, so long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely apply to America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
"We support approaches like this because they address what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and large, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to these extreme outcomes, that would also count as a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm wasn't intentional and the lab published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, can be held liable for these kinds of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message consistent with the Trump administration's crackdown on state AI safety laws, claiming it's necessary to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This is also in line with the broader view in Silicon Valley in recent years, which has often argued that it is paramount for AI legislation not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal systems."
"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.
Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.

