
How California’s AI safety bill could impact the tech industry

Opponents claim it’s too vague, and bad for innovation. Supporters say it’s sorely needed.

A controversial California bill that would set a new bar for AI regulation in the United States has cruised its way to Governor Gavin Newsom’s desk, despite heavy opposition from some corners of the tech industry.

The bill passed both houses of California's legislature by wide margins and now awaits a signature or veto from the governor by the end of the month. If signed, it would mandate that developers of certain large AI models—those above set compute and cost thresholds—adopt a regimen of safety measures, like building in a kill switch and undergoing annual audits, along with other rules overseen by a new regulatory board established by the bill.

Despite the lack of pushback within the state's legislature, there is plenty of pressure on Newsom to veto the bill. Beyond Big Tech opponents—which have included Meta, Google, OpenAI, and Y Combinator—a host of Democratic congresspeople from across the state have voiced opposition, most prominently former House Speaker Nancy Pelosi.

Those who have voiced support for the bill include so-called AI godfathers Geoffrey Hinton and Yoshua Bengio, Elon Musk, and—in a cautious, qualified statement in favor—Anthropic. Dozens of current and former employees of big AI labs also signed a letter backing the bill this week, and a poll from the AI Policy Institute showed widespread voter support.

Opponents of the bill argue that it places undue liability on model developers rather than end users, which they say could hamper open-source innovation in particular. They also argue the text of the bill is too vague and could be interpreted in a manner that’s overly restrictive.

Senator Scott Wiener, the bill’s author, has said that the text of the bill makes specific accommodations for open-source developers. The bill has also undergone amendments that have watered down some of its stricter penalties.

Danny Manimbo, a principal who leads the AI practice at compliance certification firm Schellman, said the bill would be a major step toward holding companies to new AI safety standard frameworks that have been developed in the past year or so.

“This is huge,” Manimbo said, noting accountabilities for model developers, transparency requirements, and third-party audits and adding that those provisions are “great for firms like Schellman.”

“In the past 12 months, we’ve seen some frameworks come out for how to demonstrate what is responsible and trustworthy use of AI.”

Question marks remain

But others claim that the language makes the ultimate impact of the bill unclear. Manasi Vartak, chief AI architect at data infrastructure company Cloudera, said more research is needed to determine exactly what the guardrails mandated by the bill should look like.


If the bill is signed into law, Cloudera's legal team will be examining what cloud companies like it need to do to comply, she said.

“One question that’s going to come up is, if we let customers run open-source models, do we become a model provider in that case? We might have added some software on the models to make them run better,” Vartak told Tech Brew. “Even people who provide infrastructure to run these models—are they suddenly liable in some way that’s unexpected? Our legal team is going to have to look into that and see what that means.”

Startup effects

The bill aims to set a high bar for the types of models covered—they must be trained using more than 10^26 floating-point operations of compute and cost more than $100 million to develop. That threshold is also up for reevaluation in 2027.
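For a rough sense of scale, one common way to estimate a transformer's training compute is the ~6 × parameters × tokens heuristic. The sketch below uses that approximation—which is an industry rule of thumb, not anything in the bill's text—to check whether a hypothetical training run would cross the 10^26-operation threshold.

```python
# Back-of-envelope check against the bill's 10^26 floating-point-operation
# threshold. The ~6 FLOPs per parameter per training token estimate is a
# common heuristic for transformer training compute, assumed here for
# illustration; it is not part of the bill.

THRESHOLD_FLOPS = 1e26


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute: ~6 * parameters * tokens."""
    return 6 * parameters * tokens


def crosses_threshold(parameters: float, tokens: float) -> bool:
    """True if the estimated run meets or exceeds the bill's compute bar."""
    return estimated_training_flops(parameters, tokens) >= THRESHOLD_FLOPS


# Example: a 70-billion-parameter model trained on 15 trillion tokens
# lands around 6.3e24 operations, well under the 1e26 bar.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e}", crosses_threshold(70e9, 15e12))
```

By this rough math, today's large open-weight models fall an order of magnitude or two below the threshold, which is consistent with the bill's stated aim of covering only frontier-scale training runs.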

Still, Arun Subramaniyan, CEO of Articul8 AI, an enterprise generative AI startup spun off from Intel, worries that the bill might deter companies from releasing large open-source models, thus harming the startups that rely upon them. He did, however, call California’s approach more “balanced” than other efforts like Europe’s AI Act.

“Making sure that companies that don’t necessarily have the hundreds of millions of dollars to go build a general purpose foundation model can take advantage of things like [Meta’s] Llama [model], that is open-sourced, and that companies like Meta don’t get discouraged [is important],” Subramaniyan said.

Despite a letter in opposition from a slew of Y Combinator founders, not all startup founders oppose the bill. Michael Diolosa, CTO of AI-based consumer preference platform Qloo, said the bill would be a step in the right direction.

“It’s a good initial start with thinking about protecting consumers, protecting humanity from these very large AI models,” he said. “It is one of those situations where the rules that they suggest are at least sane enough without being overly burdensome on the majority of AI models.”
