AI

California’s AI safety bill has divided the tech industry

The bill, which would add more guardrails to certain models, is headed for a final vote in August.

Emily Parsons

5 min read

A showdown is brewing over a California bill that could have big implications for how regulators treat generative AI models.

The Golden State’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would add guardrails, including mandatory third-party audits, for AI models above certain cost and computing power thresholds. Amid vocal opposition from segments of the tech industry, the bill has cruised through committee hearings and state Senate approval, with a final vote due by the end of August.

Beyond drawing lines for AI in the country’s most populous state—which also happens to be home to Silicon Valley—supporters hope and opponents fear that the law could set a precedent for other states and countries to follow, much like the EU’s recent passage of its AI Act or California’s landmark data privacy law, which passed in 2018. It also comes as efforts to legislate AI rules on Capitol Hill have stalled.

Who’s in what corner: Among the bill’s backers are so-called godfathers of AI, Geoffrey Hinton—who has been outspoken about the risk of AI since quitting Google last year—and Yoshua Bengio. AI safety groups including Encode Justice and the Center for AI Safety Action Fund are co-sponsoring the bill.

“It’s critical that we have legislation with real teeth to address the risks,” Hinton said in a press release from bill author Senator Scott Wiener’s office. “California is a natural place for that to start, as it is the place this technology has taken off.”

On the other side, the bill’s biggest detractors include the startup incubator Y Combinator (YC) and venture giant Andreessen Horowitz (a16z). The former, along with dozens of startup founders, sent an open letter to Wiener last month with a laundry list of concerns, as first reported by Politico.

Those opposed to the bill argue, among other things, that the mandate that models include an emergency shutdown capability would harm open-source developers, who don’t necessarily have control over how their code is modified. They also claim that one of the computing power thresholds that determines whether a model is subject to the law is arbitrary and that the overall language is too vague.

Anjney Midha, a general partner at a16z, claimed in a Q&A posted by the firm that the provisions amounted to “blatant regulatory capture,” contending that the provisions favor big companies that can more easily shoulder compliance costs.

“Large tech companies that have armies of lawyers and lobbyists will be able to shape the definitions to their advantage, while smaller companies, open-source researchers, and academics will be completely left out in the cold,” Midha wrote.

In an interview with Tech Brew, Wiener called these charges “very, very off-base.”

“Both Meta and Google are opposing the bill. If this were some sort of regulatory capture, you wouldn’t expect that,” Wiener said. “The bill only applies to huge models…startups aren’t even going to have to comply with it. So this does not in any way favor the big players.”


In separate letters to Wiener’s office, policy execs from Meta and Google raised concerns about the bill’s potential to shift liability to the developers behind models rather than those using (or misusing) the models themselves. The letters also spelled out lists of more granular issues.

“The bill imposes liability on model developers for downstream harms regardless of whether those harms relate to how the model was used rather than how it was built, effectively making them liable for scenarios they are not best positioned to prevent,” Rob Sherman, Meta’s VP of policy and deputy chief privacy officer, wrote in that company’s letter.

YC declined to comment, and a16z pointed us toward what the firm has already posted about the bill.

FLOP era: In contrast with laws like the EU’s AI Act, which determines regulatory strictness based on the riskiness of the AI’s use, California’s bill would apply to models that cost more than $100 million to train and use more than 10^26 floating-point operations (FLOPs) of computing power during training, the same threshold used in President Joe Biden’s executive order.
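For a sense of scale, here’s a rough, back-of-envelope sketch (not from the bill text) of what the 10^26 figure means. It uses the common ~6 × parameters × tokens rule of thumb for estimating dense transformer training compute; the model sizes and token counts are illustrative assumptions, not figures from the bill or any real training run.

```python
# Back-of-envelope check against the 10^26 FLOP training-compute threshold.
# Uses the common ~6 * parameters * tokens approximation for dense transformer
# training compute; the model sizes and token counts below are hypothetical.

THRESHOLD_FLOPS = 1e26  # threshold cited in the bill and Biden's executive order

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * n_params * n_tokens

runs = {
    "hypothetical 70B-parameter model, 15T tokens": training_flops(70e9, 15e12),
    "hypothetical 2T-parameter model, 20T tokens": training_flops(2e12, 20e12),
}

for name, flops in runs.items():
    side = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

Under that rule of thumb, only very large training runs clear the threshold, which is the basis for Wiener’s argument that smaller developers would not be covered.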

Wiener responded to YC and a16z in a letter of his own that aimed to rebut their concerns. In it, he wrote that he supports the open-source ecosystem and that the shutdown mandate doesn’t apply to models outside a “developer’s control.” He also noted that the regulatory body outlined in the bill, the Frontier Model Division, would have the authority to revisit the 10^26 FLOP figure after 2027.

“We’ve worked very hard to craft the bill’s language as tightly and clearly as possible,” Wiener wrote in the July 1 letter. “As with any legislation, language can always be improved. It’s for that exact reason that, for months now, I’ve sought constructive feedback from YC and a16z.”

What will come of this debate remains to be seen. Despite the industry pushback, the bill easily passed a state Senate vote in May, with 32 senators in favor and one opposed. The bill next goes to the Assembly, where it must pass by August 31, after which it needs only a signature from Governor Gavin Newsom.

“I do not know where the governor stands on this,” Wiener said. “As we do with any big bill, we reached out and made sure that we’re briefing the administration on what the bill is and where we’re heading on it, and we, of course, will always welcome feedback from the administration.”

Newsom’s office did not return requests for comment.

