SB 53 in California: A New AI Transparency Law Aimed at Tech Firms
California State Senator Scott Wiener recently introduced amendments to his latest bill, SB 53. If passed, the bill would require the world's largest AI companies to publish their safety and security protocols and to issue reports whenever safety incidents occur.
The law would make California the first state to impose meaningful transparency requirements on leading AI developers, likely including companies such as Google, OpenAI, Anthropic, and xAI.
Wiener previously attempted to impose similar requirements on AI developers through his earlier bill, SB 1047. That effort met fierce resistance from Silicon Valley and was ultimately vetoed by Governor Gavin Newsom. After the veto, Governor Newsom convened a group of leading AI experts, including Stanford researcher and World Labs co-founder Fei-Fei Li, to help shape the state's approach to AI safety.
California's AI policy group recently published its final recommendations, which included an industry-wide obligation to publish information about AI systems. Wiener's office says the amendments to SB 53 draw heavily on those recommendations.
Wiener said he is committed to continuing to refine the bill in conversation with a broad range of stakeholders, with the goal of making it the most scientific and fair law it can be.
SB 53 aims to strike a balance that its predecessor, SB 1047, purportedly failed to achieve: imposing meaningful transparency requirements on the largest AI developers without hampering the rapid growth of California's AI industry.
Nathan Calvin of the nonprofit AI safety group Encode said in an interview that requiring companies to explain to the public and the government what risk-mitigation measures they are taking is a reasonable first step.
The bill also includes protections for whistleblowers at AI labs who believe their company's technology poses a critical risk to society. It further proposes the creation of CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Wiener's latest proposal does not make AI model developers liable for the harms their models may cause. It is also designed not to burden startups and researchers that fine-tune existing AI models.
SB 53 now heads to the California State Assembly's Committee on Privacy and Consumer Protection for approval. If it passes there, it must still clear several other legislative bodies before reaching the governor's desk.
Meanwhile, on the other side of the country, New York Governor Kathy Hochul is considering a similar AI safety bill, the RAISE Act. Until recently, the fate of such state proposals was uncertain: a provision that would have imposed a 10-year freeze on state-level AI regulation failed in the Senate by a lopsided 99-1 vote.
Former Y Combinator president Geoff Ralston argued that sensible AI safety requirements should be uncontroversial rather than contested, and praised SB 53 as a good example of state leadership.
Historically, efforts to get AI companies on board with state-regulated transparency have not been entirely successful. While some companies have shown support, others have been resistant.
SB 53 is a more palatable version of earlier AI safety bills, but it could still force companies to disclose more than they currently do. Whether Senator Wiener can push those boundaries once again remains to be seen.