Anthropic Endorses Major California AI Regulation Bill
Most, if not all, U.S. states have enacted or proposed legislation on AI.
Anthropic endorsed a proposed California bill, SB 53, which would regulate powerful frontier AI systems, the company stated Tuesday in a post on its website.
“Anthropic is endorsing SB 53, the California bill that governs powerful AI systems built by frontier AI developers like Anthropic. We’ve long advocated for thoughtful AI regulation,” the post read.
SB 53, if passed, would enact the Transparency in Frontier Artificial Intelligence Act (TFAIA), regulating large frontier AI models, according to the language of the bill.
For instance, the legislation would require frontier model developers to publicly disclose safety and security details about their models, how those models are assessed for risks and how they meet industry standards, among other requirements.
The bill would also establish the Department of Technology within the Government Operations Agency and would require the department to create a comprehensive inventory of all high-risk automated decision systems.
In addition, SB 53 would require development of a framework for a public cloud computing cluster dedicated to the cause of safe, equitable and sustainable AI.
The bill, which currently is in committee, also offers protections for whistleblowers “working with foundation models.”
“SB 53’s transparency requirements will have an important impact on frontier AI safety. Without it, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete. But with SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional,” Anthropic also said in its post.
California lawmakers had proposed an earlier version of SB 53—SB 1047—which would have established the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, providing guidelines for developers training AI models.
“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in its post.