Understanding the Impact of California’s SB 1047 AI Bill on All Stakeholders

Nasuni’s Jim Liddle offers insights on the impact of California’s SB 1047 AI bill on all stakeholders. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.

The state of California is in the final stages of approving a significant piece of AI legislation. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – SB 1047 – was passed by the California State Assembly in late August. Senator Scott Wiener introduced the bill in February, and it passed the state Senate in May. Now, SB 1047 goes to the desk of Governor Gavin Newsom, who has until September 30 to make a decision.

SB 1047 has sparked intense debate over what role government should play in AI innovation. AI has exploded into the mainstream over the last several years, and governments worldwide are still trying to discern how to regulate the technology properly. SB 1047 represents a major attempt to curb AI risks at the subnational level in one of the world’s most active regions for AI innovation. The legislation has drawn opinions from all sides, including big tech, academia, and open-source developers.

Currently, SB 1047 aims to implement safeguards for large “frontier” models – those at the cutting edge of research and deployment that will determine the future of AI in our world. The bill initially defined “covered models” as those exceeding a certain development and fine-tuning cost, but this has been revised to include any model exceeding a specific computing power threshold – more than 10^26 operations in training, under the current text – overseen by the Government Operations Agency. These thresholds are subject to annual updates based on technological advancements and the scientific literature.

For covered models, SB 1047 puts forth a number of pre-training and pre-deployment requirements intended to prevent critical harm, such as mass-casualty events and serious cyberattacks. The bill defines “critical harm” broadly to include any grave harm to public safety and security comparable in severity to those examples, highlighting its intent to address a wide range of potential AI risks.

The bill also creates the Board of Frontier Models (BFM) within the Government Operations Agency, composed of nine members representing various stakeholders, including the open-source community, academia, and industry. The BFM will play a key role in updating the definition of “covered models” and approving regulations and guidance. Additionally, the bill establishes a consortium to develop a framework for a public cloud computing cluster called “CalCompute,” designed to promote safe, ethical, and sustainable AI development. In this focus on inputs, California’s bill differs from most other AI legislation, which has concentrated on the consequences of AI outputs, such as biased results and lack of transparency.

SB 1047 would require frontier model developers to have the ability to “promptly enact a full shutdown” covering all model operations and training. Starting in 2026, model developers would also have to employ third-party auditing services and make documented security and safety protocols available to California’s Attorney General (AG). Additionally, organizations that manage computing clusters would have to begin gathering identifying information about any customer using enough computing capacity to train a regulated model.

The debate surrounding SB 1047 extends beyond California’s borders, with diverse perspectives emerging from key players in the AI field. Big tech companies have expressed mixed reactions to the bill. Google, for instance, has raised concerns about its potential to stifle innovation by focusing on model training rather than downstream results. OpenAI, meanwhile, has argued that regulation of frontier AI should happen at the federal level rather than state by state.

On the other hand, AI startup Anthropic and renowned researchers such as Yoshua Bengio and Geoffrey Hinton have voiced support for SB 1047, recognizing its potential to address the critical safety concerns posed by emerging AI technologies. These diverse viewpoints underscore the complexity of regulating a rapidly evolving technology like AI, and the need for careful consideration of all potential impacts.

How SB 1047 Could Impact Smaller Players and the Open-Source Community

If approved, SB 1047 could disproportionately hurt smaller companies and the open-source community. Even though big tech firms have expressed concerns about the bill, those organizations are better positioned to handle its compliance and security requirements. They can afford to build the required full-shutdown capability and establish robust processes to mitigate potential risks. Furthermore, they could more easily absorb civil penalties and fines from the office of the AG resulting from critical harm, whether malicious or unintentional.

Plus, even though SB 1047 attempts to target only larger frontier models, future generations of AI models will require more time, money, and computing resources. Companies that fall below SB 1047’s thresholds today could exceed them within a few short years. What’s more, many open-source developers and academic researchers will hesitate to build on top of models that could face a government-mandated shutdown. That fear could slow progress and keep important talent away from AI development.

In response to feedback, the bill’s authors have expanded the proposed BFM from five members to nine, with one seat dedicated to the open-source community. The BFM would be in charge of maintaining appropriate thresholds for which models are covered under the legislation. While a step in the right direction, there is currently no dedicated seat for AI startups or SMBs.

To keep a well-rounded perspective as AI technology continues to evolve, a better approach would accommodate different scales of AI development. This could mean crafting requirements specifically for builders of smaller frontier models – requirements that are less burdensome from a compliance standpoint while preserving the relevant safeguards.

SB 1047 is a well-intentioned bill that is trying to get ahead of dangerous AI. However, it’s important to remember that we are still in the early stages of mainstream AI and have a lot to learn when it comes to regulating the technology appropriately for many different stakeholders. A more nuanced approach that better accounts for the open-source community and smaller AI companies would benefit the entire industry.
