Beyond SB 1047: The Path to Effective AI Legislation

CalypsoAI’s James White offers insights on looking beyond SB 1047 and the path to effective AI legislation. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.
As someone deeply entrenched in the development and deployment of AI technologies, I’ve experienced the delicate balance between innovation and responsibility. Regulating AI requires nuance and precision, qualities that were missing from California’s Senate Bill 1047 (SB 1047). Governor Gavin Newsom’s recent veto of the bill was a pivotal moment for AI regulation, and I believe it was the right decision.
While the bill aimed to address the real dangers posed by advanced AI systems, its “one-size-fits-all” approach would have stifled innovation without adequately addressing the specific risks involved.
Global and U.S. Approaches: Lessons for California
Looking at global efforts, regulation is already taking shape in various forms. The EU AI Act, for example, is an early attempt to categorize AI applications based on risk. While the EU has opted for a centralized approach, in the U.S., regulation has been more fragmented, with states like Utah and California leading their own initiatives. Utah, for instance, passed a simple law requiring chatbots to disclose when they are AI, with fines for non-compliance. This straightforward, focused approach has merit.
At the federal level, there’s finally some movement toward more thoughtful regulation. A recent House resolution aims to integrate AI vulnerabilities into the National Institute of Standards and Technology’s (NIST) security tracking system. This is a step in the right direction: starting small by tracking AI risks gives lawmakers the data they need to craft better-informed regulations.
California, however, aimed higher with SB 1047, which would have imposed sweeping regulations on “covered models”—AI systems meeting certain thresholds for computational power and financial cost. The bill treated models broadly without adequately distinguishing between the different stages of AI development. The Governor’s veto reflects a recognition that while proactive safeguards are necessary, overly broad regulations could unnecessarily burden developers, particularly in smaller companies, without really improving safety.
What SB 1047 Needs to Succeed
Where there’s a will, there’s a way; California’s lawmakers just need to find it. So what do they need to consider?
The original bill was the wrong combination: arbitrary where it should have been precise, and targeted at the wrong measures. Criteria such as computing power and dollar cost stand a high chance of being the wrong fit, even with built-in checkpoints for reevaluation. Given recent research into cheaper training methodologies and the ever-increasing scale of models, it becomes obvious that different criteria are needed, as the sketch below illustrates.
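To make that brittleness concrete, here is a minimal Python sketch. The 10^26-operation and $100 million cutoffs are the thresholds SB 1047 used to define a “covered model”; everything else, including the is_covered_model helper and the cost-per-operation figures, is a hypothetical assumption for illustration, not anything from the bill’s text.

```python
# Illustrative only: why static "covered model" thresholds drift over time.
# The 1e26-operation and $100M cutoffs are the thresholds SB 1047 used;
# the cost-per-operation figures below are assumptions for illustration.

FLOP_THRESHOLD = 1e26              # training-compute cutoff in the bill
COST_THRESHOLD_USD = 100_000_000   # training-cost cutoff in the bill

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """SB 1047-style test: a model is 'covered' only above both cutoffs."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD_USD

# Assumption: training cost per operation halves roughly every two years as
# hardware and methods improve. The same 1e26-operation run then slides
# under the dollar cutoff while its capabilities stay identical.
cost_per_flop_2024 = 2.5e-18  # assumed starting price, illustration only
for year in (2024, 2026, 2028, 2030):
    cost_per_flop = cost_per_flop_2024 / 2 ** ((year - 2024) / 2)
    cost = 1e26 * cost_per_flop
    print(year, f"${cost:,.0f}", "covered" if is_covered_model(1e26, cost) else "not covered")
```

Under these assumptions, an identical training run stops being “covered” within a few years, not because it became safer, but because it became cheaper.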
To get this right, there is a trichotomy of stakeholders we need to consider: Trainers, Builders, and Consumers. Each of these stakeholders has different but related responsibilities in creating the best environment to accelerate and realize AI’s possibilities in a responsible and thoughtful way.
Trainers
Trainers train the models; it is that simple. They are a small number of organizations with a lot to win or lose, and they provide the foundation models that everyone else will use. A few of them are located in California, which means this legislation will be influenced and written in their backyard. The next bill drafted must ensure Trainers are responsible and accountable, as the vast majority of AI use cases will rely on their models.
Builders
Builders modify models or integrate them into applications, and they represent a large group of AI users. Their part in this ecosystem is taking the amazing power of foundation models and applying it to specific purposes: the last mile from potential to useful. As the direct touchpoint with the consumer, they carry a lot of responsibility. They should therefore be held to the existing standards and laws of their given vertical and jurisdiction, as well as to new AI-specific laws.
Consumers
Consumers are the end-users of AI technologies. They are the biggest group and the most at risk. While legislation is ultimately designed to protect consumers, they too have responsibilities, such as understanding the limitations of AI and accepting that opting out of AI-driven services comes with reasonable trade-offs. On the flip side, consumers cannot expect the same level of protection when using AI maliciously or outside its intended parameters.
The details of the criteria for each group are something that should be debated to achieve the optimal conditions for AI to be adopted successfully and safely. And I must caution that ignoring any of these three groups dramatically reduces the chances of this happening.
Capitalize on the Opportunity
Ultimately, the veto of SB 1047 was a crucial step in the right direction. Now, California has the opportunity to lead the way in creating smart, adaptable AI regulations that account for the right criteria and stakeholders. This is our chance to build a regulatory framework that fosters both safety and innovation—one that ensures AI can be used responsibly to solve the world’s most pressing challenges.