Governor Gavin Newsom Vetoes California AI Bill SB 1047

Written by Matt Milano

Governor Gavin Newsom has vetoed California AI bill SB 1047, a bill that divided the AI community and drew both praise and criticism.

SB 1047 was designed to address some of the biggest issues in AI development, establishing measures to ensure AI models are developed safely. For example, outside of specific circumstances, such as research and evaluation, developers would have been prohibited from deploying or selling AI models that pose an unreasonable risk of causing harm. Similarly, the bill would have required developers to retain a third party to perform an annual audit confirming that their AI models are being developed safely.

A number of AI firms opposed the bill, although Anthropic helped shape its final form by suggesting several amendments that AI firms wanted. Despite that progress, however, Governor Newsom decided to veto the bill, saying it was “well-intentioned” but ultimately fell short.

    “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” said Newsom. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

    A Good Start, but SB 1047 Fell Short

Ultimately, in his full veto message, Newsom raised concerns about the bill’s focus on only the largest AI models and the false sense of security that focus could create.

SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system’s actual risks regardless of these factors. This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.

    By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

Newsom also emphasized that he agreed with the bill in spirit but felt more needed to be done to ensure any such legislation accomplishes its goal.

Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

To those who say there’s no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree. A California-only approach may well be warranted – especially absent federal action by Congress – but it must be based on empirical evidence and science. The U.S. AI Safety Institute, under the National Institute of Science and Technology, is developing guidance on national security risks, informed by evidence-based approaches, to guard against demonstrable risks to public safety. Under an Executive Order I issued in September 2023, agencies within my Administration are performing risk analyses of the potential threats and vulnerabilities to California’s critical infrastructure using AI. These are just a few examples of the many endeavors underway, led by experts, to inform policymakers on AI risk management practices that are rooted in science and fact. And endeavors like these have led to the introduction of over a dozen bills regulating specific, known risks posed by AI, that I have signed in the last 30 days.

    Just One Bill of Many

Newsom’s office also took the opportunity to highlight the many other AI-related bills the governor has signed in the last 30 days. The list includes:

    • AB 1008 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Clarifies that personal information under the California Consumer Privacy Act (CCPA) can exist in various formats, including information stored by AI systems. (previously signed)
    • AB 1831 by Assemblymember Marc Berman (D-Menlo Park) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.
    • AB 1836 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Prohibits a person from producing, distributing, or making available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent, except as provided. (previously signed)
    • AB 2013 by Assemblymember Jacqui Irwin (D-Thousand Oaks) – Requires AI developers to post information on the data used to train the AI system or service on their websites. (previously signed)
    • AB 2355 by Assemblymember Wendy Carrillo (D-Los Angeles) – Requires committees that create, publish, or distribute a political advertisement that contains any image, audio, or video that is generated or substantially altered using AI to include a disclosure in the advertisement disclosing that the content has been so altered. (previously signed)
    • AB 2602 by Assemblymember Ash Kalra (D-San Jose) – Provides that an agreement for the performance of personal or professional services which contains a provision allowing for the use of a digital replica of an individual’s voice or likeness is unenforceable if it does not include a reasonably specific description of the intended uses of the replica and the individual is not represented by legal counsel or by a labor union, as specified. (previously signed)
    • AB 2655 by Assemblymember Marc Berman (D-Menlo Park) – Requires large online platforms with at least one million California users to remove materially deceptive and digitally modified or created content related to elections, or to label that content, during specified periods before and after an election, if the content is reported to the platform. Provides for injunctive relief. (previously signed)
    • AB 2839 by Assemblymember Gail Pellerin (D-Santa Cruz) – Expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content from 60 days to 120 days, amongst other things. (previously signed)
• AB 2876 by Assemblymember Marc Berman (D-Menlo Park) – Requires the Instructional Quality Commission (IQC) to consider including AI literacy in the mathematics, science, and history-social science curriculum frameworks and instructional materials.
    • AB 2885 by Assemblymember Rebecca Bauer-Kahan (D-Orinda) – Establishes a uniform definition for AI, or artificial intelligence, in California law. (previously signed)
    • AB 3030 by Assemblymember Lisa Calderon (D-Whittier) – Requires specified health care providers to disclose the use of GenAI when it is used to generate communications to a patient pertaining to patient clinical information. (previously signed)
    • SB 896 by Senator Bill Dodd (D-Napa) – Requires CDT to update report for the Governor as called for in Executive Order N-12-23, related to the procurement and use of GenAI by the state; requires OES to perform a risk analysis of potential threats posed by the use of GenAI to California’s critical infrastructure (w/high-level summary to Legislature); and requires that the use of GenAI for state communications be disclosed.
    • SB 926 by Senator Aisha Wahab (D-Silicon Valley) – Creates a new crime for a person to intentionally create and distribute any sexually explicit image of another identifiable person that was created in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted, under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress. (previously signed)
    • SB 942 by Senator Josh Becker (D-Menlo Park) – Requires the developers of covered GenAI systems to both include provenance disclosures in the original content their systems produce and make tools available to identify GenAI content produced by their systems. (previously signed)
    • SB 981 by Senator Aisha Wahab (D-Silicon Valley) – Requires social media platforms to establish a mechanism for reporting and removing “sexually explicit digital identity theft.” (previously signed)
• SB 1120 by Senator Josh Becker (D-Menlo Park) – Establishes requirements on health plans and insurers applicable to their use of AI for utilization review and utilization management decisions, including that the use of AI, algorithms, or other software must be based upon a patient’s medical or other clinical history and individual clinical circumstances as presented by the requesting provider and not supplant health care provider decision making. (previously signed)
    • SB 1288 by Senator Josh Becker (D-Menlo Park) – Requires the Superintendent of Public Instruction (SPI) to convene a working group for the purpose of exploring how artificial intelligence (AI) and other forms of similarly advanced technology are currently being used in education. (previously signed)
    • SB 1381 by Senator Aisha Wahab (D-Silicon Valley) – Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI.

    Mixed Response to the Veto

Predictably, the response to Newsom’s veto was mixed. Nancy Pelosi was quick to thank Newsom for vetoing the bill, as she had been a vocal critic of it and of the impact it would have on California’s place as the cradle of AI.

    https://twitter.com/SpeakerPelosi/status/1840498822549528793

    At the same time, the bill’s author, State Senator Scott Wiener, said the veto was “a setback.”

    “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet. The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policy makers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.

“This veto is a missed opportunity for California to once again lead on innovative tech regulation—just as we did around data privacy and net neutrality—and we are all less safe as a result.

    “At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage. Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight. Leaders from across civil society, from Hollywood to women’s groups to youth activists, found their voice to advocate for commonsense, proactive technology safeguards to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community contemplates the best ways to protect the public from the risks presented by AI.

    California will continue to lead in that conversation—we are not going anywhere.”

Ultimately, as both sides of the debate agree, the conversation about regulating AI is just beginning. SB 1047 may have been killed, but it will not be the last attempt to regulate the AI industry.
