California AI Policy Report Outlines Proposed Comprehensive Regulatory Framework
Wednesday, June 18, 2025

On June 17, 2025, the Joint California Policy Working Group on AI Frontier Models released a final version of its report, “The California Report on Frontier AI Policy,” outlining a policymaking framework for frontier artificial intelligence (AI). Commissioned by Governor Gavin Newsom and authored by leading AI researchers and academics, the report advocates a “trust but verify” approach.

Its recommendations emphasize evidence-based policymaking, transparency, adverse event reporting and adaptive regulatory thresholds. Given California's role as a global AI innovation hub and its history of setting regulatory precedents, these recommendations are highly likely to influence its overall AI governance strategy.

Key Proposed Recommendations

The California report provides recommendations that are likely to inform future legislative or regulatory action (although no legal obligations currently arise from its findings):

Enhanced Transparency Requirements: The report proposes public disclosure of AI training data acquisition methods, safety practices, pre-deployment testing results, and downstream impacts. This would mark a fundamental shift from current practice, in which companies maintain proprietary control over their development processes. If implemented, organizations could see competitive advantages tied to data acquisition methods diminish while facing increased compliance costs for documentation and reporting. The concern extends beyond the method of acquisition itself to whether certain methods (such as exclusive licensing) create anti-competitive advantages.

Adverse Event Reporting System: The report recommends mandatory reporting of AI-related incidents by developers, voluntary reporting mechanisms for users, and a government-administered system similar to existing frameworks in aviation and healthcare. The report highlights that this system “does not necessarily require AI-specific regulatory authority or tools.”

Third-Party Risk Assessment Framework: The report states that companies “disincentivize safety research by implicitly threatening to ban independent researchers” and calls for a “safe harbor for independent AI evaluation.” This approach could limit companies' ability to block external security research, while requiring formal vulnerability disclosure programs and potentially exposing system weaknesses through independent testing.

Proportionate Regulatory Thresholds: Moving beyond simple computation-based thresholds, the report proposes a multi-factor approach considering model capabilities (e.g., performance on benchmarks), downstream impact (e.g., number of commercial users), and risk levels, with adaptive thresholds that can be updated as technology evolves.

Regulatory Philosophy and Implementation

The report draws from past technology governance experiences, particularly emphasizing the importance of early policy intervention. The authors analyze cases from internet development, consumer products regulation, and energy policy to support their regulatory approach.

While the report doesn't specify implementation timelines, California's regulatory history suggests potential legislative action in the 2025–2026 session through a phased approach: initial transparency and reporting requirements, followed by third-party evaluation frameworks and, ultimately, comprehensive risk-based regulation.

Potential Concerns

The California Report on Frontier AI Policy's acknowledgment of an “evidence dilemma” (the challenge of governing systems without a large body of scientific evidence) captures inherent limitations in regulating a technology still characterized by significant opacity.

For example, the report notes that “[m]any AI companies in the United States have noted the need for transparency for this world-changing technology. Many have published safety frameworks articulating thresholds that, if passed, will trigger concrete, safety-focused actions.” But it also observes that much of this transparency is performative and limited by “systemic opacity in key areas.”

And while the report proposes governance frameworks based on “trust but verify,” it also documents AI systems that have exhibited “strategic deception” and “alignment scheming,” including attempts to deactivate oversight mechanisms. This raises profound questions about the feasibility of verifying the true safety and control of these rapidly evolving systems, even with proposed transparency and third-party evaluation mechanisms. 

Looking Ahead

The California Report on Frontier AI Policy represents the most sophisticated attempt at evidence-based AI governance to date. While these recommendations are not yet law, California's influence on technology regulation suggests these principles are likely to be implemented in some form.

Organizations should monitor legislative developments, consider engaging in public comment, proactively implement recommended practices, and develop internal capabilities for ongoing AI governance.

The intersection of comprehensive state-level regulation and rapidly evolving AI capabilities requires flexible compliance frameworks that can adapt to changing requirements while maintaining operational effectiveness.
