California’s new rules on Automated Decision-Making (ADM) are big news for businesses subject to the California Consumer Privacy Act (CCPA). These rules have serious implications for developers, organizations, and users leveraging AI for high-impact decisions, and they invite comparisons with Europe’s GDPR. Here’s a breakdown of how California’s approach differs and what it means for your compliance strategy.
No Blanket Ban on ADM
Under the GDPR, ADM that results in legal or similarly significant effects is generally prohibited unless it meets specific conditions:
- It is necessary for entering into or performing a contract,
- It is authorized by law, or
- The individual has given explicit consent.
California, on the other hand, does not ban ADM outright. Instead, it permits such systems on the condition that organizations meet a set of compliance obligations, including conducting risk assessments and issuing clear transparency notices (discussed below).
Defining “Significant Decisions”
California’s rules provide a more concrete definition of “Significant Decisions.” These include decisions affecting:
- Financial or lending services,
- Housing,
- Education or employment opportunities,
- Access to healthcare services.
In contrast, the GDPR uses broader language, referring to decisions that produce “legal effects” or similarly significant impacts. Regulatory bodies like the European Data Protection Board (EDPB) have offered guidance to help clarify what falls under this scope.
Real Human Involvement Is Required
Under both the GDPR and California’s rules, sufficient human involvement can take a decision outside the scope of ADM. California, however, goes further by defining what qualifies as meaningful human involvement. Specifically, the human reviewer must:
- Understand how to interpret and use the AI’s output,
- Review the AI’s results alongside other relevant information, and
- Have the authority to change or override the AI-generated decision.
The Right to Human Intervention Isn’t Absolute
While the GDPR grants individuals the right to human intervention and the ability to appeal decisions made solely by automated systems, California’s framework takes a different approach. If a business is not required to offer an opt-out under the California rules, it only needs to provide a method for individuals to appeal an ADM decision after it has been made.
Risk Assessments Are Mandatory
Under EU guidance, ADM often triggers the need for a Data Protection Impact Assessment (DPIA), although this requirement is not always explicitly mandated by the GDPR itself. In California, however, risk assessments are compulsory. Organizations must conduct assessments for:
- Any ADM use that leads to significant decisions, and
- Any use of personal information to train AI models that will eventually make such decisions.
What’s Next?
For businesses subject to the CCPA and the new ADM regulations, now is the time to start adapting. Compliance will likely require updates to existing privacy and data governance processes. Specifically, companies should plan to:
- Conduct ADM-specific risk assessments,
- Update notice and disclosure frameworks,
- Provide at least two opt-out submission methods,
- Implement formal appeal and information request processes.
If finalized, California’s ADM rules are set to take effect on January 1, 2027, giving organizations some lead time—but not much.
Need help preparing?
Our team is ready to assist with risk assessments, governance strategies, and process design tailored to the new rules.