A practical framework for determining scope, identifying "significant decisions," and auditing risk under California’s new ADMT rules
What's inside
A simple way to tell whether AI tools, from resume parsing to auto-scheduling, fall under ADMT rules.
Examples showing what’s in scope vs. out of scope, including ranking, filtering, and candidate suppression.
Where liability shows up across common staffing models, including VMS algorithms, MSP workflows, and RPO ownership gaps.
A practical path from assumptions to evidence, covering documentation, vendor accountability, and independent verification.
The introduction of the California Privacy Protection Agency’s (CPPA) rules on Automated Decision-making Technology (ADMT) creates new obligations for staffing firms. However, applying broad legal definitions to high-volume recruitment workflows can be challenging.
This guide breaks down the ADMT rules as written and maps them to staffing workflows, where sourcing, screening, and matching happen at scale.
It explains the difference between AI that helps recruiters make decisions and AI that actually changes who gets seen, shortlisted, or hired.
Industry perspectives
Transparency is the new currency. Soon, the ability to show a client exactly how your AI works, and to prove it’s fair, won't simply be a compliance requirement. It will be one of the biggest differentiators in winning enterprise business.
For the last decade, we asked, 'Can we automate this?' The question for 2026 is, 'Can we explain this?' The industry is moving from an era of 'move fast and break things' to an era of 'move fast and show your work.' The firms that survive will be the ones that can do both.
Warden's Platform
Warden provides the independent assurance needed to prove fairness and satisfy regulators.
Whether your AI systems are built in-house or provided by vendors, third-party bias and compliance testing gives clients and regulators proof that they have been independently assessed.
Ongoing re-testing and oversight help you catch issues early and show that your AI is actively governed, not reviewed once and forgotten.
Timestamped, versioned records of AI performance over time help you demonstrate reasonable oversight and respond confidently to legal or client scrutiny.