For decades, hiring was slow. Recruiters sorted paper resumes by hand, scheduled phone screens manually, and spent weeks moving candidates through multi-round interviews before a single offer went out. Then came software that promised to speed things up: applicant tracking systems, AI resume screeners, video interview analyzers, and predictive scoring engines that could process thousands of candidates in the time it once took to review a hundred.
These tools are now common across staffing agencies, HR tech platforms, and enterprise talent teams. They also carry a name with specific legal meaning: Automated Employment Decision Tools, or AEDTs. And regulators are paying close attention to them.
What Is an Automated Employment Decision Tool?
An Automated Employment Decision Tool is any software, algorithm, or artificial intelligence system that substantially assists or replaces human judgment in employment-related decisions. The term was codified in New York City Local Law 144, which became enforceable in July 2023 and was one of the first laws in the United States to directly regulate this category of technology.
Under NYC Local Law 144, a tool qualifies as an AEDT when it is used to screen candidates or employees for promotion, and when it relies on machine learning, statistical modeling, data analytics, or AI to generate a prediction, recommendation, or score that is used to substantially assist or replace the judgment of a human decision-maker.
That definition is intentionally broad. It captures:
- Resume screening software that ranks candidates by predicted fit
- Chatbot assessments that score candidate responses in real time
- Video interview platforms that analyze speech patterns, facial expressions, or word choice
- Matching algorithms in staffing platforms that surface candidates to recruiters
- Predictive scoring engines that rate applicant likelihood of success
If a tool uses AI or statistical modeling to influence who gets moved forward in a hiring or promotion process, it almost certainly falls within scope.
How AEDTs Are Used in Hiring Today
Employers and staffing agencies rely on AEDTs at multiple stages of the employment process. The most common use cases cluster around three phases.
Resume and application screening. Large employers can receive tens of thousands of applications for a single position. AI screening tools parse resumes and rank applicants against a defined profile, narrowing a large pool to a manageable shortlist. Some systems use natural language processing to identify relevant skills and experience; others score against historical outcome data, such as which prior hires performed well in similar roles.
Candidate assessment. Pre-hire assessments have moved from paper questionnaires to AI-scored exercises. Video platforms analyze candidate interviews and assign scores to verbal content, pace, or vocabulary. Cognitive and personality assessments deliver numeric outputs that feed into applicant tracking systems automatically. The recruiter may see a final score rather than the underlying evaluation.
Matching and ranking in staffing platforms. Staffing agencies use AI matching tools to connect open roles with candidate profiles at speed. These systems score the fit between a job description and a candidate record, then surface ranked lists to recruiters. The algorithm determines which candidates are seen first, and in practice, which ones get considered at all.
The efficiency gains are real. But so are the risks. A scoring model trained on historical hiring data inherits the biases embedded in that history. If a company historically hired more men for engineering roles, a model trained on that data may systematically score male candidates higher, regardless of how the model was designed.
Why AEDTs Became a Regulatory Priority
The regulatory attention on AEDTs is not theoretical. Several high-profile legal cases have illustrated the discrimination risks these tools create when deployed without adequate auditing.
The Kistler v. Eightfold AI lawsuit tested whether AI hiring platforms qualify as consumer reporting agencies under the federal Fair Credit Reporting Act, raising fundamental questions about who bears legal responsibility when an algorithm influences an employment decision. The class action litigation against Workday alleged that the company's AI screening tools discriminated against Black, older, and disabled applicants, drawing on data showing systematic disparities in pass-through rates across protected groups.
These cases, along with the broader policy debate around algorithmic accountability, pushed regulators to act. NYC Local Law 144 was the first response. Colorado SB205, California's FEHA updates, and Illinois HB 3773 followed, each with its own scope and requirements; the Illinois amendment added employment AI explicitly to the state's Human Rights Act anti-discrimination framework.
The pattern across all of these frameworks is consistent: regulators want third-party verification that AI tools used in hiring are not producing discriminatory outcomes. Annual audits are the baseline. Continuous monitoring is where enforcement is heading.
What NYC Local Law 144 Requires for AEDTs
NYC Local Law 144 applies to employers and employment agencies that use AEDTs when making employment decisions affecting candidates or employees based in New York City. This includes employers with NYC offices who are hiring for remote positions.
The law has three main requirements:
Annual independent bias audit. Employers must commission a bias audit from an independent third party at least once per calendar year. The audit must measure impact ratios, comparing selection rates across sex and race/ethnicity categories and their intersections. If a tool selects members of a protected group at a rate below 80 percent of the rate of the highest-selected group (the 4/5ths rule), that is evidence of adverse impact and must be disclosed.
Public disclosure of audit results. Employers must publish the results of the most recent bias audit on their public website. This includes the date of the audit, the score ranges and selection rates for each category, and the impact ratios for each group tested.
Candidate notification. Employers must notify candidates at least 10 business days before using an AEDT in their evaluation. Candidates must be given the opportunity to request an alternative assessment method where reasonable.
Penalties for non-compliance run from $500 for a first violation to $1,500 per day for ongoing violations. Each day a non-compliant AEDT is in use is a separate violation.
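The notification deadline and penalty exposure described above can be sketched in a few lines of Python. The function names, and the simplified business-day rule that skips only weekends (a production calendar would subtract holidays too), are illustrative assumptions; the dollar figures come from the statute's stated range, and the penalty function computes an upper bound only.

```python
from datetime import date, timedelta

def notice_deadline(evaluation_date: date, business_days: int = 10) -> date:
    """Latest date to send candidate notice: 10 business days before the
    AEDT is used. Weekends are skipped; holidays are ignored for
    simplicity (a real calendar would subtract them as well)."""
    d = evaluation_date
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday only
            remaining -= 1
    return d

def max_penalty_exposure(days_noncompliant: int) -> int:
    """Upper-bound exposure: $500 for the first violation, up to $1,500
    for each subsequent day, since each day of non-compliant use counts
    as a separate violation."""
    if days_noncompliant <= 0:
        return 0
    return 500 + (days_noncompliant - 1) * 1500

# If the AEDT will first evaluate candidates on June 20, 2024, notice
# must go out by June 6, 2024; three days of non-compliant use could
# cost up to $3,500.
deadline = notice_deadline(date(2024, 6, 20))
exposure = max_penalty_exposure(3)
```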
How Colorado SB205 and Other State Laws Extend AEDT Obligations
NYC Local Law 144 established the baseline. Several states have now built on that foundation with broader requirements.
Colorado SB205, which took effect February 1, 2026, applies to "high-risk AI systems" used in employment decisions. It covers 12 protected characteristics, compared to NYC's two, and imposes obligations on both the AI developers (vendors) and the deployers (employers and staffing agencies). Both parties must conduct impact assessments, maintain documentation, and take steps to mitigate identified risks within 90 days of a material system change.
California's FEHA updates create the broadest coverage, extending to 18 protected characteristics and applying to selection tools at any stage of the employment process, not just final decisions. California also establishes joint liability between vendors and employers, a significant shift that means HR tech companies cannot disclaim responsibility for how their tools are used.
The EU AI Act, which classifies HR AI systems as high-risk under Annex III, adds pre-market conformity assessments and post-market monitoring obligations for any tool deployed by companies operating in EU member states. High-risk obligations apply from August 2, 2026.
For HR tech vendors selling to employers across multiple jurisdictions, complying with each framework separately is operationally difficult. The practical response is to audit against the strictest standard any framework imposes: cover 14 or more protected classes, monitor continuously rather than annually, and maintain legal-grade audit documentation at all times.
The Difference Between Annual Audits and Continuous Monitoring
Most AEDT regulations set an annual audit as the minimum. But annual snapshots miss what happens in between. AI models drift over time as the input data changes. A recruiting tool that passed a bias audit in January may produce measurably different outcomes by August if the candidate pool has shifted or if the underlying model was retrained.
Continuous monitoring addresses this by running audit checks against live data on a recurring basis, typically monthly. If a tool's impact ratios degrade, a monitoring system surfaces that signal before it becomes a compliance violation or, worse, evidence in a discrimination lawsuit.
Colorado SB205 specifically requires employers to reassess within 90 days of any material change to a high-risk AI system. That requirement is only meaningful if infrastructure is in place to detect when changes have occurred and to re-run the relevant assessments. Annual audit cycles, by definition, cannot satisfy that obligation reliably.
The regulatory direction across all major frameworks is toward more frequent, more granular oversight. Employers and vendors who build compliance programs around annual audits are likely to find themselves behind the curve as enforcement intensifies over the next two years.
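A recurring monitoring check of this kind can be sketched in a few lines of Python. The function name, the 0.05 drift tolerance, and the alert format are hypothetical choices for illustration; only the 0.8 floor comes from the 4/5ths rule discussed above.

```python
def check_drift(baseline_ratios, current_ratios,
                floor=0.8, drift_tolerance=0.05):
    """Flag any group whose current impact ratio has fallen below the
    4/5ths floor, or has moved more than `drift_tolerance` away from
    the last formal audit's baseline. Returns a list of alert tuples:
    (group, reason, current_ratio)."""
    alerts = []
    for group, current in current_ratios.items():
        if current < floor:
            alerts.append((group, "below 4/5ths floor", current))
        baseline = baseline_ratios.get(group)
        if baseline is not None and abs(current - baseline) > drift_tolerance:
            alerts.append((group, "drifted since last audit", current))
    return alerts

# January audit baseline vs. an August check against live data:
baseline = {"A": 1.00, "B": 0.92}
current = {"A": 1.00, "B": 0.78}
alerts = check_drift(baseline, current)
# Group B is flagged twice: it is below the floor, and it has drifted
# by 0.14 since the last audit.
```

Run monthly, a check like this surfaces degradation months before the next annual audit would.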
What a Third-Party AEDT Bias Audit Actually Measures
A bias audit for AEDTs is a structured statistical analysis of how a tool performs across protected demographic groups. The methodology required under NYC Local Law 144, and adopted as a baseline by most other frameworks, uses impact ratio analysis.
For tools with pass/fail outcomes, the audit measures the selection rate for each demographic group: the number of candidates who pass divided by the total number of candidates from that group evaluated. The impact ratio compares each group's selection rate to the selection rate of the group with the highest selection rate. A ratio below 0.8 indicates potential adverse impact.
For tools that produce numeric scores, the audit measures the scoring rate: the proportion of candidates in each group who score above the median. The same 0.8 threshold applies.
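Both calculations can be expressed directly in Python. The helper names and the toy data below are illustrative; the arithmetic follows the selection-rate, scoring-rate, and impact-ratio definitions above.

```python
from collections import Counter
from statistics import median

def selection_rates(outcomes):
    """outcomes: list of (group, passed) pairs.
    Selection rate = candidates passed / candidates evaluated, per group."""
    totals, passes = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def scoring_rates(scored):
    """scored: list of (group, score) pairs.
    Scoring rate = share of each group scoring above the pooled median."""
    med = median(score for _, score in scored)
    totals, above = Counter(), Counter()
    for group, score in scored:
        totals[group] += 1
        if score > med:
            above[group] += 1
    return {g: above[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Divide each group's rate by the highest group's rate.
    A ratio below 0.8 flags potential adverse impact (4/5ths rule)."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Toy pass/fail data: group A passes 40 of 100, group B passes 25 of 100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(outcomes)   # {"A": 0.40, "B": 0.25}
ratios = impact_ratios(rates)       # B's ratio is 0.25 / 0.40 ~= 0.625 -> flagged
```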
A rigorous audit also goes beyond the 4/5ths rule. Intersectional analysis, which examines outcomes for Asian women, Black men, and similar demographic intersections, can surface disparities that group-level analysis misses. Dual-method approaches that combine disparate impact analysis with counterfactual analysis, testing whether changing a candidate's demographic characteristics while holding other factors constant would change the score, provide a more complete picture of where bias enters the model.
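Intersectional analysis is the same impact-ratio arithmetic applied to finer-grained group keys. The sketch below (hypothetical function names and toy data) shows a case where sex-level selection rates are identical, yet one intersection falls far below the 0.8 threshold.

```python
from collections import Counter

def intersectional_impact_ratios(records):
    """records: list of ((sex, race), passed) pairs -- the group key is
    the full intersection rather than a single attribute. Same 4/5ths
    arithmetic as the group-level audit, applied to smaller subgroups."""
    totals, passes = Counter(), Counter()
    for key, passed in records:
        totals[key] += 1
        if passed:
            passes[key] += 1
    rates = {k: passes[k] / totals[k] for k in totals}
    top = max(rates.values())
    return {k: rates[k] / top for k in rates}

def cohort(key, n_pass, n_fail):
    """Build a toy cohort of pass/fail records for one intersection."""
    return [(key, True)] * n_pass + [(key, False)] * n_fail

records = (cohort(("F", "White"), 60, 40) + cohort(("F", "Black"), 20, 80)
           + cohort(("M", "White"), 40, 60) + cohort(("M", "Black"), 40, 60))
ratios = intersectional_impact_ratios(records)
# Sex-level rates are identical (80/200 = 0.40 for both F and M), so a
# two-group audit sees no disparity -- yet ("F", "Black") sits at
# 0.20 / 0.60 ~= 0.33 of the top intersection, far below the 0.8 threshold.
```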
The audit must be conducted by an independent third party. Internal assessments do not satisfy the legal requirement. The independence requirement is central to the law's design: it ensures that the entity reviewing the tool has no financial interest in its approval.
What Vendors and Employers Need to Do Now
For HR tech vendors whose tools qualify as AEDTs, the compliance requirements are specific and time-sensitive. The starting point is determining whether your tool falls within the scope of existing law, beginning with NYC Local Law 144 if your customers include NYC-based employers or employers hiring for NYC-based or remote roles from NYC offices.
If the tool is in scope, the next step is commissioning a bias audit from an independent third party. That audit needs to cover sex and race/ethnicity at a minimum, with intersectional analysis. The results must be published on the employer's website. If you are a vendor, your customers need that audit documentation to satisfy their own disclosure obligations.
For employers and staffing agencies deploying third-party AI tools, the compliance burden does not stop at vendor selection. Under most frameworks, deployers share responsibility for the outcomes those tools produce. Requesting audit documentation from vendors is a starting point. Building internal processes to monitor ongoing outcomes, collect candidate demographic data where lawful, and respond to audit findings is the more durable approach.
The case for independent third-party audits is not just legal compliance. Enterprise buyers increasingly require audit documentation before purchasing HR tech. A Warden Assured certification has become a sales-cycle accelerant for vendors who can demonstrate that their tools have been independently reviewed.
Frequently Asked Questions About AEDTs
Does NYC Local Law 144 apply to my company if we are not based in New York City?
Yes. If your company has offices in New York City and uses AEDTs to make employment decisions affecting NYC-based candidates or employees, or if you are hiring for remote positions from NYC offices, the law applies. The trigger is physical presence in NYC combined with use of the tool for covered roles, not the location of the company's headquarters.
What is the difference between an AEDT and standard applicant tracking software?
Standard applicant tracking software that simply stores and organizes applications without using AI or statistical modeling to score or rank candidates does not qualify as an AEDT. The defining characteristic is the use of machine learning, AI, or statistical modeling to produce a recommendation, prediction, or score that substantially assists a hiring decision. A basic resume storage system is not an AEDT. An AI ranking engine that scores candidates against a predictive fit model is.
Who is legally responsible for a biased AI tool: my company or the vendor who built it?
Responsibility is often shared. As the employer using the tool for hiring or promotion decisions, your company is ultimately accountable for ensuring your employment practices are fair and lawful. However, the vendor who created the tool also has a responsibility to build a fair and reliable product. This is why it's so important for both vendors and employers to conduct thorough testing and maintain clear documentation.
How often does an AEDT bias audit need to happen?
NYC Local Law 144 requires an audit at least once per calendar year. Colorado SB205 requires reassessment within 90 days of any material change to the AI system, in addition to the baseline annual requirement. Continuous monitoring approaches run audit checks monthly or more frequently and are better positioned to catch model drift between formal audit cycles.
What happens if an AEDT shows adverse impact in a bias audit?
The law does not prohibit tools that show adverse impact from being used. It requires disclosure of the results. However, consistent adverse impact against a protected group creates legal exposure under employment discrimination law independent of AEDT-specific regulations. The practical response is to investigate the source of the disparity, document the investigation, implement mitigations where feasible, and monitor outcomes after any changes.