If you're building or selling an AI-powered hiring tool — resume screeners, candidate rankers, interview platforms — New York City's Local Law 144 directly applies to your product. As the first major city-level law to regulate Automated Employment Decision Tools (AEDTs), NYC LL144 creates mandatory compliance obligations for HR tech vendors and their enterprise clients alike.
This resource is built for HR tech platforms navigating what the law actually requires: annual third-party bias audits, public disclosure of results, and coverage of race, ethnicity, and sex across candidate pools. Enterprise buyers are now asking for proof of compliance before signing contracts — and a formal audit is becoming a prerequisite, not a differentiator.
Here is a clear breakdown of the law, what it demands from platforms, and how to operationalize compliance without slowing down your roadmap.
What is the NYC AI Bias Audit Law?
The NYC AI Bias Audit Law (Local Law 144 of 2021, enforced since July 5, 2023) is a pioneering legislative effort to combat AI bias in recruitment and HR. The law mandates that:
- An independent and impartial bias audit be completed no more than one year before the tool is used, and a summary of the results published, before an automated employment decision tool is used to support employment or promotion decisions.
- Employers and employment agencies must notify employees and job candidates who are residents of New York City about the use of such tools.
An automated employment decision tool is defined as a tool that uses machine learning, statistical modelling, data analytics, or artificial intelligence to assist or replace discretionary decision-making in employment.
Although it is a local law, its impact extends beyond the city to any company operating within its jurisdiction, including remote positions associated with an office in New York City.
What is a Bias Audit?
A bias audit is an impartial evaluation by an independent auditor to assess the tool’s disparate impact on individuals based on race/ethnicity and sex, including the intersectional categories required by the enforcement rules. Our comprehensive audit includes the calculation of selection rates and impact ratios across all protected categories. This process is specifically designed for HR tech vendors and employers who need to validate their automated employment decision tools (AEDTs).
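The core arithmetic behind a disparate impact analysis is simple. As a minimal sketch (with hypothetical group names and numbers, not real audit data), selection rates and impact ratios can be computed like this:

```python
def selection_rate(selected, total):
    """Fraction of candidates in a group selected (or scored above the
    cutoff) by the tool."""
    return selected / total

def impact_ratios(groups):
    """Impact ratio for each group: its selection rate divided by the
    selection rate of the most-selected group."""
    rates = {name: selection_rate(sel, tot)
             for name, (sel, tot) in groups.items()}
    top = max(rates.values())
    return {name: rate / top for name, rate in rates.items()}

# Hypothetical candidate pool: group -> (selected, total applicants)
pool = {
    "Group A": (100, 200),  # selection rate 0.50
    "Group B": (30, 100),   # selection rate 0.30
}
print(impact_ratios(pool))  # Group A: 1.0, Group B: 0.6
```

LL144 itself sets no pass/fail threshold, but an impact ratio below 0.8 (the EEOC's "four-fifths rule") is the benchmark auditors commonly flag; Group B's 0.6 here would warrant attention.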
What’s included: We provide a full disparate impact analysis, a public summary for your site, and a compliance certificate. Pricing: Our pricing is structured per tool audited, with volume discounts for platforms managing multiple algorithms. Choosing Warden AI means you get more than just a "pass/fail" mark; you receive a roadmap for bias mitigation. We prioritize speed and accuracy, ensuring your audit is completed within a 2–4 week window so your sales cycle remains uninterrupted.
The NYC AI Bias Audit Law requires that the audit be performed by an independent entity not involved in using, developing, or distributing the automated employment decision tool, and having no direct financial interest in the employer or the vendor.
To ensure continuous compliance with the law, the bias audit must be repeated annually. Violations carry civil penalties of $500 for a first offense and up to $1,500 for each subsequent one, with each day a non-compliant tool is used counting as a separate violation.
What are the opportunities for AI Vendors?
While the NYC AI Bias Audit Law introduces significant challenges, it also creates substantial opportunities for AI vendors who can efficiently address their clients’ new compliance needs. Developing a value proposition around helping clients stay compliant with current and upcoming regulations, such as the NYC AI Bias Audit Law, can serve as a significant competitive differentiation that can help retain and attract clients who value ethical standards and legal compliance.
In summary, the growing use of AI in recruitment presents both opportunities and challenges. Laws like New York City’s AI Bias Audit law aim to ensure that the benefits of AI are realised without compromising fairness and equity. For AI vendors, this regulatory landscape offers a unique chance to lead in ethical AI deployment and compliance, setting them apart in a competitive market.
How can Warden AI help?
While the law's direct obligations fall on employers and employment agencies rather than on AI vendors themselves, conducting an independent bias audit can still be very valuable: it helps build trust with employers and clients and is becoming essential to stay competitive in an increasingly regulated market.
Warden AI's auditing platform addresses the NYC Bias Audit Law challenges, helping AI vendors ensure compliance, transparency, and ethical responsibility. Here’s how Warden AI can support your journey towards compliance and innovation:
- Automated bias checks: Regularly check AI systems for bias, using various techniques to identify potential areas of non-compliance and recommend corrective actions that help vendors meet the ethical standards demanded by the NYC Bias Law.
- Proprietary dataset: Overcome insufficient historical data with Warden AI’s extensive, diverse test dataset for probing AI systems for bias issues.
- Suite of reporting tools: Demonstrate trust and transparency with Warden AI’s live AI Assurance dashboards, comprehensive Audit Reports, and assurance badges.
- Human oversight: Facilitate human review and intervention, ensuring effective monitoring and control of high-risk AI applications.
Schedule a demo to find out how Warden AI can help you comply with the NYC Bias Audit Law and stay ahead in an increasingly regulated and competitive market. Warden AI is specifically built for the HR technology ecosystem. Unlike generic legal consultants, our platform integrates directly with your data workflow to provide continuous AI monitoring. This is for growth-stage startups and established enterprise platforms that cannot afford the reputational risk of a failed audit. We provide the "Warden Assured" badge, a recognized mark of quality that tells your customers you have exceeded the minimum requirements of NYC LL 144, securing your position as a leader in responsible AI.
Frequently Asked Questions about NYC LL 144