Learn how to ensure compliance, demonstrate fairness, and stay competitive under NYC Local Law 144
The use of AI in the workplace is becoming increasingly common. By some estimates, as many as 83 percent of employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen candidates for hire. Among other benefits, these AI tools promise to reduce recruitment costs, overcome bias and deliver better candidate matches.
However, applying AI in recruitment is not without challenges. One key challenge is detecting and preventing bias. While AI has the potential to reduce the bias inherent in human-driven decision-making, it also comes with the risk of amplifying existing biases at scale. If left unaddressed, this bias can unfairly hinder individuals’ participation in the economy and increase legal risks for businesses.
The application of AI in recruitment is identified as high-risk in several jurisdictions, leading to rapid developments in laws and regulations in this area. A notable milestone in this regulatory landscape is New York City’s Local Law 144, commonly known as the AI Bias Audit law, which came into effect on July 5, 2023.
The NYC AI Bias Audit Law is a pioneering legislative effort to combat AI bias in recruitment and HR. The law mandates that employers and employment agencies using automated employment decision tools subject those tools to an independent bias audit and publish a summary of the results.
An automated employment decision tool is defined as a tool that uses machine learning, statistical modelling, data analytics, or artificial intelligence to assist or replace discretionary decision-making in employment.
Although it is a local law, its impact extends beyond the city to any company operating within its jurisdiction, including remote positions associated with an office in New York City.
A bias audit is an impartial evaluation by an independent auditor to assess the tool’s disparate impact on individuals based on race/ethnicity and sex.
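To make the idea of disparate impact concrete, the sketch below shows how an impact ratio is typically computed for a selection-based tool: each category's selection rate is divided by the selection rate of the most-favoured category. The category labels, counts, and the four-fifths benchmark in the comments are illustrative assumptions rather than text from the law, and a full audit also covers intersectional race/ethnicity-by-sex categories.

```python
# Simplified illustration of an impact-ratio calculation used in
# disparate impact analysis. Category labels and counts are hypothetical;
# a real Local Law 144 audit uses historical or test data supplied by
# the employer or vendor and reports intersectional categories as well.

def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Return each category's selection rate divided by the highest selection rate."""
    selection_rates = {
        category: selected[category] / applicants[category]
        for category in applicants
    }
    best_rate = max(selection_rates.values())
    return {category: rate / best_rate for category, rate in selection_rates.items()}


if __name__ == "__main__":
    # Hypothetical screening outcomes by sex category
    applicants = {"female": 400, "male": 600}
    selected = {"female": 120, "male": 240}

    for category, ratio in impact_ratios(selected, applicants).items():
        print(f"{category}: impact ratio = {ratio:.2f}")
    # female: 0.30 / 0.40 = 0.75 -- below the 0.80 "four-fifths" benchmark,
    # a common (non-statutory) signal of potential adverse impact
```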
The NYC AI Bias Audit Law requires that the audit be performed by an independent entity not involved in using, developing, or distributing the automated employment decision tool, and having no direct financial interest in the employer or the vendor.
To ensure continuous compliance with the law, the bias audit must be repeated annually, and each violation can result in penalties of up to $1,500.
While the NYC AI Bias Audit Law introduces significant challenges, it also creates substantial opportunities for AI vendors who can efficiently address their clients’ new compliance needs. Building a value proposition around helping clients stay compliant with current and upcoming regulations, such as the NYC AI Bias Audit Law, can serve as a significant competitive differentiator, helping to retain and attract clients who value ethical standards and legal compliance.
In summary, the growing use of AI in recruitment presents both opportunities and challenges. Laws like New York City’s AI Bias Audit law aim to ensure that the benefits of AI are realised without compromising fairness and equity. For AI vendors, this regulatory landscape offers a unique chance to lead in ethical AI deployment and compliance, setting them apart in a competitive market.
While AI vendors may fall just outside the direct scope of the AI Bias Audit Law, conducting an independent bias audit can still be very valuable: it helps build trust between vendors and employers and is becoming essential to stay competitive in an increasingly regulated market.
Warden AI's auditing platform addresses the challenges of the NYC Bias Audit Law, helping AI vendors ensure compliance, transparency, and ethical responsibility. Here’s how Warden AI can support your journey towards compliance and innovation:
Schedule a demo to find out how Warden AI can help you comply with the NYC Bias Audit Law and stay ahead in an increasingly regulated and competitive market.