Many organizations mistakenly believe New York City’s new hiring law only applies to companies headquartered within the five boroughs. The reality is far broader. The law’s jurisdiction is determined by the location of the job, not the employer. If your company uses automated tools to screen, score, or select candidates for any position based in New York City, including remote roles filled by city residents, you are required to comply. Compliance means commissioning an annual bias audit from an independent auditor. This guide will help you determine if your recruitment practices fall under the law's purview and outline the necessary steps to take if they do, regardless of where your office is located.
Key Takeaways
- Understand your core obligations: The law requires an annual, independent bias audit for each automated hiring tool, clear notification to candidates, and public disclosure of the audit summary on your website.
- Recognize the broad scope and high stakes: Compliance is mandatory for any company hiring for a role in NYC, regardless of your headquarters' location. Non-compliance can lead to daily fines and significant legal exposure from potential discrimination claims.
- Build a sustainable compliance program: Lasting compliance goes beyond a single audit; it requires creating a complete inventory of your tools, establishing a strong AI governance framework, and continuously monitoring your systems for fairness.
What Is the NYC Bias Audit Law?
If your company hires or promotes employees in New York City, you need to be aware of a significant piece of legislation impacting how you use technology in your recruitment process. Known as the NYC Bias Audit Law, this regulation introduces new rules for employers using automated tools to make employment decisions. Understanding its requirements is the first step toward ensuring your hiring practices are both fair and compliant. It changes the landscape for any organization that relies on automated systems to build its workforce.
The Core of Local Law 144
At its heart, New York City's Local Law 144 requires employers to take a closer look at the automated tools they use for hiring and promotions. The law mandates that these systems, called automated employment decision tools (AEDTs), undergo an annual AI bias audit conducted by an independent party. This is not an internal checkup; it is a formal, impartial assessment of potential bias. Beyond the audit itself, the law has two other key components. Employers must notify candidates when an AEDT is being used to evaluate them. They also have to make the results of the bias audit publicly available on their website, creating a new layer of transparency in the hiring process.
What the Law Aims to Achieve
The primary goal of this legislation is to promote fairness and reduce discrimination in the workplace. As companies increasingly rely on AI to screen resumes and assess candidates, there is a growing concern that these tools could unintentionally favor certain groups over others. The law directly addresses this by requiring audits that check for disparate impacts based on race, ethnicity, and sex. By making these audits mandatory and public, the city aims to hold organizations accountable for the technology they deploy. It ensures that candidates have more insight into how they are being evaluated and pushes the industry toward developing and using more equitable AI systems.
Who Must Comply With the Law?
New York City’s bias audit law has a broad scope that can catch many employers by surprise. The compliance requirements are not determined by the size of your company or where your headquarters are located. Instead, the key factor is whether your organization uses specific types of technology to make employment decisions affecting jobs or people in New York City. Understanding if your company falls under the law’s jurisdiction is the first step toward building a compliant hiring process. Many organizations find that even if they are not based in the city, their recruitment practices require them to adhere to these rules.
Identifying Covered Employers
The law applies to any employer or employment agency that uses Automated Employment Decision Tools (AEDTs) to screen candidates for hiring or employees for promotion. This applies to businesses of all sizes, from small startups to large multinational corporations, across both the public and private sectors. If your company relies on an algorithm or AI system to help filter, score, or select applicants for a position, you are required to comply. The regulation is triggered by the use of the tool in the decision making process, regardless of how much weight that tool’s output is given. The responsibility for the audit rests with the employer using the tool, not the vendor who created it.
The Law's Geographic Reach
A common point of confusion is the law's geographic application. A company does not need to be physically located in New York City to be subject to the regulation. The law’s reach is determined by the location of the job. If you are hiring for a position based in New York City, you must comply, even if your company is headquartered in another state or country. This also extends to remote positions if the person hired for the role will be working from a location within the city. Any organization that evaluates job candidates or current employees for roles connected to New York City using an AEDT must follow the law’s requirements for bias audits and transparency notices.
What Is an Automated Employment Decision Tool (AEDT)?
New York City's Local Law 144 centers on a specific category of technology: the Automated Employment Decision Tool, or AEDT. Understanding what qualifies as an AEDT is the first step for any employer looking to comply with the law. The definition is intentionally broad, capturing a wide range of software used in modern recruitment and employee advancement. If your organization uses technology to filter, score, or rank candidates, it is crucial to determine if those systems fall under the law's purview. The following sections break down what an AEDT is and how its role in hiring has prompted this new level of regulatory oversight.
Defining AEDTs and Common Examples
The law defines an AEDT as any computational process that issues a simplified output, like a score or classification, which is used to either replace or "substantially assist" a human decision-maker in hiring or promotion. Think of tools that automatically screen resumes for keywords, software that analyzes video interviews for specific traits, or platforms that rank candidates based on their responses to an assessment. These systems use data and algorithms to make recommendations. Because the definition is so broad, many common types of hiring software can be considered AEDTs. The key is whether the tool's output is a significant factor in making an employment decision.
The Role of AEDTs in Hiring and Promotions
AEDTs are designed to make hiring and promotion cycles more efficient, helping teams sort through large applicant pools to find the best fit. However, their efficiency can come with a risk. The primary concern behind Local Law 144 is the potential for these tools to perpetuate or even amplify biases, resulting in discriminatory outcomes. The law requires an audit to check for "disparate impact," which occurs when a tool's results unfairly disadvantage individuals based on their race, ethnicity, or gender, even if there was no intent to discriminate. By requiring an AI bias audit, the city aims to bring transparency to these automated systems and ensure they support fair hiring practices.
What Does a Bias Audit Entail?
Complying with New York City’s law involves more than just running a quick check on your software. The regulation outlines a specific process for auditing, testing, and reporting on the tools you use for hiring and promotion. Understanding these requirements is the first step toward building a compliant and fair recruitment process.
The Mandate for an Independent Annual Audit
The law requires employers to commission an AI bias audit for each of their automated employment decision tools, or AEDTs, at least once a year. A critical part of this rule is the term “independent.” The audit must be conducted by a third party who was not involved in developing or using the tool. This ensures the evaluation is impartial and objective. This annual cadence means compliance is not a one-time project but an ongoing commitment. Your company must schedule and complete a new audit every 12 months for as long as the tool remains in use.
Key Components of the Audit
The core of the audit is a test for disparate impact. This analysis determines if an AEDT disproportionately screens out candidates based on their race, ethnicity, or sex. The audit calculates an “impact ratio” by comparing the selection rate of a specific demographic group to the selection rate of the group with the highest selection rate. For example, it might compare the rate at which female applicants are selected for a role against the rate for male applicants. The law specifies the intersectional race, ethnicity, and sex categories that must be analyzed, providing a clear framework for the evaluation.
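The impact-ratio arithmetic described above can be sketched in a few lines. This is an illustrative example only: the category labels and applicant counts are hypothetical, not figures from any real audit, and a real audit must cover all the intersectional categories the law specifies.

```python
# Illustrative sketch of the impact-ratio calculation: each category's
# selection rate divided by the highest category's selection rate.
# Counts below are hypothetical example data.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a category who were selected."""
    return selected / total

def impact_ratios(category_counts: dict) -> dict:
    """category_counts maps a demographic category to (selected, total)."""
    rates = {cat: selection_rate(sel, tot)
             for cat, (sel, tot) in category_counts.items()}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Hypothetical applicant pool: (number selected, number who applied)
counts = {
    "Male":   (60, 100),  # selection rate 0.60 (the highest here)
    "Female": (45, 100),  # selection rate 0.45, so ratio 0.45 / 0.60 = 0.75
}
print(impact_ratios(counts))
```

In this toy data, the female impact ratio is 0.75, meaning female applicants were selected at 75% of the rate of the highest-selected group.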
Transparency and Public Reporting Rules
The audit’s findings cannot remain internal. Before using an AEDT, and at least annually thereafter, you must publish a summary of the most recent bias audit on your company’s website. This summary must be placed in a clear and conspicuous location where job seekers or employees can find it. The public disclosure needs to include the date of the most recent audit and the date the tool was first used. It also must share the impact ratios for all categories, making the tool's performance transparent to the public. You can see examples of this transparency in the Warden Assured Directory.
Common Hurdles in Achieving Compliance
Achieving compliance with New York City’s bias audit law presents several challenges for employers. The process involves more than just running a test; it requires careful planning, data management, and legal interpretation. Understanding these common obstacles is the first step toward building a successful and sustainable compliance strategy. From identifying the right tools to finding a qualified auditor, organizations must address several key areas to meet the law's requirements and mitigate risk.
The Challenge of Locating Every AEDT
The first hurdle for many organizations is simply identifying every tool that qualifies as an AEDT. Your company must determine which systems used in hiring or promotion either replace human discretion or substantially assist it. These tools may be standalone resume screeners, or they could be features embedded within larger Human Resource Information Systems (HRIS). In large companies, different departments might use various unsanctioned tools, making a complete inventory a complex undertaking. A thorough internal review is necessary to map out every piece of technology that plays a role in employment decisions before an audit can even begin.
Securing a Qualified, Independent Auditor
The law requires that your AEDT undergo an annual check by an independent auditor. This audit must test for disparate impacts on candidates based on their race, ethnicity, and sex. However, the law itself offers limited guidance on what makes an auditor "independent," leaving much to regulatory rules and interpretation. Finding an auditor who is truly impartial and possesses the necessary expertise in data science, employment law, and AI ethics can be difficult. The process of an AI bias audit itself requires deep technical and legal knowledge, and the pool of qualified professionals who can perform this service to a legal standard is still developing.
Managing Data for a Successful Audit
A successful bias audit depends entirely on the quality and completeness of your data. You will need to examine the data your AEDTs process alongside other employee information to see if you have what is needed to properly check for bias. Many organizations find that their historical data is insufficient, lacking the specific demographic details required for a statistically valid analysis. Preparing for an audit often involves a significant data gathering and cleaning effort. You must ensure you have a sufficient sample size and accurate records for both selected and non-selected candidates to produce a meaningful audit result.
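A simple pre-audit data check can reveal these gaps early. The sketch below flags demographic categories that make up only a sliver of your records; the field names are hypothetical, and the 2% cutoff is illustrative (the city's rules allow auditors to exclude very small categories with an explanation, but confirm the exact treatment with your auditor).

```python
# Pre-audit data check: which demographic categories are too sparse
# in our records for a reliable analysis? Field names and the 2%
# cutoff are illustrative assumptions, not legal requirements.
from collections import Counter

def category_coverage(records: list, field: str = "sex") -> dict:
    """Share of total records held by each category in `field`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def flag_sparse_categories(records: list, field: str = "sex",
                           min_share: float = 0.02) -> list:
    """Categories below `min_share` of the data; these may need more
    data collection before they can be analyzed meaningfully."""
    return [cat for cat, share in category_coverage(records, field).items()
            if share < min_share]

# Hypothetical applicant records: 55 female, 44 male, 1 unknown
records = ([{"sex": "Female"}] * 55 + [{"sex": "Male"}] * 44
           + [{"sex": "Unknown"}] * 1)
print(flag_sparse_categories(records))  # "Unknown" is only 1% of records
```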
Interpreting the Law's Vague Language
While the NYC Bias Audit Law is in effect, some of its language remains open to interpretation. Key terms, including what precisely constitutes an AEDT, what makes an auditor truly "independent," and which statistical methods are best for measuring bias, are still being clarified through regulatory guidance and early enforcement actions. This legal ambiguity can make it challenging for employers to build a compliance strategy with confidence. This uncertainty requires a robust framework, often supported by an AI assurance platform, to create a defensible and transparent process that can adapt as legal interpretations evolve.
How to Prepare for Compliance
Meeting the requirements of Local Law 144 involves more than just a single audit. It requires a systematic approach to how you select, use, and monitor automated tools in your hiring process. Preparing for compliance means building a foundation of transparency and accountability within your organization. By taking a few deliberate steps, you can create a clear path toward meeting your legal obligations and fostering a fairer hiring environment. These actions will not only help you adhere to the law but also strengthen your overall HR practices.
Start with an Internal AEDT Inventory
The first step is to create a comprehensive inventory of every Automated Employment Decision Tool (AEDT) your organization uses. This means identifying any software or system that assists or replaces human decision making in hiring and promotions. You need to document which tools are in use, what they do, and where they fit into your recruitment workflow. Maintaining a detailed and current list is fundamental. This inventory serves as your map, showing you the full scope of what needs to be audited and managed, and is a critical part of your AI assurance strategy.
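One lightweight way to keep this inventory structured is a simple record per tool. The fields below are illustrative suggestions, not a format prescribed by the law; the tool name is made up.

```python
# A minimal shape for an AEDT inventory entry. Field names are
# illustrative assumptions; adapt them to your own governance needs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AEDTRecord:
    name: str                  # tool or vendor product name
    purpose: str               # e.g. "resume screening", "interview scoring"
    workflow_stage: str        # where it sits in the recruitment pipeline
    first_used: str            # date first deployed (needed for disclosure)
    last_audit: Optional[str]  # date of most recent independent bias audit
    nyc_roles: bool            # does it evaluate candidates for NYC roles?

inventory = [
    AEDTRecord("ExampleScreen", "resume screening", "application review",
               first_used="2023-07-01", last_audit="2024-06-15",
               nyc_roles=True),
]

# Tools touching NYC roles with no audit on record are compliance gaps
gaps = [t.name for t in inventory if t.nyc_roles and t.last_audit is None]
```

A query like `gaps` at the end turns the inventory from a static document into a living checklist your governance team can review regularly.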
Build a Framework for AI Governance
Once you know which tools you’re using, you need a clear framework for governing them. This involves establishing rules for how your company uses and manages its AEDTs. Your framework should align with best practices and existing regulations. It is also wise to assemble a dedicated team responsible for overseeing these tools. This team should have the authority to review and test the systems, ask questions of the vendors or developers, and report any issues that arise. A strong governance structure demonstrates a commitment to responsible AI use and provides a clear line of accountability.
Create Clear Protocols for Candidate Notification
Transparency with candidates is a non-negotiable part of the law. You must inform applicants and employees when an AEDT will be used to evaluate them for a role. This notice needs to be clear and accessible. It should explain that an automated tool is being used, specify the job qualifications or characteristics it assesses, and provide information on how an individual can request an alternative evaluation method or an accommodation. Proactive communication not only ensures compliance but also builds trust with candidates, showing them you are committed to a fair process, which is a key component of any AI bias audit.
Implement a System for Continuous Monitoring
Compliance is not a one-and-done task. It requires ongoing attention. You should implement a system for continuously monitoring your AEDTs to ensure they are performing as expected and remain stable over time. Regular assessments of a tool’s performance and its impact on different demographic groups will help you identify and address potential bias or other issues before they become significant problems. This continuous oversight ensures your tools remain effective and compliant long after the initial audit is complete, helping you maintain a high standard of fairness like the one set by Warden Assured.
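A monitoring routine can be as simple as recomputing impact ratios on fresh data and alerting when any category drifts below an internal threshold. Note that Local Law 144 does not set a pass/fail number; the 0.80 figure below is the common "four-fifths" rule of thumb from US employment guidance, used here only as an illustrative alert level, and the ratios shown are hypothetical.

```python
# Periodic monitoring sketch: alert when any category's impact ratio
# drifts below an internal threshold. The law sets no pass/fail line;
# 0.80 (the "four-fifths" rule of thumb) is an illustrative choice.

def monitoring_alerts(impact_ratios: dict, threshold: float = 0.80) -> list:
    """Return categories whose impact ratio falls below the threshold."""
    return [cat for cat, ratio in impact_ratios.items() if ratio < threshold]

# Hypothetical ratios computed from this quarter's hiring data
ratios = {"Male": 1.00, "Female": 0.91, "Hispanic or Latino": 0.76}
print(monitoring_alerts(ratios))  # flags the category below 0.80
```

Running a check like this each quarter, rather than only at audit time, gives you a chance to investigate and remediate drift before the next annual audit makes it public.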
What Are the Penalties for Non-Compliance?
Failing to comply with New York City’s bias audit law carries significant and compounding consequences. The legislation was designed with clear enforcement mechanisms that include financial penalties, but the risks extend far beyond monetary fines. For employers and HR technology vendors, understanding these penalties is a critical part of developing a comprehensive compliance strategy. The law’s structure makes inaction a costly choice, not just in terms of dollars, but also in legal exposure and public reputation. Proactive measures are essential for any organization that uses automated tools in its employment decisions.
Understanding the Fines and Penalties
The law specifies a civil penalty of up to $500 for a first violation. Each subsequent violation carries a penalty of between $500 and $1,500. Crucially, the city considers each day an employer uses a non-compliant tool or fails to provide proper notice as a separate violation. This means fines can accumulate rapidly. For example, using an unaudited tool for a single week could result in seven distinct violations. These penalties apply to both the use of an un-audited AEDT and the failure to properly notify candidates and employees. An independent AI bias audit is the foundational step to confirm your tools are compliant and avoid these accumulating costs.
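The compounding arithmetic is easy to underestimate, so a worst-case sketch can help make the exposure concrete. This uses the penalty amounts described above ($500 for the first violation, up to $1,500 for each subsequent one) purely as an illustration; actual penalties are determined by the city and may differ.

```python
# Worst-case fine accumulation if each day of non-compliant use counts
# as a separate violation. Amounts reflect the figures described above
# and are for illustration only; actual penalties are set by the city.

def max_exposure(days: int, first: int = 500, subsequent: int = 1500) -> int:
    """Maximum fine for `days` consecutive daily violations of one kind."""
    if days <= 0:
        return 0
    return first + (days - 1) * subsequent

print(max_exposure(7))  # one week: 500 + 6 * 1500 = 9500
```

And since audit failures and notice failures can each accrue separately, the real exposure for a single unaudited tool can be higher still.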
Beyond Fines: The Risk of Litigation
While the financial penalties are clear, the greater risk for many organizations lies in potential litigation and reputational harm. Local Law 144 is fundamentally a civil rights law, designed to prevent discrimination in hiring and promotions. A public finding of bias, or the failure to conduct a required audit, could become powerful evidence in a discrimination lawsuit filed under other local, state, or federal laws. If an audit reveals bias and the organization does not take corrective action, it creates a documented record of liability. Public reports of biased hiring practices can also damage a company's brand, making it harder to attract top talent and maintain customer trust. Adhering to a recognized standard like Warden Assured helps demonstrate a commitment to fairness that can mitigate these legal and reputational risks.
Maintaining Compliance for the Long Term
Achieving compliance with the NYC Bias Audit Law is not a one-time task. The law requires an annual audit, and the world of AI and regulations is constantly changing. Maintaining compliance means building a durable program that integrates into your company’s operations. It requires a forward-looking approach that treats fairness and transparency as ongoing business priorities, not just a box to check once a year. This involves continuous training, regular risk assessments, and a clear strategy for managing your AI tools over their entire lifecycle. By embedding these practices into your workflow, you can protect your organization from penalties and build a more equitable hiring process for the future.
The Importance of Ongoing Team Training
Compliance is a team sport. Your legal and HR departments must work together to prepare and update the notices you provide to candidates and employees about the use of automated tools. But the responsibility doesn't end there. Hiring managers and recruiters who interact with these systems daily need to understand their basic functions and limitations. Ongoing training ensures that everyone involved in the hiring process is aware of their obligations under the law. It helps them answer candidate questions accurately and spot potential issues before they become significant problems, making your team the first line of defense in your compliance efforts.
Conduct Regular Risk Assessments
The automated tools you use today may not be the same ones you use next year. Your hiring needs will also change. That's why it's essential to conduct regular risk assessments of your AI systems. An algorithmic risk analysis helps you understand which groups of people might be affected by your automated tools and how significant those effects could be. This process involves reviewing your AEDTs to identify any new or emerging sources of bias. Think of it as a routine checkup for your AI. Regular assessments allow you to catch potential issues early, before they lead to non-compliance or harm your company's reputation.
A Proactive Approach to Addressing Bias
An audit that finds potential bias is not a failure; it is an opportunity to improve. The key is to have a plan in place before the audit even begins. If an audit reveals that an AEDT is producing unfair outcomes, you need a clear and immediate process for remediation. This involves investigating the source of the bias, working with the tool's vendor or your internal development team, and taking concrete steps to fix the disparities. A proactive approach to AI bias auditing demonstrates to regulators, candidates, and your own employees that you are committed to fairness and are actively working to correct any issues that arise.
Evolving Your AI Governance Strategy
A robust AI governance strategy is the foundation of long-term compliance. This is more than just a document; it is a living framework that guides how your organization selects, implements, and monitors automated employment tools. Your strategy should create a clear system for managing your AEDT inventory, define the schedule for risk assessments, and outline the protocols for training and bias remediation. As AI technology advances and regulations evolve, your governance strategy must adapt. By continuously refining your approach, you ensure that your company not only meets current legal requirements but is also prepared for the future of AI in the workplace.
Related Articles
- NYC Local Law 144 - Solution - Warden AI
- Navigating the NYC Bias Audit Law for HR Tech platforms - Warden AI
- HR Tech Compliance: Everything You Need to Know about NYC Local Law 144 - Warden AI
- What is a Third-Party AI Audit? A Simple Guide - Warden AI
- Colorado SB 205 vs. NYC Local Law 144: How Do AI Hiring Regulations Compare? - Warden AI
Employer FAQs on the NYC Bias Audit Law
Does this law apply to my company if we are not based in New York City?
Yes, it very well might. The law’s requirements are tied to the location of the job, not the location of your company headquarters. If you are using an automated tool to screen candidates for a position based in New York City, you must comply. This also applies to remote roles if the person you hire will be working from a location within the city.
My software vendor says their tool is compliant. Is that enough?
While a vendor’s own testing is a positive sign, the law places the responsibility for compliance squarely on the employer using the tool. You, as the employer, are required to commission an annual bias audit from an independent third party. You cannot simply rely on a vendor's claims or their internal audit. The purpose of the law is to ensure an impartial assessment of how the tool is used in your specific hiring context.
What happens if an audit reveals that our tool has a biased impact?
Discovering a biased impact is not an automatic penalty. Instead, you should view it as a critical opportunity to improve your hiring process. The audit provides the data you need to investigate the source of the disparity and take corrective action, such as adjusting the tool's configuration or working with the vendor on a solution. Having a clear plan to address any identified issues demonstrates a good faith commitment to fairness to both regulators and candidates.
What kind of data is needed to perform a bias audit?
A successful audit requires historical data from your hiring process. Specifically, an auditor will need records for both selected and non-selected candidates for a particular job, along with the demographic information for race, ethnicity, and sex required by the law. Gathering and organizing this data is often the most intensive part of preparing for an audit, so it is wise to review your data collection practices well in advance.
Is this just a New York City law, or should I expect similar regulations elsewhere?
New York City is a leader in this area, but it is not alone. A growing number of jurisdictions, including the state of Colorado and the European Union, are implementing their own rules for AI in the workplace. The principles of fairness, transparency, and accountability are becoming global standards. Building a strong AI governance and compliance framework now will not only help you meet today's requirements but also prepare your organization for future regulations.