
Harper vs. SiriusXM: The Growing Legal Risk of AI in Hiring

AI Bias / 20 Aug 2025


Harper vs. SiriusXM shows that AI bias in hiring is already a legal risk: both vendors and employers must show their work or face the courts.

SiriusXM is the latest employer to face an AI hiring bias lawsuit: plaintiff Arshon Harper claims the company’s use of AI in hiring unlawfully discriminated against Black applicants.

Here are the case facts: 

  • Harper applied for around 150 positions at SiriusXM for which he believed he was qualified. He was rejected from all except one, where he received an interview but was not hired.

  • The lawsuit asserts that the AI tool relied on data such as education and address (variables correlated with race) rather than properly assessing his qualifications, thereby perpetuating bias.

  • The case is framed under Title VII of the Civil Rights Act and §1981, alleging both disparate treatment and disparate impact.

This lawsuit arrives on the heels of Mobley vs. Workday, where the court allowed a collective action to proceed against an AI vendor. 

The case underscores the importance of both vendors and employers being able to show their work.

Both Employers and Vendors Share Legal Responsibility 

The difference between these cases is significant. 

In Mobley, the spotlight was on the vendor, Workday, and whether an AI provider can be held liable for discrimination under agency theory. 

In Harper, the defendant is the employer, SiriusXM. 

Together, these cases highlight a shared reality: both builders and users of AI hiring tools carry legal risk.

Liability doesn’t stop at the vendor. It extends to the organizations that deploy these systems in real hiring decisions.

Old Laws, New Technology

Importantly, neither case hinges on newly passed AI regulations. 

Both rely on long-standing US civil rights laws (Title VII, §1981, the Age Discrimination in Employment Act, the Americans with Disabilities Act), this time applied to new technology.

This makes the legal risk immediate, not hypothetical or “down the line” when future AI regulation kicks in. Legal risk and reputational damage are already here.

Evidence from our State of AI Bias Report 

The Harper vs. SiriusXM lawsuit doesn’t come out of nowhere; it reflects the very risks we tracked in our State of AI Bias in Talent Acquisition Report.

Our report data showed:

  • 75% of talent leaders said AI bias is a top concern when adopting AI tools, second only to data privacy.

  • Mentions of Mobley vs. Workday skyrocketed once the case reached collective action status, underscoring how quickly legal action can reshape perception.

  • Yet end users and employees are still unaware of how AI is used in hiring processes, with many not knowing AI is being used at all.

In short, lawsuits like Harper and Mobley aren’t isolated events. 

They’re part of a larger shift where existing civil rights law is being applied to AI in hiring, and both vendors and employers are squarely in the spotlight.

Defensibility is the Real Issue 

AI bias will continue to dominate headlines. But the deeper issue is defensibility. 

In a world where allegations can be filed in days, the real challenge is whether companies can produce hard evidence showing how an AI system works, which decisions it affected, and whether adverse impact occurred.

Without that audit trail, vendors and employers alike are left exposed.
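An audit trail of this kind can start with something as simple as tracking selection rates. As a minimal sketch (the group names and figures below are hypothetical, purely for illustration), the EEOC’s long-standing “four-fifths rule” flags potential adverse impact when one group’s selection rate falls below 80% of the highest group’s rate:

```python
# Sketch of an adverse-impact check using the EEOC "four-fifths rule".
# All selection figures below are hypothetical examples, not real audit data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate.
    A ratio below 0.8 is commonly treated as evidence of adverse impact."""
    return group_rate / reference_rate

# Hypothetical audit counts: (selected, applicants) per demographic group
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, reference)
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} -> {flag}")
```

A check like this is only a first pass; a defensible audit trail would also log which model version scored each applicant and retain the inputs behind each decision.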

Final thoughts

AI in hiring isn’t a “compliance problem” waiting for future laws to mature. 

It’s a defensibility problem unfolding now, under long-standing civil rights law. 

Those who can’t show their work won’t just face compliance challenges; they’ll face the courts.
