
Joined by industry experts Hung Lee, Julie Sowash, and Jeff Pole, Warden AI recently hosted a briefing to explore what the recent Eightfold AI lawsuit means for vendors, employers, and the broader HR tech market.
The class action filing alleges that scraping public data to build "rich talent profiles" may constitute an unauthorized background check under the Fair Credit Reporting Act (FCRA). The case marks a critical expansion of risk: vendors and employers now need to ask, "Is your AI compliant with consumer reporting laws?"
While many organizations are waiting for local AI legislation, the panel noted that existing laws are already being applied to AI tools. If you are building or buying in HR tech, these are the five key takeaways from the webinar.
For years, the industry has operated on the assumption that publicly available data (e.g., from LinkedIn or GitHub) is free to use for scoring. Hung Lee noted that innovation often outpaces regulation, but that gap is closing:
"Technology innovation is getting faster and faster, such that legislation will always be behind... We have to get used to having conversations of this type."
The panel discussed how the "Public Data" defense is facing new challenges as courts examine whether "people aggregators" have the right to profile individuals without consent.
With federal AI legislation stalled in the US, the courts are effectively setting the standards. Julie Sowash explained that plaintiffs' attorneys are using established statutes - like the FCRA (1970) - to address modern AI practices:
"Plaintiffs' attorneys are keeping their eye out for where they can use existing legislation to provide protections in what is now the Wild Wild West of the AI era."
She cautioned that this complaint "opens a Pandora's box" that could lead to further inquiries regarding bias and class protections.
A central topic was the risk of creating "Shadow Dossiers" - comprehensive profiles on candidates who never applied for a role. Jeff Pole highlighted that this practice can inadvertently cross the line into "Consumer Reporting."
Julie Sowash pointed out the specific danger of using inferred data within these profiles:
"I can infer by using quality of education as a rank... If I know that certain groups of people are not progressing to those more senior job skills... then we can assume that there may be bias that's happening."
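Her example can be made concrete. The sketch below is our illustration (not something shown in the webinar): it applies the EEOC's four-fifths rule, the common rule of thumb for flagging adverse impact, by comparing the rates at which an AI screen advances candidates from different groups. The group labels and outcomes are hypothetical.

```python
# Hypothetical check of an AI screen for adverse impact using the
# EEOC four-fifths (80%) rule. All data below is illustrative.

def selection_rate(outcomes):
    """Fraction of candidates the screen advanced (1 = advanced, 0 = filtered out)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g., candidates from one group
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g., candidates from a protected group

ratio = impact_ratio(group_a, group_b)
print(f"Impact ratio: {ratio:.2f}")   # 0.50 in this toy example
if ratio < 0.8:
    print("Below the four-fifths threshold: possible adverse impact.")
```

A ratio below 0.8 does not prove discrimination, but it is exactly the kind of signal a monitoring program should surface before a plaintiff's attorney does.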
The panel agreed that a static, one-time audit is insufficient in this environment. To defend against claims of discriminatory scoring or improper data usage, organizations need a continuous, auditable trail.
Jeff Pole emphasized the importance of measurement as a management tool:
"If you can measure a risk, it's easier to manage it. So that's where we always recommend starting... do your best to measure how it behaves."
Julie Sowash reinforced the operational necessity of this approach:
"We should have real-time auditing... doing it correctly from the build is the only way that you're going to create that auditable defense trail."
This aligns with Warden AI’s core philosophy: Third-party continuous bias monitoring provides the independent evidence required to prove fairness and compliance.
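To make the "auditable defense trail" concrete, here is a minimal sketch of an append-only decision log, assuming a hypothetical scoring pipeline; the `log_decision` helper and its fields are our illustration, not any vendor's API.

```python
# Hypothetical append-only audit log for AI screening decisions.
# Real systems would also need retention policies, access controls,
# and tamper evidence; this sketch only shows the core record shape.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"

def log_decision(candidate_id, model_version, features, score, decision):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,
        # Hash the inputs so the log proves what the model saw
        # without storing raw personal data alongside every decision.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "decision": decision,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Record one screening decision at the moment it is made:
log_decision("cand-001", "ranker-v2.3", {"years_experience": 5}, 0.82, "advance")
```

Logging decisions as they happen, rather than reconstructing them later, is what turns "we believe the model was fair" into evidence.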
While legal threats grab headlines, Jeff Pole argued that the most immediate driver of change is procurement. Sophisticated buyers are requiring vendors to demonstrate risk mitigation before signing contracts.
"We have buyer pressure on the vendors... It is hard to imagine a world in which we are not at least gradually stepping towards mitigation of the risk."
The consensus was that transparency regarding data provenance and fairness is becoming a standard requirement for doing business.
Despite the serious legal subject matter, the tone of the session was constructive. Julie Sowash concluded with a sentiment that reflects the path forward for the industry:
"I am an AI optimist. I think we can do really, really great things if we have the right governance in place."
Fairer AI systems for HR aren't automatic. But with robust testing, transparency, and continuous monitoring, they are achievable.