Jack & Jill Stress-Tests Agentic AI for Bias Risk

Warden’s Head of Responsible AI, Martyn Redstone, sat down with Jack & Jill CEO and Co-Founder Matthew Wilson to discuss how they set out to fix a broken recruitment system using safe and fair agentic AI.

What is Jack & Jill’s mission and how does AI fit into that?

The pressure on recruiters and candidates is real. High application volume, fragmented workflows, and inconsistent evaluation standards create friction on both sides of the hiring process.

Jack & Jill’s mission is to reduce that friction by introducing structured, AI-driven infrastructure into hiring.

At the center of the platform are two AI agents, Jack & Jill, designed to support both candidates and employers throughout the process.

These agents guide structured interactions, evaluate responses against predefined, job-relevant criteria, and generate outputs intended to bring greater consistency to hiring decisions.

Why does AI trust matter in agentic AI?

Matt observes that trust is both the solution and the problem.

Many teams do not fully trust the AI systems they are using. They do not know whether recommendations are fair, whether they introduce bias, or whether they would stand up to scrutiny.

That uncertainty creates operational friction. Recruiters override recommendations. Teams duplicate review processes. Organizations carry both human and algorithmic bias risk without fully understanding either.

Fairness matters especially in agentic AI, because talent leaders need evidence that systems are behaving as intended.

Why should candidates and customers care about fairness?

Jill works on the enterprise side, matching candidates to specific roles, so it is crucial that Jill has fairness baked in to alleviate buyer concerns.

“How do you know your AI recruiter is not biased?” has been the most frequently asked question from talent leaders.

Additionally, talent leaders using AI in HR are seeing real regulatory frameworks come into force. The EU AI Act, California FEHA, FCRA obligations in the US, and NYC Local Law 144 all introduce new expectations around transparency, accountability, and monitoring.

Candidates must be informed, decisions must be explainable, and organizations need evidence that systems are behaving as expected.

What is proxy bias and how does Jack & Jill test for it?

Proxy bias is when a system appears to ignore protected characteristics (like race, gender, age), but still discriminates because it uses other variables that stand in for them.

Indicators of protected characteristics, like an applicant’s name, can have an adverse effect on candidate outcomes.

To combat this, Jack & Jill redacts information about the candidate to reduce the potential for proxy bias to seep in.
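A minimal sketch of what such a redaction step can look like. This is an illustrative assumption, not Jack & Jill’s actual pipeline: the field names and the choice of which fields count as proxies are hypothetical.

```python
# Illustrative pre-matching redaction: strip fields that can act as proxies
# for protected characteristics (name, age, location, photo) before a model
# ever sees the profile. Field names here are assumptions for the sketch.

PROXY_FIELDS = {"name", "date_of_birth", "photo_url", "address"}

def redact_profile(profile: dict) -> dict:
    """Return a copy of the profile with likely proxy fields removed."""
    return {k: v for k, v in profile.items() if k not in PROXY_FIELDS}

candidate = {
    "name": "Sam Okafor",
    "date_of_birth": "1990-04-12",
    "skills": ["Python", "SQL"],
    "years_experience": 7,
}
print(redact_profile(candidate))
# → {'skills': ['Python', 'SQL'], 'years_experience': 7}
```

Redaction reduces the surface area for proxy bias, but it cannot catch every correlated signal on its own, which is why it is paired with the auditing described next.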

Bias auditing is an effective way to stress-test whether proxy signals for protected characteristics are influencing outcomes.

How does Jack & Jill stress-test with bias auditing?

As part of becoming a Warden Assured platform, Jack & Jill implements a dual-bias testing methodology. This means the system is evaluated using two complementary statistical techniques: disparate impact analysis and counterfactual testing.

Disparate impact testing evaluates whether the AI system’s outcomes adversely impact one protected group relative to another, typically by comparing selection rates between groups.
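In practice, disparate impact analysis often reduces to a selection-rate ratio, with the “four-fifths rule” (a ratio below 0.8) used as a common screening threshold. A minimal sketch with hypothetical numbers; the groups, counts, and threshold are assumptions, not figures from Jack & Jill’s audit:

```python
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's (four-fifths rule heuristic)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical numbers: group A has 40 of 100 candidates advanced,
# group B has 60 of 100 advanced.
ratio = disparate_impact_ratio(40, 100, 60, 100)
print(round(ratio, 2))  # → 0.67, below the common 0.8 threshold, flagging potential adverse impact
```

A ratio near 1.0 indicates parity; values well below 0.8 are commonly treated as a signal worth investigating rather than proof of discrimination.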

Counterfactual testing systematically stress-tests model outcomes by modifying or removing proxy information (for example, names, gendered terms, or age signals) to determine whether those proxies materially influence decisions.
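The counterfactual idea can be sketched as: hold everything constant, swap one proxy signal, and measure how much the score moves. The scoring function below is a deliberately biased toy stand-in (an assumption for the demo, not Jack & Jill’s model); the names used are illustrative only.

```python
def score_candidate(profile: dict) -> float:
    """Toy scoring model with an injected proxy dependence on the name field,
    included only so the counterfactual test has something to detect."""
    base = profile["years_experience"] * 0.1
    penalty = 0.2 if profile["name"] == "Aisha" else 0.0  # injected bias for the demo
    return base - penalty

def counterfactual_gap(profile: dict, field: str, alternative) -> float:
    """Score difference when a single proxy field is swapped for an alternative."""
    variant = {**profile, field: alternative}
    return abs(score_candidate(profile) - score_candidate(variant))

candidate = {"name": "Aisha", "years_experience": 5}
gap = counterfactual_gap(candidate, "name", "Emily")
print(gap)  # → 0.2, a nonzero gap means the proxy materially influences the outcome
```

In a fair system the gap should be (statistically) indistinguishable from zero across many such swaps; a consistent nonzero gap is evidence that the proxy is driving decisions.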

Jack & Jill applies both techniques, alongside redaction before candidates are matched, creating a continuous stress-testing environment within the platform.

Why Jack & Jill chose ongoing AI assurance

Jack & Jill did not want fairness to rest on internal validation alone.

Internal reassurance is rarely sufficient. Buyers, legal teams, and regulators expect independent assurance, particularly when AI systems influence employment outcomes.

The company therefore sought an AI assurance partner with specific expertise in talent acquisition systems, rather than a generalist auditor.

The objective was not a one-time certification but continuous oversight of the company’s AI system.

As Matt explains:

“We consult with the experts. Having Warden on board is reassuring to our customers. We wanted to show we go beyond the minimum requirement and give confidence that our AI system is safe and fair.”

Independent, continuous validation signals maturity.

Jack & Jill's AI Assurance Dashboard

What is next for Jack & Jill?

Jack & Jill is moving from being a “fair-by-design” platform to an enterprise-safe, defensible AI platform.

The emphasis on independent audits, ongoing monitoring, and regulation signals that their next phase is about continuing to earn and prove trust under scrutiny.

Read Jack & Jill’s report on bias in recruitment here.
