Fair, transparent and audited: how inploi is raising the bar for responsible AI in talent acquisition

News

13/08/2025

[Image: the inploi logo alongside the Warden Assured logo]

At inploi, we believe AI can make hiring fairer, faster, and more human. But that only works when the technology behind it is tested rigorously and publicly, not hidden behind claims or static policy PDFs.

That’s why we’re proud to share two milestones:

1. Our AI-powered Application Scoring has once again been independently audited by Warden, with an all-green result showing no measurable bias across gender, race, or intersectional groups (live results are published on the AI Assurance Dashboard).

2. inploi has been included in the new Warden Assured Directory, a curated list of AI providers who meet ongoing fairness and compliance standards.

For our customers, this means one thing:

Trust isn’t assumed. It’s proven continuously.


Why we did it

AI adoption in HR is accelerating, and so is scrutiny. Talent leaders are asking sharper questions than ever:

  • Are these tools fair?

  • How do we know?

  • What happens if the model behaves differently in six months?

  • Who is accountable?

Enterprise and public-sector teams face real risk: regulatory, reputational, and operational. Good intentions aren’t enough. Proof matters.

Warden gives us that proof: independently, repeatedly, and in public.


What the audit tested

Warden’s method is rigorous and hiring-specific. They used a combination of thousands of real CVs (collected with consent for auditing purposes) and synthetic CVs. The real CVs came from actual candidates, while the synthetic ones were generated to be identical in skills, education, and experience but varied in protected characteristics such as gender, race, name, and religion.

The goal: see whether inploi’s scoring model treats equivalent candidates differently based on attributes that shouldn’t matter.

It didn’t.

This doesn’t mean our system is flawless; no AI application ever is. But it shows we’ve taken real steps to de-risk adoption, reduce bias, and build trust into the product itself.

“Customers are rightly cautious when it comes to AI. They see the upside, but they’re nervous about bias and what it means for their brand. This audit gives them confidence that we’ve built something trustworthy, and that we’re willing to prove it.”

- Nicolas Escudero, Product Manager at inploi

View the public audit dashboard


What this means for employers

For teams navigating compliance frameworks like NYC Local Law 144 and the upcoming EU AI Act, Warden assurance removes major barriers to adoption.

Warden assurance gives you:

✓ Faster procurement: risk, compliance, and legal have verified proof ready to go
✓ Less internal friction: no lengthy audits, no uncertainty
✓ Higher confidence: transparent, repeatable, third-party validation
✓ Public accountability: results aren’t hidden, they’re published

As one customer told us:

“This is the first time a TA tech vendor has come to us with live AI assurance. It’s a shortcut through what would’ve been six months of risk assessment.”


Included in the Warden Assured Directory

We’re also pleased to share that inploi is now listed in the Warden Assured Directory, alongside a small group of vendors meeting their standards for fairness and transparency.

Being included means our scoring model is:

  • Independently audited

  • Continuously monitored

  • Publicly reported

  • Compliant with established fairness testing methods

For TA leaders and candidates alike, it’s a clear signal:
inploi is committed to responsible AI, not just AI capability.


Responsible AI, by design

Candidate experience is built on trust. And trust is built through choices, not slogans.

We chose external auditing because it’s the right thing to do.
We publish results publicly because transparency builds confidence.
And we audit monthly because responsible AI isn’t a one-off milestone; it’s a practice.

If you’d like to see how our AI-powered Application Scoring works in practice, we’d be happy to walk you through it. Book a demo now.


FAQs

1. What is Warden AI assurance?

Warden AI assurance is an independent audit that evaluates AI systems for fairness, bias, and compliance with regulations such as NYC Local Law 144. It provides transparent, third-party validation that an AI model behaves consistently and without discriminatory impact.

2. Why does AI bias auditing matter in recruitment?

In hiring, bias can lead to unfair outcomes, reputational risk, and regulatory penalties. Third-party audits help organisations ensure their AI tools treat candidates fairly and meet legal and ethical standards.

3. How often is inploi’s AI Application Scoring audited?

Our scoring model is audited monthly. Results are published publicly on the AI Assurance Dashboard so clients can see real-world performance over time.

4. What does it mean for inploi to be listed in the Warden Assured Directory?

Inclusion means our AI has passed Warden’s fairness tests and is continuously monitored. It signals to employers that inploi meets high standards for transparency, compliance, and responsible AI.