The debate around AI in hiring is shifting. The focus is no longer on whether AI can be biased, but on how to measure it, manage it, and make sure it's improving outcomes.
That’s the goal behind our new report, State of AI Bias in Talent Acquisition 2025.
It draws on over 150 real-world audits, a million test samples, and vendor insights to show where things really stand.
A key finding: AI systems are passing standard fairness tests, and in many cases, outperforming the human decisions they replace.
For teams building AI-native ATS platforms, like Kula, there’s a clear takeaway: bias isn’t just a risk to mitigate. It’s also a space to lead.
Below are our top three insights from the 2025 State of AI Bias in Talent Acquisition report, and what each one means for TA teams.
1. AI can improve fairness, but only with the right design
The report data shows that 85 percent of audited systems met fairness thresholds across race and sex. In fact, AI decisions were up to 45 percent fairer than human ones when benchmarked using industry-standard methods.
This is especially relevant for end-to-end systems like Kula’s. Features like AI scoring, Conversational Analytics, and AI Notetaking can reshape decision-making across the hiring funnel.
But with that level of reach comes responsibility, and also the chance to raise the bar for fairness in ways manual processes simply cannot.
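For context on what "meeting a fairness threshold" typically means in practice: a widely used industry benchmark in US hiring is the EEOC's four-fifths rule, which compares each group's selection rate to the highest group's rate. Here is a minimal sketch of that check in Python, using hypothetical counts (the report does not publish per-vendor selection data):

```python
# Minimal sketch of the EEOC four-fifths (adverse impact) check,
# a common industry-standard fairness threshold in hiring.
# The selection counts below are hypothetical, not report data.

selections = {
    # group: (candidates selected, candidates considered)
    "group_a": (48, 120),
    "group_b": (35, 100),
}

# Selection rate per group
rates = {g: picked / pool for g, (picked, pool) in selections.items()}

# Impact ratio: each group's rate relative to the highest-rate group
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "passes" if ratio >= 0.8 else "flags adverse impact"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({status})")
```

Bias audits under NYC Local Law 144 report a closely related impact-ratio metric, which is why numbers like these increasingly appear in vendor documentation.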
2. Bias varies more than people expect
Not all systems are equal. Fairness scores varied by as much as 40 percent between vendors, and even among tools that passed basic thresholds, performance differed widely.
That’s why transparency matters. HR leaders are increasingly expecting clear documentation, published audits, and explainability built into the product.
Some vendors are already delivering on this, setting new expectations across the market.
3. Compliance is no longer a future concern
Regulations like NYC Local Law 144 are already shaping procurement, and the EU AI Act and Colorado SB205 are next.
Yet our survey found that fewer than half of vendors have completed compliance work for even one of these laws. In fact, only 20 percent of vendors are following best practices for responsible AI development.
Customers are starting to ask harder questions, while many vendors are still catching up. Some, though, are clearly further along, combining product design with real governance and showing that speed and responsibility can go hand in hand.
Kula is one of the vendors actively supporting and educating the market on algorithmic bias.
P.S. Check out Kula’s Global Guide to AI Regulations for more insight on that topic.
What comes next for TA teams?
Hiring teams want to move fast. AI can accelerate decisions, but only if it's built on solid ground.
That means testing for bias, embedding transparency into the product, and putting real safeguards in place to protect candidates and employers alike.
We’re grateful to all the vendors, like Kula, who contributed to the State of AI Bias in Talent Acquisition Report.
It’s a conversation the industry needs, and we’re looking forward to seeing how leaders across the space take it forward.
Read the full State of AI Bias in Talent Acquisition 2025 report