Compliance

Apriora Passes Third-Party Bias Audit

December 4, 2024

The Potential for Bias in AI-based Hiring

The advent of AI has brought new hiring tools to market at breakneck speed. They promise to help employers handle the hundreds of applicants they receive for each job posting. However, any underlying bias in an AI model’s construction risks translating into bias in hiring.

Enter New York City’s Bias Audit Law, designed to combat discrimination that may result from the use of Automated Employment Decision Tools (AEDTs). An AEDT is defined as “a computer-based tool that:

  • Uses machine learning, statistical modeling, data analytics, or artificial intelligence. AND
  • Helps employers and employment agencies make employment decisions. AND
  • Substantially assists or replaces discretionary decision-making.”

Apriora Ensures Compliance

Apriora’s AI interviewer, Alex, was designed to enhance hiring by conducting live screening calls in an efficient and unbiased manner. With a 4.5 out of 5-star rating from candidates post-interview, the solution is both effective and candidate-friendly. “By interviewing more candidates with Apriora’s AI, employers can widen their talent aperture and identify qualified applicants from non-traditional backgrounds that may have otherwise been screened out of the hiring process,” said Aaron Wang, cofounder of Apriora.

As Alex can automate ~80% of the recruiting process, Apriora proactively sought a third-party audit of its AI system.

Apriora underwent a thorough, independent audit that examined:

  • The tool’s impact across various demographic groups, focusing on race, ethnicity, and gender
  • Internal governance practices to manage and monitor risks related to bias and fairness
  • Rigorous identification, acknowledgement, and assessment of risks within the model’s design that contribute to bias

Conducted by BABL, a responsible AI consultancy, the audit determined that Apriora passed all of its criteria. See the full results here.

Looking to the Future

The results confirmed that Apriora’s system performs consistently across demographic groups, as measured by the law’s two metrics of focus: “selection rate” and “impact ratio.”
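For readers curious how these two metrics are calculated, the sketch below works through an illustrative example: a group’s selection rate is the share of its candidates who advance, and its impact ratio is that rate divided by the highest selection rate of any group. The group labels and sample data are assumptions for illustration only, not figures from the audit.

```python
# Minimal sketch of the two NYC Bias Audit Law metrics.
# The applicant records below are illustrative, not real audit data.
from collections import defaultdict

# Each record: (demographic category, whether the candidate advanced)
applicants = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False),
    ("group_c", False), ("group_c", True), ("group_c", True),
]

# Selection rate: share of candidates in each category who advanced.
selected = defaultdict(int)
totals = defaultdict(int)
for group, advanced in applicants:
    totals[group] += 1
    selected[group] += int(advanced)

selection_rates = {g: selected[g] / totals[g] for g in totals}

# Impact ratio: each category's selection rate divided by the
# highest selection rate observed across categories.
highest = max(selection_rates.values())
impact_ratios = {g: rate / highest for g, rate in selection_rates.items()}

for g, rate in selection_rates.items():
    print(f"{g}: selection rate {rate:.2f}, impact ratio {impact_ratios[g]:.2f}")
```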

The audit also identified an area for improvement. While Apriora’s risk management practices were sufficient, BABL recommends continued monitoring to ensure the system remains free of bias as the product evolves. This means that, as Alex gets smarter, any updates are rigorously tested before they go into production. Such tests can include comparing recommendations made by the updated system to those made before the update.
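As a purely illustrative sketch of what such a pre-release comparison might look like, the snippet below measures how often an updated model and the current one agree on the same candidates. The function names, stand-in models, and review threshold are hypothetical assumptions, not a description of Apriora’s actual test suite.

```python
# Hypothetical pre-release check: compare recommendations from an
# updated model against the current production model on the same
# candidates. All names and thresholds here are illustrative.
from typing import Callable, Sequence

def agreement_rate(
    candidates: Sequence[dict],
    current_model: Callable[[dict], bool],
    updated_model: Callable[[dict], bool],
) -> float:
    """Fraction of candidates for whom both models make the same recommendation."""
    matches = sum(current_model(c) == updated_model(c) for c in candidates)
    return matches / len(candidates)

# Stand-in models that recommend candidates based on a score field.
current = lambda c: c["score"] >= 0.5
updated = lambda c: c["score"] >= 0.55

candidates = [{"score": s / 10} for s in range(10)]
rate = agreement_rate(candidates, current, updated)

# Flag the update for human review if recommendations diverge too much.
REVIEW_THRESHOLD = 0.95  # illustrative value
if rate < REVIEW_THRESHOLD:
    print(f"Agreement {rate:.0%} is below threshold; hold update for review.")
else:
    print(f"Agreement {rate:.0%}; update may proceed to further checks.")
```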

We are grateful to BABL for their thorough review and remain dedicated to responsible innovation. We welcome continued dialogue with our customers, partners, and the broader community about the ethical use of AI in recruiting. Thank you for supporting us in our mission to improve every industry through efficient, fair and transparent hiring.