In a recent webinar, pymetrics Founder & CEO, Dr. Frida Polli and Head of Policy + IO Science, Dr. Kelly Trindel, sat down with Merve Hickok, Founder of AIEthicist.org, to discuss the ethics of recruiting AI and the movement to create more governance, accountability, and transparency in algorithmic systems. Read more below for our key takeaways:
Bias in the Recruiting Process
Fairness in hiring is legally assessed using the 80% rule (also known as the four-fifths rule): for any two groups applying to a position, the selection rate of one group should be no less than 80% of the selection rate of the other — in other words, at least 8 candidates screened in from one group for every 10 from the other. The most common hiring methods used today, human resume review and cognitive testing, both fail this rule. On average, human resume review screens in only 7 black candidates for every 10 white candidates, and cognitive testing, even more strikingly, screens in only 3 black candidates for every 10 white candidates. Call-back studies, which evaluate how employers respond to the same resume submitted under different names, paint a similar picture: they have revealed no progress in eliminating racial discrimination in hiring over the last 30 years.
The solution? Audited AI, which does pass the 80% rule: it screens in 9 black applicants for every 10 white applicants, making it 20% better than human resume review at screening in qualified black candidates, and 180% better than cognitive testing.
Societal Influence on Biased Algorithms
The historical context and societal biases at play when data is collected shape that data, and can in turn produce biased algorithms. It is therefore important to evaluate who makes the decisions about an algorithm, when those decisions were made, and whether they are ultimately sound. The individuals who decide on the criteria and on what an algorithm is expected to deliver must be drivers of change; otherwise, inequality in hiring will continue.
Algorithmic Decision-Making for Good
Methods to optimize algorithms for fairness already exist, but there is much room for improvement, and the ability to apply them rests in the hands of buyers and sellers. Frida, Kelly, and Merve advise that when choosing to implement an algorithm, you compare all available options with a critical but open mind. They also emphasize understanding the variables and constructs behind the technologies at hand: how would you explain what a tool is and how it works if asked? If you can't explain it, you need to dive deeper into the fairness it perpetuates (or doesn't).
Promoting Ethical Hiring Technology
The entire ecosystem around hiring technology needs to prioritize the development and use of fair tools. A three-pronged approach is suggested for the parties involved - employers, vendors, regulators, and candidates - to promote ethical hiring technology:
1. Education of employers and policymakers on the potential for new technology to disrupt biased hiring.
2. Greater transparency from vendors of tools to empower employers with information.
3. Amended regulation that promotes less ambiguity for employers, stronger regulatory incentives to abandon biased technology, and investment in transparent and ethical technology.
The panel discussed specific policy and legislative initiatives of interest within each of these three lanes in more depth.
Strategies to maximize the benefits of hiring technologies and mitigate potential harm include conducting in-depth due diligence, demonstrating the job relevance of every question and piece of information obtained from candidates, and ensuring compliance. When building an algorithm, particularly a self-learning one, we have the ability to learn as we go, which is incredibly powerful. Big data provides more opportunities for transparency and accountability: as we analyze trends and patterns, we can make adjustments on the fly. Leverage this adaptability to continuously review, edit, and execute.
If you’d like to learn more about how pymetrics can help you optimize recruiting AI for fairness, contact us here.