pymetrics recently hosted a webinar, “Bias, Diversity, and Inclusion in the Digital World,” featuring our Founder & CEO, Frida Polli; Head of Client Insights and Analytics, Adrianne Pettiford; and Professor of Organizational and Industrial Psychology & pymetrics Advisor, Lori Foster. In this session, they discussed how traditional ways of combating bias in the workplace have proven ineffective, nuances to consider when evaluating technology for bias, how to design processes for ongoing bias audits of technology, and the importance of representation and diversity in training sets.
Now more than ever, organizations are looking to improve their Diversity & Inclusion initiatives, but training bias out of human beings is much harder than it seems. The foundation of bias in our brains is actually quite functional and adaptive. We’re programmed to make quick decisions easily, conserving our slow, deliberative thinking for when it’s most critical. For instance, on seeing an object with four legs and a flat surface, the mind instantly identifies a table; there is no need to study the object or deliberate. But these functional shortcuts, which we begin learning in infancy and which become deeply entrenched in our brains, unfortunately lead us to absorb biases that aren’t so adaptive or functional. Because they take so long for us to learn, they take just as long, if not longer, to unlearn.
Such bias comes into play in the employment context when a hiring manager is faced with a stack of resumes. A decision has to be made, often relatively quickly. While he or she may be capable of reviewing them slowly and deliberatively, it feels natural to revert to quick decision making and mental shortcuts to get the job done. Familiarity and similarity start to influence us in ways we don’t even realize, and that is where the problems begin. It’s tempting to solve this by sending recruitment teams to diversity programs to train their brains out of bias, but depending solely on rewiring our brains is not a great solution, especially when so much of the bias is deeply ingrained. Instead, we need to design systems that account for our instinctive thinking. This allows organizations to move away from tactics that require people to do what feels unnatural, and instead focus on processes that identify what people are good at and supplement those skills with tools and data that correct and improve the decision-making process.
When implementing technology to make the hiring process more efficient AND fair, organizations can lean on experts who have spent their careers studying I-O psychology or fairness and diversity. Individuals in these roles can take the time to be slow and deliberative in their thinking, and identify the fundamentals that matter most to these systems, such as which elements are predictive of on-the-job performance, or which may actually operate as proxies for demographics and should be avoided. Their analysis and expertise can then be built into the algorithms.
It’s important to test the algorithms before deploying them, and to use training sets that ensure as much representation as possible. Being able to identify when an algorithm is too narrowly fine-tuned to a small, homogeneous group is critical, so you can take steps to de-bias it before it goes into production. We can use these tools to help increase diversity and help people gain a seat at the table, and with time the biases we’re fighting against will actually begin to change.
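One common pre-deployment check, drawn from the EEOC's "four-fifths rule" rather than from anything specific in the webinar, compares selection rates across groups: each group's rate should be at least 80% of the best-off group's rate. A minimal sketch, with illustrative group labels and data:

```python
# Minimal sketch of an adverse-impact check (the "four-fifths rule").
# Group names and outcome data below are illustrative assumptions.
def adverse_impact_ratios(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 selection
    decisions. Returns each group's selection rate divided by the
    highest group's selection rate."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def passes_four_fifths(outcomes, floor=0.8):
    """True if every group's ratio meets the 80% floor."""
    return all(r >= floor for r in adverse_impact_ratios(outcomes).values())

# Hypothetical audit: group "B" is selected at one third the rate of
# group "A", so the check fails and the model needs de-biasing work.
outcomes = {"A": [1, 1, 1, 0], "B": [1, 0, 0, 0]}
print(passes_four_fifths(outcomes))
```

Running a check like this on held-out data before production is one concrete way to catch an algorithm that has been fine-tuned to a narrow, homogeneous group.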
Technology at the end of the day is neutral. It’s how we build it and what we do with it that matters. It’s important that we build the technology to support human strengths in a way that’s ethical and brings us closer to the greater societal aims that we’re looking to accomplish.
If you’re looking to mitigate bias and increase diversity in your workforce and would like to watch this webinar recording in full, please click here.