Systematic bias in hiring assessments is a major barrier to workforce diversity and inclusion, but it’s also a barrier that is rarely discussed. Many employers and job candidates believe that, because discrimination in employment is illegal, most methods of evaluating applicants are equitable. Unfortunately, this is not at all the case.
How can you evaluate the bias in a hiring assessment? In short, by looking at how candidates of various demographic groups perform and how those differences in performance affect employers’ evaluations. Consider a company that gives applicants a test designed to measure short-term memory on a scale of 1 to 10, and then only considers those people who receive a score of at least 7. If 50% of men applying to the job score 7 or higher, but only 20% of women fall in the passing range, the assessment would be considered very biased against women.
The degree of disparity in an assessment is quantified by an “impact ratio”: the pass rate of one group divided by the pass rate of the highest-passing group. In the above example, the female-male impact ratio is equal to 20% divided by 50%, or 0.4. Since the 1970s, the U.S. Equal Employment Opportunity Commission (EEOC) has used a standard known as “the four-fifths rule” to determine whether a hiring procedure is resulting in systematic disadvantage. According to this rule, “fair” assessments should have an impact ratio of at least 0.8 when comparing the pass rates for gender and racial groups.
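The calculation is simple enough to sketch in a few lines of code. The sketch below (function names are our own, not an EEOC standard) computes the impact ratio from two pass rates and checks it against the 0.8 threshold, using the memory-test example above:

```python
def impact_ratio(protected_pass_rate: float, reference_pass_rate: float) -> float:
    """Pass rate of the protected group divided by that of the reference group."""
    return protected_pass_rate / reference_pass_rate

def passes_four_fifths(protected_pass_rate: float, reference_pass_rate: float) -> bool:
    """EEOC four-fifths rule: the impact ratio should be at least 0.8."""
    return impact_ratio(protected_pass_rate, reference_pass_rate) >= 0.8

# Example from above: 20% of women pass the screen vs. 50% of men.
ratio = impact_ratio(0.20, 0.50)
print(round(ratio, 2))                  # 0.4
print(passes_four_fifths(0.20, 0.50))   # False -- fails the four-fifths rule
```

Note that the rule compares *rates*, not raw counts: a company that hires 10 women and 10 men can still fail if far more women applied.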
Today, many of the most common hiring assessments do not pass the four-fifths rule with respect to race. As summarized in the chart below, cognitive ability tests (sometimes called “general mental ability” or short-form “IQ” tests) create the largest disparities between black and white candidates. However, work sample, job knowledge, and situational judgment tests also fall below the 0.8 threshold, as do resume reviews conducted by humans.
While loopholes in anti-discrimination law allow these biased assessments to be used in certain circumstances, it’s important to recognize that doing so can come at the cost of workforce diversity. pymetrics designs every model to comply with the four-fifths rule, disrupting traditional screening methods that can automatically disqualify diverse candidates. For more information on bias-mitigating technology, download our free whitepaper here.