In the context of technology, terms like “ethical,” “equitable,” and “human-centered” are often framed as idealistic. Put differently, while it might be “good” for companies to aspire to these norms, they are fairly easy to deprioritize when times get tough.
If history has taught us anything, it’s that this strategy doesn’t work. In the context of scalable automated technology, “pragmatic” decisions to shirk ethics are dangerous, exacerbating social problems and derailing progress in the long run.
Consider what often happens in humanitarian contexts when technological interventions are introduced that cater only to the “average” user.
In 2013, one of the strongest tropical cyclones in recorded history devastated the Philippines, killing thousands and displacing millions. Typhoon Haiyan also destroyed existing communication infrastructure almost immediately, leading private sector and government partners to develop social media and smartphone-based applications to fill the void. Unfortunately, the benefits of these digital humanitarian response efforts flowed almost exclusively to communities with high levels of digital literacy and reliable Internet access. Significant research now exists on the “digital discrimination” that occurred in the aftermath of the disaster. As one UN report notes, “Victims from low socio-economic backgrounds had limited or no access...This constraint contributed to their slow recovery compared to middle-income families who were able to navigate the media landscape and tools offered by digital data and technologies.”
In short, the Philippines had struggled with stark income disparities for some time, and by failing to account for this reality, these technological solutions exacerbated an already challenging situation.
COVID-19 is not a natural disaster, but the same amplification of inequality can occur if our technological solutions are not designed and deployed in a conscientious manner. The groups most affected by economic disruption are the most vulnerable in our society, and tools have the power to either alleviate their misfortunes or make them much worse. For example, disheartening research indicates that racial minorities are more likely to be laid off than their white peers. When these workers then seek new employment, conventional hiring systems, like resumes and cognitive ability tests, place them at a further significant disadvantage.
As a hiring platform grounded in principles of bias mitigation and equality of opportunity, pymetrics levels the playing field by ensuring job candidates across demographic groups are evaluated fairly. The worst possible course of action we could take at this point in time, when the stakes are so high for certain people, would be to deprioritize our commitment to equity.
In continuing to navigate this period of uncertainty for our society and its aftermath, the fairness implications of any technological solution need to be considered carefully. Whether in AI or in other disciplines, ethics are not something that can be put on hold, and for those companies who have not yet put their principles into practice, now would be a good time to start.
If you’re committed to upholding fairness in your talent practices even during these difficult times, we’d welcome the opportunity to chat in greater depth. Please fill out the form here.