From who gets the job to how much certain labor is deemed to be worth, organizations increasingly rely on Artificial Intelligence (AI) to guide their decisions. Yet, in a world of work where historical inequalities lurk, it is important for those who influence how AI systems are designed and deployed to consider the impact on those likeliest to be disempowered.
The responsibility of ensuring AI makes ethical decisions falls not only to the scientists and engineers. When ethicality is inherently tied to the competitiveness and even survivability of a business, the need for all agents, including HR, to understand the principles, policies, and practices guiding AI development becomes evident.
In this conversation between Kelly Trindel, Head of Policy + I/O science at pymetrics, and Kelly Forbes, Co-founder of the AI Asia Pacific Institute, the two speakers dispel some of the confusion and complexity surrounding AI ethics.
To start, the speakers define the guiding principles behind building ethical AI. Different institutions may vary in how they label these principles, and may even hold different actors accountable. Nevertheless, they generally converge on the four principles highlighted in the Singapore Government’s Model AI Governance Framework:
1) Transparency: Data used to build AI models is known and openly disclosed.
2) Unbiased: Data used to build AI models is representative of the actual environment.
3) Explainability: How an AI system works and how it arrives at a particular prediction can be clearly explained.
4) Human Centricity: The AI system enables humans to make meaningful decisions and preserves human control, including the ability to safely shut down the system if required.
Kelly Forbes shares the example of a large company that selected employees for promotion and contract renewal based on their proximity to the office. Revelations like these are a chilling reminder that our ethical choices begin the moment we decide what kind of data will inform the AI’s decisions. Individuals, whether candidates or employees, have the right to know how their data will be used, and whether an AI is behind those decisions in the first place.
Beyond initial design, companies need to continuously monitor their AI system to ensure that it is operating in an unbiased manner. Importantly, when bias correction is not possible, humans have to be able to safely and swiftly shut the system down.
Over the course of this pandemic, workers who are lower-waged, less credentialed, or employed in old-line businesses have been the most likely to suffer job losses. Now is the time to think about how AI can help equalize employment opportunities in the ensuing recovery. We can't let careless AI deployment deepen the disparity.
A quick audience poll suggests that most people struggle to balance productivity goals with upholding ethical AI principles. This is especially worrying in today’s context, as notions of ‘equitability’ and ‘ethicality’ are more likely to be de-prioritized when organizations are pressured to deliver faster with fewer resources.
However, this perceived tradeoff between efficiency and ethicality is false, as Kelly Trindel explains. pymetrics’ audited AI technology demonstrates that when you tackle bias, you remove irrelevant signals, or noise, that degrade the accuracy of the predictions.
In the US, the benchmark for unbiased hiring, known as the 4/5ths rule, holds that other groups should pass at no less than 80% of the rate of the highest-passing group. Neither of the two most common employment selection tools in use today – cognitive tests and resume reviews – meets this benchmark. And their effectiveness at predicting job fit has been in question too. Audited AI tools, whether that’s pymetrics or others, outperform more traditional tools in every aspect.
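The 4/5ths rule check is straightforward to express in code. The sketch below, with made-up group names and pass counts purely for illustration, computes each group's selection rate and flags any group whose rate falls below 80% of the highest group's rate:

```python
# Illustrative sketch of the four-fifths (80%) rule described above.
# Group labels and counts are hypothetical, not real hiring data.

def selection_rates(passed, applied):
    """Selection rate per group: number passed / number applied."""
    return {g: passed[g] / applied[g] for g in applied}

def four_fifths_check(passed, applied, threshold=0.8):
    """Return True for each group whose selection rate is at least
    80% of the highest group's rate; False indicates potential
    adverse impact under the four-fifths benchmark."""
    rates = selection_rates(passed, applied)
    top = max(rates.values())
    return {g: (rate / top) >= threshold for g, rate in rates.items()}

applied = {"group_a": 100, "group_b": 100}
passed = {"group_a": 50, "group_b": 35}

print(four_fifths_check(passed, applied))
# group_b's rate (0.35) is only 70% of group_a's (0.50),
# so group_b fails the check: {'group_a': True, 'group_b': False}
```

Running a check like this continuously, rather than only at design time, is one way to operationalize the monitoring obligation described earlier.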
Referencing the 2020 Edelman Trust Barometer, Kelly Forbes emphasizes that it is not only possible, but also crucial, for organizations to achieve both efficiency and ethicality. Almost 70% of respondents in the Edelman research indicated that they would prefer to do business with a company that is ethical over one that is merely competent. Disappointingly, few institutions at present attain the coveted status of being seen as both competent and ethical. Companies that can connect these two objectives therefore have significant leverage to differentiate themselves in the eyes of both investors and clients.
Globally, governments are signaling the importance of AI ethics through the rapid formation of policy and research groups. Since 2017, a slew of frameworks has been released to guide the development of AI and protect user rights. While many of these frameworks have not been coded into law, there is existing legislation, such as that relating to discrimination and human rights, which can be called on to punish harmful AI design and deployment.
In the US, legal frameworks are already being drawn up at every level to ensure that the transformative power of AI is harnessed for good. These nascent policies can be categorized under three overarching objectives:
1) To change perspectives on the potential for new technology to disrupt the status quo
2) To enforce greater transparency from vendors by empowering employers with information
3) To amend regulations to reduce bias and ambiguity and increase transparency
pymetrics unequivocally supports these policy moves. From contributing to the Singapore Model AI Governance Framework 2.0 as the only HR technology partner, to joining coalitions in New York City and California, our goal is to urge greater accountability in AI-powered employment selection.
In her capacity as pymetrics’ Head of Policy, Kelly Trindel reiterates that hiring decisions have significant consequences on people’s lives. As such, both employers and technology providers must be committed to processes that align with ethical principles.
In closing, Kelly Trindel shares a few best practices to help you cut through the dazzle of AI. The following steps are necessary not only to maximize the benefits of employment selection technology, but also to mitigate any potential harm:
Essentially, these steps operationalize the abstract ideals of the principles discussed earlier. As the speakers acknowledge, it can be difficult to get AI ethics right. The key is to keep a human eye on the system, update your algorithms regularly and consistently (AI is about learning!), and always strive to earn greater trust from all your stakeholders.