Third-Party Auditing: Why We Invited External Experts into Our Codebase

Kelly Trindel, PhD
May 13, 2020

Countless frameworks for ethical AI have been developed by non-profits, academics, governments, and companies over the past few years. Disagreements over these principles persist, but one key tenet, transparency, is consistently cited as critical for ensuring the integrity of automated systems. In fact, according to a study published in Nature Machine Intelligence last year, 73 of 84 ethical AI frameworks identified transparency as an important norm, making it the most agreed-upon principle across the literature.


As an organization committed to using AI for social good, pymetrics strives to be transparent in our interactions with employers, candidates, and the general public. The models we build can have very real implications for people’s lives, so we believe we have a special responsibility to be forthcoming about our process. In our view, technology’s role in hiring should be to reduce bias and subjectivity; this can only happen when companies like ours prioritize consistency and objectivity.


While pymetrics’ operating principles are grounded in transparency and fairness, we also understand that there are limits to any organization’s ability to discuss itself in neutral terms. This challenge is compounded by the nascent nature of our industry: there are no clear rules for when an AI company may use terms like “ethical,” “fair,” or “de-biased” to describe its products. We empathize with the difficult position employers find themselves in as they sort through the rhetoric surrounding hiring technologies that may or may not meet these standards. While consumers can rely on government agencies to proactively ensure the safety of new cars or the effectiveness of new drugs, there is no comparable proactive auditing process to certify the fairness of a machine learning tool for employment selection.


We want to reduce the confusion that often characterizes our industry. That’s why, last month, we invited a group of computer science researchers and ethical AI experts from Northeastern University to conduct an independent, third-party audit of our codebase and platform for objectivity and fairness. The investigation will culminate in a public report on their findings. The audit is led by Professors Alan Mislove and Christo Wilson, who have conducted several notable studies on digital accountability, including investigations of Uber’s opaque surge pricing and Facebook’s use of personally identifiable information (PII) for targeted advertising. Both researchers contribute to journals focused on fairness in computer science, and Professor Wilson serves on the executive committee of the ACM Conference on Fairness, Accountability, and Transparency (FAccT).


We are posting this as a form of public pre-registration of the audit. In medical clinical trials, researchers often announce the intent of their studies long before the research is concluded. This pre-registration tells the world what the researchers expect to find (e.g., “our new drug cures cancer”) and implies a commitment to release the results even if they are disappointing. In the same spirit, we are opening our process to objective, rigorous academic investigation, conducted and authored outside of our control.


It might seem daunting to allow external experts to openly critique the inner workings of our platform, but in doing so, we are prioritizing a genuine commitment to transparency. In an environment of heightened skepticism toward automated technologies, the fact is that bad actors do exist. Our goal is to set an industry precedent for how real trust can be earned.


Everyone at pymetrics looks forward to sharing the results of our third-party audit this summer. Until then, please reach out to info@pymetrics.com with any questions about the platform.