ARTIFICIAL INTELLIGENCE: Explaining Black Box Algorithms to Avoid Discrimination

Speaking of Amazon, news broke that the company had built an AI-based recruiting tool meant to help it rank candidates at scale. They are certainly not the only ones: the tech startup space is littered with applicant management and analysis software, especially now that employees change jobs far more often than in prior decades. What this AI did, however, was systematically discriminate against women, down-weighting resumes that included the phrase "women's" in their descriptions or that came from all-women's colleges. The result was unintentional, an artifact of the underlying data. If you correlate the language in thousands of employee resumes with hiring outcomes, you reproduce the status quo, which is that the average Amazon employee, or any tech employee, is more likely to be male. A second artifact is language itself: the way candidates write can be gendered, and the model picks that up too.
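To make the mechanism concrete, here is a minimal sketch of how this happens. It is not Amazon's system; the resumes, outcomes, and model choice below are entirely synthetic and hypothetical. A classifier trained on historically skewed hiring decisions learns a negative weight for a gendered token simply because that token correlated with rejection in the data:

```python
# Toy illustration (not Amazon's system): all resumes and labels are synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java distributed systems",        # hired
    "backend developer java microservices",              # hired
    "captain of women's chess club software developer",  # rejected
    "women's coding society lead python developer",      # rejected
    "python developer machine learning",                  # hired
    "data engineer spark pipelines",                       # hired
    "women's robotics team member java developer",       # rejected
    "site reliability engineer kubernetes",               # hired
]
hired = [1, 1, 0, 0, 1, 1, 0, 1]  # historical outcomes, skewed against "women's"

vec = CountVectorizer()  # default tokenizer splits "women's" into the token "women"
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token purely because the
# historical labels correlated with it: the bias lives in the data.
weight = model.coef_[0][vec.vocabulary_["women"]]
print(f"learned weight for token 'women': {weight:.2f}")  # prints a negative number
```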

Other examples of unethical AI are plentiful. Image recognition algorithms have an error rate of 3% on white male faces but 30% on black female faces. Sentencing algorithms used in US courts punish minorities more harshly. Credit underwriting AI disfavors historically disadvantaged protected classes by using zip code as a proxy. But the math isn't wrong; it is in fact painfully correct. These outcomes are a mirror of how things are, not a solution for how we want things to be. Yet AI will be used regardless. Just this week, Lloyds adopted voice-based authentication for telephone banking, replacing PINs with the sound of a customer's voice. Will this service work better for majorities than for minorities? Further, such security can be gamed using pre-recordings or generated voices, just as image recognition can be gamed with photos or by twins.
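The zip code case is worth spelling out. Even when the protected attribute is never shown to the model, zip code can encode it, so the model's decisions recreate the gap. A minimal sketch, using synthetic data and the common "80% rule" heuristic for flagging disparate impact:

```python
# Toy sketch of proxy discrimination: the protected class column is never
# given to the model, yet zip code stands in for it. All data is synthetic.
import pandas as pd

df = pd.DataFrame({
    "zip":      ["10001", "10001", "60601", "60601", "94110", "94110"],
    "group":    ["A", "A", "B", "B", "A", "B"],    # protected class (audit only)
    "approved": [1, 1, 0, 1, 1, 0],                # model decisions keyed off zip
})

# Approval rate per protected group, reconstructed after the fact
rates = df.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}  (flag if < 0.80)")
```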

This is why we are excited to see two initiatives make the news. The first comes from MIT Lincoln Laboratory and is focused on machine vision. The software builds a visualization of how a neural network sees an object, highlighting which parts and features of the object drive a particular decision. The picture below shows how the computer detects "large metal cylinders", first looking for size, then for material, and finally for shape, each step highlighted by an importance-ranked heatmap. The second comes from IBM, called the Trust and Transparency service. In an example around insurance claims automation, the company shows an explanatory overlay on the AI that surfaces the probabilistic weightings of the different drivers behind an approval or rejection decision. A human analyst can then understand why the machine made its judgment. We think such tools will be required for any serious AI company.

[Images: MIT Lincoln Laboratory heatmap visualizations for detecting "large metal cylinders"; IBM Trust and Transparency explanation overlay]
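The underlying idea is simpler than it sounds. Here is a minimal sketch of an explanation overlay in the spirit of IBM's service (this is not IBM's API; the feature names and weights are hypothetical): for a linear scoring model, each feature's contribution to a decision is just its weight times its value, which an analyst can rank and inspect.

```python
# Hypothetical claims-approval model; weights and the incoming claim are made up.
import numpy as np

features = ["claim_amount", "policy_age_years", "prior_claims", "has_police_report"]
weights  = np.array([-0.0004, 0.15, -0.8, 1.2])  # hypothetical trained weights
bias     = 0.5

claim = np.array([2500.0, 6.0, 1.0, 1.0])        # one incoming claim

contributions = weights * claim                   # per-feature drivers of the decision
score = bias + contributions.sum()
approve_prob = 1 / (1 + np.exp(-score))          # logistic link

print(f"approval probability: {approve_prob:.2f}")
# Show the drivers of this decision, most influential first
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {c:+.2f}")
```

For deep networks the contributions are harder to extract, which is exactly what the heatmaps and probabilistic overlays above are trying to solve, but the analyst-facing output is the same: a ranked list of why the machine decided what it decided.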

Source: Reuters (Amazon), IBM, MIT Lincoln Lab, Business Insider (Amazon), FS Tech (Lloyds)