


Companies are using AI to stop bias in hiring. They could also make discrimination worse.

Algorithms are being used to eliminate human bias, but sometimes they entrench it.

When Angelique Johnson applied for a seasonal associate role at a Vans store in Memphis, she didn’t know that her first interview would be with a computer.

A question popped up on the screen, asking her how she would respond if a customer came in for a product but it wasn’t available. She had three minutes to answer while HireVue recorded her on video.

HireVue, a platform that uses artificial intelligence to help companies improve hiring, analyzes tens of thousands of factors from video interviews to determine which applicants make it to the next round in the hiring process. “We're looking at every frame of the video, every word that was spoken, how their tonalities shift. We're looking at what kinds of pronouns and verbs they use,” said HireVue CEO Loren Larsen.


Johnson, 25, said she didn’t find talking into the camera awkward, which she attributed to having grown up in the digital age. She added that she was “glad that I could actually give my voice and my personality to the questions instead of the bland yes, no, maybe questionnaires.”

In recent years, large companies including Deloitte and LinkedIn have embraced AI to speed up recruitment and diversify their candidate pools. For some of the companies investing in these services, AI seems to be working. But even as more companies use algorithms in hiring, some rights advocates are concerned they may entrench the very bias managers are trying to eliminate.

Read more: How a labor union is using an algorithm to predict when to organize.

Johnson, who is black, said she believes HireVue’s AI-driven screening method could cut down on discrimination, but she wondered if human managers might inject bias later in the process.

Despite increased efforts by many companies to diversify their staff, a plethora of studies shows that bias in hiring is still very real. A team of Arizona State University researchers found in 2014 that white men with criminal records were more likely to be called for an interview or receive an offer than black men with clean records. And hiring discrimination against African-Americans hasn’t improved in the last 25 years, according to a recent meta-analysis. Studies have also shown that Latinos and women are significantly less likely to be hired than white men.


So companies are embracing the theory that removing people from at least some parts of the hiring process can remove human bias.

Frida Polli, the CEO of AI talent solutions company Pymetrics, wants to take resumes out of the initial screening process altogether. This way, “it doesn’t matter whether you went to fancy school A or not; you’re all given the same shot at the job,” said Polli.

Instead, Polli’s company uses neuroscience exercises to determine if applicants are a good fit. Pymetrics builds a custom algorithm for each client’s company by running at least 50 of their top performers through games aimed at measuring cognitive, social, and emotional traits to generate a profile of the dream employee for a particular role. Applicants are given similar tests — things like matching photos of faces to the correct emotions.
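In rough terms, the approach reduces each person to a vector of game-derived trait scores and compares applicants against a profile built from top performers. Below is a minimal, hypothetical sketch of that idea in Python; the trait dimensions, the centroid profile, and the cosine-similarity scoring are illustrative assumptions, not Pymetrics’ actual model.

```python
# Hypothetical sketch of profile-based screening -- not Pymetrics' real system.
# Assumes each person is reduced to a vector of trait scores (e.g., risk
# tolerance, attention, emotion recognition) derived from the games.
import numpy as np

def build_profile(top_performer_traits: np.ndarray) -> np.ndarray:
    """Average the trait vectors of a client's top performers (50+ rows)."""
    return top_performer_traits.mean(axis=0)

def score_applicant(applicant: np.ndarray, profile: np.ndarray) -> float:
    """Cosine similarity between an applicant's traits and the role profile."""
    return float(applicant @ profile /
                 (np.linalg.norm(applicant) * np.linalg.norm(profile)))

# Illustrative synthetic data: 50 top performers, 3 trait dimensions.
rng = np.random.default_rng(0)
top_performers = rng.normal(loc=[0.7, 0.4, 0.9], scale=0.1, size=(50, 3))
profile = build_profile(top_performers)

applicant = np.array([0.65, 0.45, 0.85])
print(f"fit score: {score_applicant(applicant, profile):.3f}")
```

A real system would score far more richly, but the basic structure (learn a profile from incumbents, rank applicants against it) is what makes the bias questions below so pointed: whatever the incumbents have in common, relevant or not, can end up in the profile.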

The idea is that objective, data-based recruiting, stripped of bias, will diversify the ranks organically. Pymetrics reports that clients have seen a 20 to 100 percent increase in the gender, ethnic, and socioeconomic diversity of hires.

HireVue’s system also analyzes a client’s top performers, and their worst, to create custom algorithms against which applicants’ interviews are compared. Larsen says the company only reviews factors that have been “scientifically validated as being predictive of job performance” for a specific role.

In 2017, Unilever screened all entry-level candidates first with Pymetrics, then with HireVue. HireVue reported that Unilever saw a 16 percent increase in hires that added gender and ethnic diversity to the company.


When an algorithm goes wrong

Critics have long warned of an inherent risk when algorithms are built from past hiring data: without care, that data can hold a mirror to any existing workplace inequality and essentially systematize it.

The American Civil Liberties Union (ACLU) has also expressed concern about new biases introduced by algorithms: even without knowing an applicant’s race, for example, an algorithm could learn to group candidates by other possible racial identifiers like zip code, organization membership, or language.
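A hedged, synthetic illustration of that proxy effect: in the sketch below, race is removed from the model’s inputs, but a correlated zip-code feature lets a model trained on biased historical outcomes reproduce the racial gap anyway. All of the data and the setup are invented for demonstration.

```python
# Synthetic demonstration of proxy bias: the model never sees the race
# column, yet learns it through a correlated feature. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Synthetic world: residential segregation means zip code predicts race.
race = rng.integers(0, 2, size=n)              # group label, never shown to model
zip_code = race + rng.normal(0, 0.3, size=n)   # zip feature correlated with race
skill = rng.normal(0, 1, size=n)               # legitimate signal

# Biased historical labels: past hiring favored group 0.
hired = ((skill + 0.8 * (1 - race) + rng.normal(0, 0.5, size=n)) > 0.8).astype(int)

# Train on zip + skill only -- race has been "removed" from the inputs.
X = np.column_stack([zip_code, skill])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[race == g].mean():.2f}")
# The gap between the two rates shows the model recovered race via zip code.
```

The point is not that any vendor does this deliberately; it is that deleting the sensitive column is not enough when other features encode it.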

“I think the technology has expanded while regulatory reform or legal reform is still playing catch up,” said Esha Bhandari, an ACLU staff attorney with the Speech, Privacy and Technology Project.

In October, Reuters reported that Amazon’s internal AI recruiting tool was biased against women. The experimental engine’s algorithm was trained on 10 years of resumes submitted to the company, and, because those resumes were overwhelmingly submitted by men, it reportedly downgraded graduates of two all-women colleges and resumes containing the word “women’s.” The project was scrapped in early 2017, and Amazon said the tool had never been used to screen applicants to the company.

In 2016, a ProPublica study found that software used to predict an arrestee’s likelihood of recidivism was biased against black defendants. The review of risk scores in Broward County, Florida, in 2013 and 2014 revealed that the formula falsely labeled black arrestees as likely future criminals at almost twice the rate of white defendants. Risk assessments factor into every stage of the criminal justice system, which could reinforce racial disparities through higher bail amounts or harsher sentences for black defendants. Northpointe, the company that created the algorithm, disputed ProPublica’s analysis.


The Amazon controversy put AI hiring companies on notice and a few are acknowledging the risk of algorithmic bias. HireVue and Pymetrics have been among the most outspoken.

Read more: This algorithm reads X-rays better than doctors do.

“If your algorithms are trained on all Caucasians and no people of color, the likelihood that that algorithm will then be biased for Caucasians is high,” said Polli.

Both Larsen and Polli said their companies test each model and remove biases they identify. The algorithm is then rebuilt and tested again. “We should be very transparent about how we measure and what data we are using, the score that we present back to the customer, and how we’re using these decisions that affect someone's job,” said Larsen.

In November, Pymetrics made public a tool to audit for demographic bias in algorithms. “AI is like any other technology. It could be used for good and it can be designed well or it can be designed poorly and it will end up with negative outcomes,” said Polli.
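One standard check such a tool can run, sketched below under the assumption that the audit follows the EEOC’s “four-fifths rule” (the method in Pymetrics’ published tool may differ), is to flag any group whose selection rate falls below 80 percent of the highest group’s.

```python
# Minimal demographic-bias audit sketch: the EEOC four-fifths rule says a
# group's selection rate should be at least 80% of the highest group's.
from collections import defaultdict

def adverse_impact_ratios(records):
    """records: iterable of (group_label, passed_screen: bool) pairs."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes by demographic group.
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 42 + [("B", False)] * 58
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "OK" if ratio >= 0.8 else "FLAG: possible adverse impact"
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

In this invented example, group B passes the screen at 70 percent of group A’s rate, below the 0.8 threshold, so the audit would flag the model for rework before deployment.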

But there’s no industry consensus on algorithmic transparency or what it should look like. The ACLU has called for external auditing for bias, but companies don’t appear to be jumping at that suggestion.

“I don't think it's necessarily fair to ask a company to give away their code or give away their algorithms that they've invested in building,” Larsen said.

But as these tools become more common — roughly 55 percent of U.S. HR managers predict they’ll be using AI within the next five years, according to a 2017 survey by talent solutions company CareerBuilder — rights advocates like Bhandari are increasingly focused on putting safeguards in place.

“We have too little transparency, too much hiding behind claims of proprietary information or trade secrets for academics, journalists, and civil society to evaluate whether or not these can be fair,” said Bhandari.

Cover: People use desktop computers to look at job sites for employment possibilities on Friday, June 1, 2012 at the Mississippi Department of Employment Security WIN Job Center in Jackson, Miss. (AP Photo/Rogelio V. Solis)