AI for hiring is here, and it’s not bad news
Even with all its problems, AI is a step up from the notoriously biased recruiting process
Artificial intelligence promises to make hiring an unbiased utopia.
There’s certainly plenty of room for improvement. Employee referrals, a process that tends to leave under-represented groups out, still make up the bulk of companies’ hires. Recruiters and hiring managers also bring their own biases to the process, studies have found, often favouring people with the “right-sounding” names and educational backgrounds.
Across the pipeline, companies lack racial and gender diversity, with the ranks of under-represented people thinning at the highest levels of the corporate ladder. Fewer than 5% of chief executive officers at Fortune 500 companies are women, and that number will shrink further in October when PepsiCo CEO Indra Nooyi steps down. Racial diversity among Fortune 500 boards is almost as dismal: four of the five new appointees to boards in 2016 were white. There are only three black CEOs in the same group.
“Identifying high-potential candidates is very subjective,” says Alan Todd, CEO of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”
AI advocates argue the technology can eliminate some of these biases. Instead of relying on people’s feelings to make hiring decisions, companies such as Entelo and Stella.ai use machine learning to detect the skills needed for certain jobs. The AI then matches candidates who have those skills with open positions. The companies claim not only to find better candidates, but also to pinpoint those who may have previously gone unrecognized in the traditional process.
Stella’s algorithm assesses candidates based only on skills, says founder Rich Joffe. “The algorithm is only allowed to match based on the data we tell it to look at. It’s only allowed to look at skills, it’s only allowed to look at industries, it’s only allowed to look at tiers of companies.” That limits bias, he says.
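Stella has not published its implementation, but the idea Joffe describes can be sketched in a few lines: the scorer is only ever shown a whitelist of fields, so names, photos and other bias-prone attributes never reach it. The field names and scoring rule below are invented for illustration.

```python
# Hypothetical sketch of field-restricted candidate matching. Only
# whitelisted attributes (skills, industry, company tier) are visible
# to the scorer; everything else is dropped before scoring.
ALLOWED_FIELDS = {"skills", "industry", "company_tier"}

def restrict(candidate: dict) -> dict:
    """Drop every attribute that is not on the whitelist."""
    return {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}

def match_score(candidate: dict, job: dict) -> float:
    """Score = fraction of the job's required skills the candidate has."""
    visible = restrict(candidate)
    required = set(job["required_skills"])
    overlap = required & set(visible.get("skills", []))
    return len(overlap) / len(required) if required else 0.0

candidate = {
    "name": "A. Person",          # never seen by the scorer
    "skills": ["python", "sql"],
    "industry": "fintech",
    "company_tier": 2,
}
job = {"required_skills": ["python", "sql", "spark"]}
print(match_score(candidate, job))  # 2 of 3 required skills -> ~0.67
```

The point of the whitelist is structural: bias-prone fields are excluded before any model sees the data, rather than asking the model to ignore them.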
Entelo released Unbiased Sourcing Mode on 8 August—a tool that further anonymizes hiring. The software allows recruiters to hide names, photos, school, employment gaps and markers of someone’s age, as well as to replace gender-specific pronouns—all in the service of reducing various forms of discrimination.
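Entelo’s implementation is proprietary, but the two operations it describes, redacting identifying fields and swapping gendered pronouns for neutral ones, can be sketched as follows. The profile fields and pronoun table here are illustrative; note that a naive table cannot disambiguate “her” (object) from “her” (possessive).

```python
import re

# Hypothetical sketch of resume anonymization: redact the name, drop
# photos, and replace gendered pronouns with neutral ones. The mapping
# is naive ("her" is treated as an object pronoun only).
PRONOUNS = {
    "he": "they", "she": "they",
    "him": "them", "her": "them",
    "his": "their", "hers": "theirs",
}

def neutralize_pronouns(text: str) -> str:
    def swap(match):
        word = match.group(0)
        replacement = PRONOUNS[word.lower()]
        return replacement.capitalize() if word[0].isupper() else replacement
    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

def anonymize(profile: dict) -> dict:
    out = dict(profile)
    out["name"] = "[redacted]"
    out.pop("photo_url", None)          # remove photos entirely
    out["summary"] = neutralize_pronouns(out.get("summary", ""))
    return out

profile = {"name": "Jane Doe", "photo_url": "photo.jpg",
           "summary": "She led his team."}
print(anonymize(profile)["summary"])    # "They led their team."
```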
AI is also being used to help develop internal talent. CorpU has formed a partnership with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. Those ranked highest aren’t usually the individuals who were already on the promotion track, Todd says, and often exhibit qualities such as introversion that are overlooked during the recruitment process.
“Human decision making is pretty awful,” says Solon Barocas, an assistant professor in Cornell’s information science department who studies fairness in machine learning. But we shouldn’t overestimate the neutrality of technology either, he cautions.
Barocas’ research has found that machine learning in hiring, much like its use in facial recognition, can result in unintentional discrimination. Algorithms can carry the implicit biases of those who programmed them. Or they can be skewed to favour certain qualities and skills that are overwhelmingly exhibited among a given data set. “If the examples you’re using to train the system fail to include certain types of people, then the model you develop might be really bad at assessing those people,” Barocas explains.
Not all algorithms are created equal—and there’s disagreement among the AI community about which algorithms have the potential to make the hiring process more fair.
One type of machine learning relies on programmers to decide which qualities should be prioritized when looking at candidates. These “supervised” algorithms can be directed to scan for individuals who went to Ivy League universities or who exhibit certain qualities, such as extroversion.
“Unsupervised” algorithms determine on their own which data to prioritize. The machine makes its own inferences based on existing employees’ qualities and skills to determine those needed by future employees. If that sample only includes a homogeneous group of people, it won’t learn how to hire different types of individuals—even if they might do well in the job.
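The sampling problem can be made concrete with a toy model, not any vendor’s actual system. If a model scores candidates by similarity to past hires, and every past hire shares one trait, a qualified candidate who lacks that trait scores poorly for no job-relevant reason. The features below are invented.

```python
# Toy illustration of training-sample bias: a model that learns "what
# good hires look like" from a homogeneous set of past employees
# penalizes unfamiliar but qualified candidates.
# Invented features: (years_experience, attended_ivy_league).

past_hires = [(5, 1), (6, 1), (4, 1), (7, 1)]   # every exemplar went to an Ivy

def similarity_score(candidate, exemplars):
    """Higher = more like past hires (negative mean squared distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(candidate, e))
             for e in exemplars]
    return -sum(dists) / len(dists)

ivy_candidate = (5, 1)
state_school_candidate = (8, 0)   # more experience, different background

print(similarity_score(ivy_candidate, past_hires))
print(similarity_score(state_school_candidate, past_hires))
# The state-school candidate scores lower purely because the training
# sample never contained anyone like them.
```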
Companies can take measures to mitigate these forms of programmed bias. Pymetrics, an AI hiring start-up, has programmers audit its algorithm to see if it’s giving preference to any gender or ethnic group. Software that heavily considers ZIP code, which strongly correlates with race, will likely have a bias against black candidates, for example. An audit can catch these prejudices and allow programmers to correct them.
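One common form such an audit takes, sketched here with made-up numbers and not Pymetrics’ actual method, is comparing selection rates across groups against the “four-fifths rule” used in US employment-discrimination guidance: a group selected at less than 80% of the best-off group’s rate is a red flag.

```python
# Hedged sketch of a disparate-impact audit: compute per-group selection
# rates and flag the model if any group falls below four-fifths of the
# highest group's rate. Data and threshold are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def passes_four_fifths_rule(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate / best >= threshold for rate in rates.values())

decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 30 + [("B", False)] * 70)
print(selection_rates(decisions))          # {'A': 0.5, 'B': 0.3}
print(passes_four_fifths_rule(decisions))  # 0.3 / 0.5 = 0.6 < 0.8 -> False
```

A failed audit does not say why the disparity exists; it tells the programmers where to look, for example at proxy features like ZIP code.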
Stella also has humans monitoring the quality of the AI. “While no algorithm is ever guaranteed to be foolproof, I believe it is vastly better than humans,” says Joffe.
Barocas agrees that hiring with the help of AI is better than the status quo. The most responsible companies, however, admit they can’t completely eliminate bias and tackle it head on. “We shouldn’t think of it as a silver bullet,” he cautions.