Should Organizations Look to AI Tools for Hiring?
Whether or not we should be using AI in hiring is the wrong question. A better question is, how can AI be safely and ethically used to support more effective and fair hiring decisions? Why is the first question wrong? Because AI is simply a statistical analysis tool, and of course we should use statistical tools to process and understand jobs data.
AI is generally not what people think it is. It is not an evil robot sent back in time to kill you despite the jolt of fear you still feel when you see that Terminator poster with shining red eyes. It is not a robot overlord enslaving humankind to prevent us from harming ourselves (apologies to Isaac Asimov).
Today, AI is just machine learning code that allows us to process data that we could not readily process at scale, even a few years ago. Specifically, that means unstructured, complex, and messy data such as images, video, audio, and text. A good rule of thumb, though, when using AI in hiring is not to use data that candidates do not consciously provide employers for use in the hiring process. This includes videos of candidates, images, or audio from their interviews. The reason is that candidates do not generally want organizations making decisions about their future based on subjective aspects such as looks and accents. Candidates want to be evaluated based on their objective skills and job-relevant capabilities.
Harnessing AI for Better Hiring
However, you can use AI to process candidates’ words in an interview. When properly developed, this technology can help hiring teams more accurately analyze candidate responses and ensure they match specific job requirements. Using transcribed words, deep learning and natural language processing can score what candidates say against job-relevant competencies.
Studies have shown that these AI-generated scores are nearly identical to those given by trained human raters, while producing almost four times lower group differences. This is an example of how you can harness AI to improve the efficiency of the hiring process while also improving diversity.
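To make the idea of scoring transcribed interview answers against competencies concrete, here is a minimal, illustrative sketch. It is not the method used by any real hiring vendor: production systems rely on deep learning and trained human ratings, whereas this toy example simply measures word overlap between a transcript and a hypothetical competency description using bag-of-words cosine similarity. All names and example texts below are invented for illustration.

```python
# Toy sketch: score an interview transcript against competency descriptions
# using bag-of-words cosine similarity. Real systems use far richer NLP models;
# this only illustrates the general "text vs. competency" scoring idea.
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase the text and split it into simple word tokens.
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    # Cosine similarity between two bag-of-words Counters (0.0 to 1.0).
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def score_transcript(transcript, competencies):
    # Return a 0-1 relevance score for each named competency.
    words = Counter(tokenize(transcript))
    return {name: round(cosine_similarity(words, Counter(tokenize(desc))), 3)
            for name, desc in competencies.items()}

# Hypothetical competency definitions and candidate answer.
competencies = {
    "teamwork": "collaborate with team members to share ideas and resolve conflict",
    "problem solving": "analyze a problem identify options and implement a solution",
}
transcript = ("I worked with my team to analyze the problem "
              "and we implemented a solution together")
print(score_transcript(transcript, competencies))
```

Note that a keyword-overlap score like this would penalize candidates who use different vocabulary than the competency definition; that is exactly the kind of subtlety that motivates both deeper models and the fairness audits discussed below.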
Another excellent example of how AI can be a game-changer in hiring is through online assessments designed to help candidates match with jobs. Because of AI's predictive capabilities, candidates who initially may not have thought they were a fit for a specific role can be recommended for it. This can lead to a significant increase in diversity for historically less diverse jobs.
Ensuring Ethical AI
None of this is magic. Effective and ethical hiring AI results from building careful and rigorous algorithms based on scientific research, and developers must continuously refine those algorithms to ensure they are predictive of job performance and fair in candidate evaluations.
Yes, carelessly developed algorithms can cause bias. We have witnessed countless examples of this in the news media, including a chatbot that became racist after being trained on user responses and facial recognition software that is less accurate on minority faces. But just as AI can uncover (and accidentally scale) bias, it can also identify bias and allow us to remove or correct it. In this way, AI can both cause bias and help solve it.
Regulating AI in Hiring
AI is a powerful and beneficial technology that can improve the human experience when developed and deployed responsibly and ethically. It is up to us (society, government, citizens) to harness AI to ensure this is the case. Recently, there has been a raft of local, state, and national attempts to rein in AI and algorithms, including in New York City, California, Illinois, and even in the US Senate. These laws aim to protect privacy and mandate regular audits or impact assessments to ensure that AI and algorithmic tools are working correctly and not causing harm.
In most cases, these regulations don't represent unduly burdensome oversight but rather the ethical, correct way to use a powerful technology such as AI. Nor do they require dramatically changing the way legitimate technology-enabled hiring is done. Going back to the Civil Rights Act of 1964, which defined protected classes of people, and the Uniform Guidelines on Employee Selection Procedures from 1978, our field has long focused on equitable hiring tools. The newer AI-oriented regulations attempt to go further, requiring regular and independent audits of any high-stakes algorithmic tool. This is entirely reasonable and even necessary.
AI is not going away; it is simply too powerful a tool. But we need to harness it for the benefit of humanity, not just corporations or governments. I believe a tool fails this test if it seeks not to assist but to control a human. To ensure a future where AI improves human lives, we all need to work together to call attention to these issues and implement appropriate regulations.
Eric Sydell, Ph.D., is the executive vice president of innovation at Modern Hire.