Explaining AI: What Talent Acquisition Pros Need to Know About Machine Learning and More
With artificial intelligence (AI) progressing by leaps and bounds, there has been a lot of talk about what it all could mean for the way we work. These conversations have been especially persistent in industries like recruiting, where key processes have long been very focused on human-to-human activity.
AI can offer solutions to many of the most common challenges recruiters and hiring managers encounter, all while providing a smoother and more enjoyable candidate experience to job seekers. Unfortunately, despite its well-documented benefits, AI is still misunderstood by many, especially the people who stand to gain the most from it: hiring teams.
These misunderstandings have driven HR and recruiting professionals to fear AI’s implications rather than celebrate them. Fortunately, these fears can be assuaged with a little more clarity on what, exactly, AI is.
To help expose misconceptions and uncover the truth, I want to take time today to explore three of the key components of AI. In particular, we’ll be talking about:
- Machine Learning: A computer’s ability to learn. When data enters the system, the computer can read the information, detect patterns, and make changes or perform tasks based on the input.
- Natural Language Processing: A computer’s ability to understand human speech and text. In other words, a person can speak or type a message to a computer and the computer will understand and respond accordingly.
- Computer Vision: Similar to and sometimes confused with “machine vision,” computer vision is a computer’s ability to process data from a digitized image and take action based on the information gathered.
Notice how each of these AI capabilities allows a computer to interpret human input and behave in humanlike ways. Unfortunately, AI’s ability to successfully engage in human behaviors is the very reason why so many talent acquisition pros fear it.
Traditionally, recruiting and hiring have depended on human interaction because these processes require the understanding and evaluation of complex, abstract concepts like personality and culture. However, if AI can gather information about a person based on visual, textual, and verbal output, does that mean there is no need for actual human intervention?
To answer that question, we’ll need to dive a little deeper.
1. Understanding Machine Learning
Machine learning means a computer can take in new information, process it, learn new behaviors based on the information, and act on those behaviors. Often, people take this to mean that a computer has been given life, in a sense, because it can act without direct, explicit human instruction. However, that is a bit misguided. Machine learning is actually a fairly mechanical process, and the AI needs training from human users to draw accurate conclusions from new information.
If you present a machine-learning-enabled computer with a detailed description of two fruits — say a strawberry and a raspberry — you are presenting it with a “feature set.” In this case, the feature set is a range of weights, textures, colors, and other characteristics ascribed to the strawberry and the raspberry. Based on this information, the computer will be able to discern between strawberries and raspberries without much intervention.
However, if you present the computer with a third fruit it hasn’t yet been exposed to, it will not be able to identify the new fruit accurately. The computer has only the information it has been given to work with. So, if the computer only knows strawberries and raspberries, every fruit it encounters will be sorted into one of those categories. A blackberry will probably wind up with the raspberries, kiwis might be strawberries, and there’s no telling where an apple would fall.
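To make the feature-set idea concrete, here is a minimal sketch in Python. The weights and “redness” scores are invented for illustration (a real system would use far richer features), but it shows the key limitation: the classifier can only choose among categories it has been trained on.

```python
# A toy nearest-centroid classifier over a hypothetical feature set:
# (average weight in grams, redness on a 0-1 scale). All values are made up.

def nearest_category(features, training_data):
    """Assign features to the closest known category (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(training_data, key=lambda cat: distance(features, training_data[cat]))

training_data = {
    "strawberry": (12.0, 0.9),
    "raspberry": (4.0, 0.8),
}

# A blackberry (small, dark) gets lumped in with the raspberries...
print(nearest_category((5.0, 0.2), training_data))    # "raspberry"
# ...and an apple is still forced into one of the two known categories.
print(nearest_category((150.0, 0.7), training_data))
```

The point is not the math but the constraint: with only two categories in its training data, the system has no way to say “this is something new.”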
Typically speaking, machine learning depends on three different kinds of algorithms:
- Supervised Learning: Data sets are labeled so that patterns are detected and used to label additional incoming data.
- Unsupervised Learning: Data sets aren’t labeled, and the computer sorts the data according to the similarities and differences it finds.
- Reinforcement Learning: Data sets aren’t labeled, and the computer instead learns from trial-and-error feedback.
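A toy illustration of the three styles, with invented numbers, might look like this:

```python
# 1. Supervised: examples arrive with labels; a new value takes the label
#    of the nearest known example.
labeled = {3: "small", 5: "small", 20: "large", 24: "large"}

def supervised_predict(x):
    return labeled[min(labeled, key=lambda k: abs(k - x))]

# 2. Unsupervised: no labels; values are split into two groups at the
#    largest gap the computer finds on its own.
def unsupervised_split(values):
    values = sorted(values)
    gaps = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return values[:cut], values[cut:]

# 3. Reinforcement: trial and error; each guess is nudged by simple
#    "warmer/colder" feedback until it lands near the target.
def reinforcement_guess(target, steps=100):
    guess = 0.0
    for _ in range(steps):
        guess += 1.0 if guess < target else -1.0
    return guess

print(supervised_predict(4))            # "small"
print(unsupervised_split([3, 5, 20, 24]))  # ([3, 5], [20, 24])
print(reinforcement_guess(7))           # within 1 of 7
```

Real algorithms are far more sophisticated, but the division of labor is the same: labels, self-discovered structure, or feedback.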
Deep Learning vs. Machine Learning
Machine learning requires a great deal of human intervention. If the programmer training the computer doesn’t present very specific details, the computer won’t be able to accurately interpret the input, leading to lower-quality results.
Deep learning is a subset of machine learning. It can detect patterns and use feature sets to make decisions, but it obtains those feature sets in a unique way. Instead of receiving constant input from humans, computers with deep learning capabilities can build their own “understandings” using a network of algorithms called an “artificial neural network.” Deep learning still relies on patterns and doesn’t always detect minute details, but its layered networks produce increasingly accurate predictive models that grow more refined with each pass through the data.
Sometimes deep learning is referred to as “deep neural learning” or “deep neural networking.” Computer programs that use neural networks can learn new concepts and build advanced learning algorithms more quickly and efficiently than those without them.
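For the curious, the smallest possible piece of such a network is a single artificial neuron. The sketch below trains one by gradient descent on an invented task (output 1 for inputs above 0.5, 0 otherwise); every number here is illustrative, and real networks stack thousands of these units in layers.

```python
import math

def sigmoid(z):
    """Squash any number into the 0-1 range."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=5000, lr=1.0):
    """Fit one neuron's weight and bias by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(w * x + b)
            error = out - target
            w -= lr * error * out * (1 - out) * x  # chain rule: adjust weight
            b -= lr * error * out * (1 - out)      # adjust bias
    return w, b

# Made-up training data: inputs below 0.5 should map to 0, above 0.5 to 1.
samples = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
w, b = train(samples)

print(sigmoid(w * 0.2 + b))  # should land well below 0.5
print(sigmoid(w * 0.8 + b))  # should land well above 0.5
```

The neuron was never told the rule “split at 0.5”; it inferred it from examples, which is the essence of what deep networks do at far larger scale.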
2. Understanding Natural Language Processing
A computer with natural language processing (NLP) has the ability to understand and generate text and speech.
Most computers can only understand human inputs when those inputs are encoded in a programming language like Java or Python. Computers with NLP, on the other hand, understand human languages like English, Spanish, and so on. Some of the most common examples of NLP are speech recognition and text translation.
For example, email spam filters are an early form of NLP. These filters scan incoming messages for certain phrases and use that information to determine whether an email is junk. A more modern example is Siri and other virtual assistants, which can understand and respond to even complex voice commands. NLP is also integral to chatbots, which have been making waves in everything from customer service to recruiting.
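The keyword-scoring idea behind those early spam filters can be sketched in a few lines. The phrase list and threshold below are invented; real filters use statistical models trained on millions of messages.

```python
# Score a message by how many known junk phrases it contains.
JUNK_PHRASES = ["act now", "free money", "winner", "limited time offer"]

def looks_like_spam(message, threshold=2):
    text = message.lower()
    hits = sum(1 for phrase in JUNK_PHRASES if phrase in text)
    return hits >= threshold

print(looks_like_spam("You are a WINNER! Act now for free money!"))  # True
print(looks_like_spam("Lunch tomorrow?"))                            # False
```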
3. Understanding Computer Vision
As the name suggests, computer vision is a computer’s ability to “see.”
Human sight may seem simple, but it’s actually a very complicated matter. Multiple biological processes take place in a fraction of a second to allow us to recognize the objects before us. Replicating the mechanisms of human sight in machines is a challenging undertaking.
“For humans, it’s very simple to understand the contents of an image. We see a picture of a dog and we know it’s a dog,” computer vision expert Adrian Rosebrock said in a 2015 interview. “But for a computer it’s not that easy. All a computer ‘sees’ is a matrix of pixels (i.e., the red, green, and blue pixel intensities) of an image. A computer has no idea how to take these pixel intensities and derive any semantic meaning from the image.”
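Rosebrock’s “matrix of pixels” can be shown in miniature. The tiny grayscale image below (0 = black, 255 = white) is invented for illustration:

```python
# A 4x4 "image" as a grid of pixel intensities -- this is all the computer has.
image = [
    [ 10,  12,  11,  10],
    [ 12, 240, 238,  11],
    [ 11, 242, 239,  12],
    [ 10,  11,  12,  10],
]

def average_brightness(pixels):
    values = [p for row in pixels for p in row]
    return sum(values) / len(values)

# The computer can crunch numbers like this instantly...
print(average_brightness(image))  # 68.1875
# ...but nothing in the matrix says "a bright square on a dark background."
# Deriving that semantic meaning is the hard part of computer vision.
```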
Researchers have had success creating sensors and image processors that can match the work of the human eye, and in some cases surpass it. However, that doesn’t necessarily mean the computer can “see.” Modern cameras can record crisp, high-definition images quickly, but that doesn’t mean the computer understands what it has captured.
In other words: We have granted computers sight, but granting them an understanding of what they see is another story. However, researchers are hard at work making this a reality.
Returning to the berry-based example of machine learning above, it’s possible to build a system that can accurately differentiate between strawberries and raspberries based on sight alone. However, if presented with an apple, the system would need new training to recognize and categorize this fruit.
Future advances in computer vision will take the work of engineers, computer scientists, programmers, neuroscientists, and a whole lot of other experts — but that doesn’t mean we haven’t made any breakthroughs. For example, Google’s self-driving vehicles understand signs and react appropriately. Similarly, a team at MIT trained a system to recognize sounds by using computer vision to first recognize the scenes from which the sounds were emitted. Once the system understood what it was seeing, it was better at determining what it was hearing.
Is AI Better at Talent Acquisition Than Humans Are?
Think about it this way: Today’s smartphones feature predictive text keyboards. The predictive text feature helps the phone user type messages more quickly by providing word suggestions. The more the smartphone owner uses their keyboard, the better the predictive text feature learns their habits, vocabulary, and even personal references like friends’ names and location-specific terms. Over time, the predictive text feature’s word suggestions grow more and more accurate.
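The idea behind predictive text can be sketched as a simple word-pair frequency model. The message history below is made up, and real keyboards use much richer language models, but the learning loop is the same: more usage, better suggestions.

```python
from collections import Counter, defaultdict

def build_model(messages):
    """Count which word most often follows each word in past messages."""
    follows = defaultdict(Counter)
    for message in messages:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def suggest(model, word):
    """Suggest the most frequent follower of a word, if any."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical typing history for one user.
history = [
    "running late be there soon",
    "running late again sorry",
    "running late leaving now",
]
model = build_model(history)

print(suggest(model, "running"))  # "late" -- the habit the keyboard learned
```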
And yet, predictive text conversations have not replaced all human interaction. In fact, there are more than a few horror stories of autocorrect fails.
The same can be said for AI in talent acquisition. While it may help teams more accurately and effectively perform key recruiting and hiring tasks, it simply cannot replace real, human-to-human communication and interaction.
AI is reaching new heights, but there is still a great deal of work to be done. In talent acquisition, AI has the potential to lower time-to-hire, improve candidate matching, and prevent negative candidate experiences, all while freeing the hiring team to focus its time and resources on the more high-touch and creative elements of the process.
A version of this article originally appeared on the Red Branch Media Blog.