How to Tell If a Preemployment Assessment Is Valid


Companies have never had more resources to help them find and hire the right candidates. While hiring managers have historically had to rely on resumes, cover letters, and interviews, many more options for preemployment assessment have emerged over the past decade. Some of these new assessments can give companies a much more rigorous idea of the skills and traits job seekers actually possess. Instead of traditional methods rife with bias and the potential for error, employers can use an ever-expanding range of tools to make data-informed hiring decisions.

However, not all assessments are created equal. Employers need to be able to recognize which preemployment assessments are valid and which ones aren’t. To make that distinction, employers should focus on three elements of assessments:

1. Clarity about what they intend to measure
2. Evidence that these traits are, in fact, being measured
3. Evidence that the traits are meaningfully related to job performance

Assessments that can’t clear these hurdles won’t just fail to identify the right candidates — they could also open employers up to reputational damage or even legal action.

While it’s vital for companies to understand best practices for conducting assessments, it’s also instructive to know what validity doesn’t look like. This will help employers make the most of these powerful hiring tools and avoid the most common misconceptions about how assessments should function.

Attitudes Have Nothing to Do With Validity

When determining the effectiveness of an assessment, employers should always ask themselves whether it’s predictive. In other words, is there a clear relationship between how candidates perform on the test and how they perform in the workplace?

Some providers claim their assessments are valid because test takers say the results are consistent with how they see themselves. But given the cognitive distortions we're all prone to, these self-reports are highly unreliable. What's more, they don't reveal anything about how successful a test taker will be on the job.

There are many ways candidates can lead themselves astray when interpreting their own test results. For example, there's the Barnum effect, which the American Psychological Association defines as the "tendency to believe that vague predictions or general personality descriptions … have specific applications to oneself." There's also confirmation bias: our tendency to favor interpretations that support what we already assume. These effects are why companies should determine whether their assessments are predictive rather than attempting to validate them with self-reports of perceived validity.

Resist the Urge to Make Spurious Connections

If there's one iron law of statistics that almost everyone knows, it's that correlation doesn't imply causation. This principle should always be top of mind when companies are trying to figure out whether a preemployment assessment is valid.

For example, let’s say your high-performing healthcare workers tend to be more sensitive and caring than the general population. This might lead you to actively seek out candidates who have high scores on tests that measure their sensitivity. But this would be a superficial and counterproductive conclusion if sensitivity didn’t actually cause higher performance. Indeed, it’s entirely possible that sensitive and caring people are attracted to the healthcare industry in general, and that both high-performing and low-performing workers are caring and sensitive. Perhaps other factors (such as goal orientation or discipline) actually distinguish the top performers from their underwhelming peers.
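This selection story can be simulated. In the hypothetical sketch below (all numbers invented), sensitivity influences who enters healthcare but has no effect on performance, which is driven entirely by discipline. Sensitivity still looks elevated among top performers:

```python
import math
import random

random.seed(0)

people = []
for _ in range(10_000):
    sensitivity = random.gauss(50, 10)
    discipline = random.gauss(50, 10)
    # Sensitive people are more likely to choose healthcare...
    p_join = 1 / (1 + math.exp(-(sensitivity - 55) / 5))
    in_healthcare = random.random() < p_join
    # ...but performance depends only on discipline.
    performance = discipline + random.gauss(0, 5)
    people.append((sensitivity, in_healthcare, performance))

def avg(xs):
    return sum(xs) / len(xs)

healthcare = [p for p in people if p[1]]
top = sorted(healthcare, key=lambda p: p[2], reverse=True)
top = top[: len(healthcare) // 5]  # top 20% of performers

pop_avg = avg([p[0] for p in people])
hc_avg = avg([p[0] for p in healthcare])
top_avg = avg([p[0] for p in top])
print(f"sensitivity -- population: {pop_avg:.1f}, "
      f"healthcare: {hc_avg:.1f}, top performers: {top_avg:.1f}")
```

Top performers do score higher on sensitivity than the general population, but so do their low-performing colleagues, so screening on sensitivity would add nothing.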

Companies should also remember that some skills and attributes predict job performance more reliably than others. General cognitive ability and conscientiousness, for example, together account for an estimated 20 to 30 percent of the variability in job performance.
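The link between a validity coefficient and "variability explained" is just r squared. With illustrative coefficients of the rough magnitude reported in the meta-analytic literature (the assumed values below are for demonstration only; exact figures vary by study and job):

```python
# Illustrative validity coefficients (assumed values, not from any
# single study): correlation between each trait and job performance.
traits = {"general cognitive ability": 0.50, "conscientiousness": 0.30}

# Variance explained by a single predictor is r squared.
variance_explained = {t: r ** 2 for t, r in traits.items()}
for trait, v in variance_explained.items():
    print(f"{trait}: {v:.0%} of performance variability")
```

Individually these work out to 25 and 9 percent; because the two traits overlap somewhat, combining them in a single model lands roughly in the 20-to-30-percent range cited above.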

Be Wary of Excess Complexity

Some assessments today draw upon significant quantities of data, which can only be processed with increasingly complex machine-learning algorithms. While this may sound impressive, complexity can hinder the development and evaluation of predictive assessments. The less companies know about exactly how their assessment algorithms function, the less equipped they’ll be to address any problems that arise.

One of the most notorious examples of an assessment gone awry came when Amazon had to discontinue a recruiting engine that turned out to be biased against women. The engine's machine-learning algorithm vetted candidates by identifying patterns in applications submitted over a 10-year period; because most of those applications came from men in a male-dominated industry, it learned to favor resumes from male candidates. A 2019 study by researchers from Microsoft and Cornell University found a broad lack of transparency in hiring algorithms, a problem often exacerbated by unnecessary complexity. If you don't even know what your algorithms are up to, you won't be able to make adjustments as needed.

Although advanced algorithms and the influx of available employment data in the hiring process can help companies develop more precise assessments, companies should remember that more information and complexity don’t always lead to validity.

Replicating Your Past Hiring Decisions Isn’t Always a Great Way to Go

There's a category of assessments that don't even try to predict performance on the job; they simply try to automate and replicate the decisions of your current recruiters or hiring managers. These tools identify the patterns in resumes, video interviews, or assessment results that lead to candidates getting hired, and then advance applicants who resemble those recent hires.

The problem here is that there’s no guarantee your recruiters and hiring managers are doing a good job in the first place. Indeed, if they’re like most hiring managers, their hiring decisions are riddled with inconsistencies and biases. Additionally, the traits and characteristics a person needs to succeed at the interview might be quite different from the traits and characteristics they need to succeed in the job.

Matthew Neale is an I/O psychologist and vice president of assessment products at Criteria.

