The Impact of the “Flynn Effect” on Recruitment
—Have globally skyrocketing IQs affected applicant screening?
“It is as if some unseen hand propelled scores upward at an unvarying rate between 1952 and 1982, a rate of 6 IQ points per decade, with individual nations scattering randomly around that value… Culture-reduced tests of fluid intelligence show gains of as much as 20 points per generation (30 years); performance tests show 10-20 points; and verbal tests sometimes show 10 points or below.”—James R. Flynn, in his 20-nation study, “IQ Gains Over Time: Finding the Causes”, 1998
“At present rates of fertility and mortality and in the absence of changes within countries, the average IQ of the young world population would decline by 1.34 points per decade and the average per capita income would decline by 0.79% per year.”—Gerhard Meisenberg, “Wealth, Intelligence, Politics and Global Fertility Differentials”, Journal of Biosocial Science, 2009
If applicants, clients and recruiters are all, on average, getting smarter (or dumber) with each passing year and passing generation, what are the implications for the business of recruitment? “If” is actually the wrong word. It should be “when”, which, according to the research, may have been as early as the industrial revolution, or as recently as 1918, 1932 or 1950, depending upon which research and criteria of intelligence are tapped.
The “Flynn Effect”
In 1994, American-born psychologist James R. Flynn, now a professor at the University of Otago, Dunedin, New Zealand, made news headlines and scored a flood of research citations because of the “Flynn Effect”, named after him by the authors of the best-selling 1994 book The Bell Curve, Harvard psychologist Richard J. Herrnstein and American Enterprise Institute political scientist Charles Murray. (J. Philippe Rushton, a University of Western Ontario researcher whose own research claims about white-black IQ differences have stirred up a hornet’s nest of controversy, has argued that the Flynn Effect should be called the “Lynn-Flynn Effect”, after researcher Richard Lynn, because of a 1982 article by Lynn in the respected journal Nature, which identified the trend in Japan.)
The Bad News Tolled by The Bell Curve
In a sense, The Bell Curve seemed to break the bad news, while Flynn heralded the good. The—to many, gloomy—central thesis of The Bell Curve is that, more than anything else, IQ is the predictor of your odds, if you are a non-Hispanic white, of being unemployed, living in poverty, getting divorced, being incarcerated, having an illegitimate child, being a chronic welfare recipient, and/or being a high school dropout. IQ even trumped parents’ socio-economic status (“SES”, in the research jargon) as a predictor of these outcomes, e.g., 0% of those surveyed with IQs over 125—apparently M.A./Ph.D.-level intelligence—were chronic welfare recipients, high school dropouts or ever incarcerated. Not one. However, critics of The Bell Curve—and there are many—argue that Herrnstein and Murray got it entirely backwards: Low SES causes low IQs, not vice versa, the critics insist. To follow that debate, you can read the aptly titled 1995 book, The Bell Curve Debate, a collection of 81 expert essays.
What makes this bell-curve hypothesis come across as bad news to many is that The Bell Curve research suggests that even though you cannot easily shape your IQ, it is virtually certain to shape you and your life options and outcomes. Unlike parental SES—which, thanks to the socio-economic upward mobility typical of developed countries, can be surpassed—one’s IQ, like a tattoo, is generally both ineradicable and inalterable.
The Good News Told by Flynn
On the other hand, the “good news”, disseminated by Flynn’s research, is that IQs have risen dramatically in the past two generations, and not only in the U.S. or some select countries, but globally, with estimates of the increases varying between as many as 30 points and as few as 5, depending on which test was administered and re-administered to determine the trend, the age group, the country, the time span, etc.
In his 1984 paper, “The Mean IQ of Americans: Massive Gains 1932 to 1978”, Flynn reported:
“This study shows that every Stanford-Binet and Wechsler standardization sample from 1932 to 1978 established norms of a higher standard than its predecessor. The obvious interpretation of this pattern is that representative samples of Americans did better and better on IQ tests over a period of 46 years, the total gain amounting to a rise in mean IQ of 13.8 points.” (“IQ Gains Over Time”, Encyclopedia of Human Intelligence, 1994).
The basic approach used in all of these studies was to administer the most current IQ test of a given type, e.g., Raven Progressive Matrices, Stanford-Binet or Wechsler, and then re-test using the much older tests of previous decades. The results consistently revealed much higher scores on the re-testing—increases not attributable to the mere fact of being retested (an effect that was carefully controlled for and eliminated in the testing).
This result has been interpreted by Flynn and the research community as indicating that a score of 100 on a more recent test actually is equivalent to a much higher score on the earlier tests, the gain depending on the test and the age group. One figure cited is that the average IQ is no longer the nominal 100 of the normalized bell curve, but more like 113.8-115 (depending on the test), which is more than enough for high school graduation and many college diplomas, whereas 100 is 4-5 points short of high school graduation, on average.
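To make the re-norming arithmetic concrete, here is a minimal Python sketch. The constant gain rate is an assumption back-derived from Flynn’s reported 13.8-point rise over the 46 years from 1932 to 1978; actual gains varied by test, age group and period:

```python
# Sketch: convert a score on a recently normed IQ test to its rough
# equivalent against an older test's norms, assuming a constant drift.
# The default rate (13.8 / 46, about 0.3 points per year) is an
# assumption derived from Flynn's 1932-1978 figure, not a universal law.

def equivalent_on_old_norms(score: float, new_norm_year: int,
                            old_norm_year: int,
                            rate: float = 13.8 / 46) -> float:
    """Estimate what `score` on a test normed in `new_norm_year`
    would look like against norms set in `old_norm_year`."""
    drift = rate * (new_norm_year - old_norm_year)
    return score + drift

# A nominal 100 on a 1978-normed test maps to roughly 113.8
# against 1932 norms, matching Flynn's aggregate figure.
print(round(equivalent_on_old_norms(100, 1978, 1932), 1))  # 113.8
```

This is where the “average IQ is really 113.8-115” figure comes from: the test-taker’s raw performance has not changed, only the yardstick against which it is measured.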
What is astonishing about the Flynn Effect is that it is world-wide, dramatic in terms of the massive increase in average IQ scores, and that its cause or causes are still being debated and investigated. Equally dramatic are some of the implications for matters of life-and-death, and, less dramatically, for recruiting.
What has continued to mystify and divide researchers ever since Flynn published his results is the question of what has caused the global increase in IQs—which many of them, including Flynn, have argued is not the same thing as a global increase in the genetic component of intelligence, at least because the time scale is far too short for “natural selection” or evolution to have bred such a huge jump in intelligence.
Why the Surprise Rise?
The following have all been cited, debated, dog-eared and/or dismissed by some researcher or other:
- Improved test preparation and practice, e.g., because of modern cram schools, standardized test guides
- The ongoing modern shift from concrete to abstract thinking (Flynn’s most recent hypothesis, in his book What Is Intelligence?)
- The stimulation and information explosion of the 20th century
- The rise of computer skills and jobs
- Dramatically increased post-secondary education (necessitating more academic “streaming”, at the expense of vocational training, and a shift in emphasis to verbal skill cultivation in high school)
- Longer schooling in general (including cram schools, private tutoring, expansion of public school services, longer school year, e.g., Japan and China)
- Global urbanization
- Complex technology (requiring and engaging high-level cognition, e.g., choosing software settings)
- Global gains in socio-economic status (with attendant nutritional and motivational gains)
- Smaller families (only children have, on average, higher IQs—which may partially account for the Chinese whiz-kid phenomenon)
- Delayed childbirth (e.g., to the extent it correlates with improved SES)
- Flaws in IQ test design and/or administration (e.g., some of the re-tests were unsupervised)
- Eradication of various childhood diseases
- “Lamarckian inheritance”, viz., genetic transmission of acquired skills to the next generation
- Decline of farming (and its more concrete, motor-skill oriented intelligence)
- The Industrial Revolution (at least as a catalyst for the decline of “agrarian intelligence”)
- (“Educational”) TV
- Video games (contributing to certain cognitive skills, such as tracking multiple stimuli, short-term memory and eye-hand coordination, but at the almost certain expense of others, such as thinking)
- The unique visual characteristics of Chinese written characters (that require visual processing similar to that of visual IQ tests, like the Raven—as a partial explanation of high Chinese scores)
- Better schools and teachers (rejected by Flynn, noting that gains tend to disappear the more the test content matches that in school curricula)
The Lethal Implications of Rising IQ Scores
In his 2007 book, What Is Intelligence?, Flynn cites a potentially lethal consequence of rising IQ scores: the use of outdated IQ tests and norms could lead to the execution of someone convicted of a capital crime who, by today’s standards, would be legally adjudged mentally disabled, although normal as measured by his test taken decades earlier.
Before considering the implications of this massive shift in IQ scores, trying to fathom the causes seems a reasonable prior task, to the extent that identification of the causes can reveal the implications. For example, if the gains are attributable to the quick visual judgments required in playing globally-marketed computer games—a skill more germane to “fluid” visual tests like the Raven Progressive Matrices than to the “crystallized” Stanford-Binet and Wechsler tests—then the large observed Raven-based increases in IQ scores will matter only in recruitment for those kinds of skills.
On the other hand, if improvements in nutrition and elimination of toxins, such as air- or water-borne lead, are key factors, the gains will, once again, not translate into anything relevant to high-level recruiting. That’s because the improved nutrition and elimination of lead will overwhelmingly affect those whose low scores were caused by their under-exposure to nourishing food and overexposure to lead. Hence, despite the increase in their IQs, they still won’t be competitive with the highest scorers, who are precisely those who are overwhelmingly most likely to apply for high-powered jobs.
Otherwise, if the gains are to a significant extent attributable to ECE (early childhood education), or to watching TV, surfing the Net or reading owner’s manuals for hi-tech gizmos, the score increases are more likely to show up in applicant traffic.
(Non-)Implications for Recruitment
What are the implications—even if not so grave—for recruitment? Does it mean that a 25-year-old job applicant is likely to have an IQ substantially higher than a 50-year-old applicant, even if that younger applicant can’t stop saying “I’m like….”? No.
First, some of the data indicate that the older the test subject, the greater the gains have been over the years, relative to same-age test takers of decades past. Flynn cites British Raven test data that show gains of 20 points for 18-32-year-olds between 1942 and 1992, but gains of 30 points in the 33-67 age group!
Second, according to some studies and some tests, the greatest gains have been among the low scorers on IQ tests—people unlikely to end up in your recruiting office—causing a skewing, rather than a simple and uniform shifting, of the scores.
These results indicate that even though the average IQ score has dramatically increased, the increases have not been uniformly distributed across the bell curve. What this means is that the mental picture of an IQ bell curve being rigidly shifted as a whole to the right, with the mean shifting from 100 to, e.g., 113.8, is the wrong image. Instead, the far-left, lower scores have disproportionately shifted to the right, while the highest scores have hardly budged. The genius boom has not happened. (On the other hand, in one paper, Flynn maintains that the increase is uniform and across the board. However, this seems to be a “minority report” in the research corpus.)
Flynn Non-Effect on Short-Listing
Given that the average applicant a recruiter will be short-listing is very likely to come from the high end of the scale, where scores have risen very little, if at all, the Flynn Effect is not so likely to manifest itself in the screening.
This “null” observation is of critical importance in offsetting any age bias that might be based on the misconception that the Flynn Effect suggests that younger applicants, on average, will have higher IQs, and therefore be “smarter”, on average, than older applicants.
Credit the Schools?
It also merits emphasizing, as Flynn noted, that the most dramatic gains in scores have not been on tests like the Stanford-Binet that involve reading, vocabulary and other school-influenced skills. The Stanford-Binet and the Wechsler IQ tests measure “crystallized intelligence” that includes verbal skills and general knowledge-influenced dimensions of intelligence, unlike the Raven Progressive Matrices, which is a purely visual IQ test involving row and column information processing.
It’s the latter kind of test, across many cultures, that has had the biggest jumps in scores—suggesting that teachers and schools can’t take much credit for the gains and that verbal fluency in an interview is an unsound basis for concluding that the applicant, of any age, must be one of the highest-gain IQ test-takers, e.g., someone who took the Raven Progressive Matrices test.
Moreover, there is clear and paradoxical evidence that the younger applicant may be drawn from a pool that, on average, had substantially lower S.A.T. scores than the older applicant’s age cohort. In “The Mean IQ of Americans: Massive Gains 1932 to 1978”, Flynn, addressing the precipitous drop in S.A.T. scores between 1963 and 1981, states:
“…these values entail a decline in non-IQ personal traits, motivation, self-discipline, and so forth, from 1963 to 1981 of such magnitude as to constitute a national disaster… if both IQ gains and SAT losses are taken to be real, rather than artifacts of sampling error, then the deterioration of non-IQ personal traits among young Americans must have been very great.”
But 1984, except perhaps politically, is a long time ago. To what extent that deterioration has continued up until today is hard to gauge, given that the S.A.T. was replaced in 2005 by the “S.A.T. Reasoning Test”, which has a drastically different format and contents.
The Need for Caution
Still, Flynn’s data and observations are quite relevant to the recruiting process, for the inverse correlation between IQ and S.A.T. trends can manifest itself even in the formation of informal judgments of an applicant’s skills and aptitudes. As a recruiter, you are not very likely to ask for or get IQ, S.A.T. or other applicant scores. You are far more likely not even to care. But, like everyone else, you can’t help getting impressions, having hunches and forming opinions about how “smart” an applicant is and in what ways.
Like a former president of Mensa, who identified “intelligent” people by a “certain sensitivity around the mouth and eyes”, you will, of necessity, if your processing does not include aptitude testing, use your intuition. Hence, given the inverse IQ-S.A.T. correlation or the latest equivalent inverse relationship, you may get a confused impression of an applicant’s abilities, or worse, mistakenly take the higher/lower of the two indicators as the only indicator.
For example, an applicant caught up as a statistic in the precipitous decline in written and oral fluency may nonetheless have high-order intelligence. If you can’t see that, imagine Stephen Hawking. On the other hand, a highly articulate candidate possessed of a phenomenal memory may be unable to reload his stapler or follow your corporate structure flow chart.
That caveat offered and given that you probably cannot help speculating on or estimating an applicant’s “intelligence”, what workable concept of intelligence can you adopt? That’s a challenge, especially since IQ scores have increased much more rapidly than the genes that determine intelligence could have, suggesting, as Flynn’s research does, that the gains are due to changes in what is learned, how it is learned, how soon and how fast, not in innate capacity.
Duking It Out
My advice about how to approach applicant “intelligence” is implicit in what, in retrospect, I regard as one of the most entertaining moments in my entire education: In a psychology seminar I audited for fun during my graduate studies in philosophy at Duke University, the professor—visiting from Princeton, as I recall—asked the group, “What is intelligence?”—just as Flynn did, more recently, in the title of his 2007 book.
A very eager graduate student rattled off a pedagogical mantra: “It’s what we measure utilizing a Stanford-Binet test, cross-correlated or supplemented with a Wechsler, based on test items that are valid and reliable…”
The professor interrupted her and said, “No, no….What we do is look at someone and say, ‘Hmmm…looks intelligent.’ We then administer all sorts of tests until we find the one that best confirms our intuition.”
Not ready to give up, the student shot back, “But what happens if you look at someone and say, ‘intelligent’, but I look at him and say, ‘not intelligent’?”
Clearly prepared for this and/or very intelligent, the prof, faster than lightning, shot back, “Well, in that case….
….we take a second look at you.”