An Unreliable Professional Insanity Test
“Insanity: doing the same thing over and over again and expecting different results.”— falsely attributed to Albert Einstein and Benjamin Franklin
You’re familiar with the oft-invoked maxim “Insanity is doing the same thing over and over again, and expecting different results.”
It’s trotted out and trumpeted by concerned friends or colleagues when you stick with a sales pitch that never works, refuse to switch a cologne that attracts only mosquitoes, persist in listing “email@example.com” as your employment contact email (without ever getting a reply), or dress too casually for all your job interviews, which lead nowhere.
However, as career or life advice, that observation is seriously overstated and likely to cause trouble if followed on blind faith, unless the cases in which it is valid are systematically teased out from those in which it isn’t. For example, imagine how far Thomas Edison would have gotten with the invention of the filament-based incandescent lightbulb had he quit one test shy of the more than 1,600 he ultimately ran, just one short of his successful filament material. The Smithsonian history of Edison and the light bulb would never have been written as follows:
“One tough piece was finding the right material for the filament–that little wire inside the light bulb. He filled more than 40,000 pages with notes before he finally had a bulb that withstood a 40 hour test in his laboratory. (10) In 1879, after testing more than 1600 materials for the right filament, including coconut fiber, fishing line, and even hairs from a friend’s beard, Edison and his workers finally figured out what to use for the filament–carbonized bamboo.”
(Actually, being really smart, Edison probably would have justified such indefatigable persistence on the grounds that no two of the trials were exactly the same, either with respect to the materials tested or the protocol followed. More on this, below.)
The Cases of Stephen King and Mother Nature
How about water-boarding or electro-shocking interrogators who gave up after only two or three interrogations? Then there are the cases of persistent authors, such as Stephen King and J.K. Rowling, who achieved success only after countless submissions of the same writings, through the same kinds of channels, weathering the same kinds of rejection. Is “You gotta do the numbers!” any less valid as a principle of effort? For example, consider “Carrie”, King’s first (eventually) published novel:
“His first published novel was rejected so many times that King collected the accompanying notes on a spike in his bedroom. It was finally published in 1974 with a print run of 30,000 copies. When the paperback version was released a year later, it sold over a million copies in 12 months.” (Excerpt from “The rejection letters: how publishers snubbed 11 great authors”, The Telegraph, June 5, 2014)
Even Mother Nature hardly treats the insanity principle as axiomatic. Consider metal fatigue: Is it insane to expect a piece of metal to break cleanly after being repeatedly bent and straightened the same way, even though that doesn’t happen on the first dozen attempts? Reverse the question: Is it insane to be concerned about possible metal fatigue in engineered structures?
What about a statistical version of the insanity principle? “It is insane to do the same thing over and over again, yet to expect a different result in more than X% of the attempts.” At what value of X does persistence or repetition become nutty? For example, mailing out fliers advertising your job listings and employment services: is a 1% response rate rationally justifiable? In cases like these, in which rationality is gauged in terms of cost-benefit ratios or differences, or break-even points, it is easy to calculate the net gain, loss, or break-even response rate, in terms of sales and deals, required to cover the costs.
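To make the break-even arithmetic concrete, here is a minimal sketch. Every figure in it (cost per flier, batch size, profit per closed deal) is an invented assumption for illustration, not a number from the discussion above:

```python
# Break-even response rate for a hypothetical flier mailing.
# All figures below are illustrative assumptions.

def break_even_response_rate(cost_per_flier, fliers_mailed, profit_per_deal):
    """Smallest response rate at which closed deals cover the mailing cost."""
    total_cost = cost_per_flier * fliers_mailed
    deals_needed = total_cost / profit_per_deal
    return deals_needed / fliers_mailed

# Mail 10,000 fliers at $0.50 each; each closed deal nets $200:
rate = break_even_response_rate(0.50, 10_000, 200.0)
print(f"break-even response rate: {rate:.2%}")  # 0.25% on these numbers
```

On these assumed numbers, any response rate above 0.25% pays for itself, so a 1% rate would be comfortably sane; with different costs or margins the same 1% could be ruinous.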
However, in scenarios that lack an easily calculated, monetized measure of net gain or loss, it becomes much harder to specify a value of X for the statistical insanity principle. Suppose you know in advance that 1,000 auditions are required, on average, to land the kind of part you want in a Broadway musical. Is it insane to try even once, let alone 999 times?
You could try to tally transportation, head shot, wardrobe, waiting, opportunity and other costs, weighed against the cash and other rewards of landing a part, but the calculation is unlikely to be clean and simple. Besides, there is the (delusional) cockiness that convinces you that the average stats and odds apply only to average people, not to you. This latter consideration can be crucial and serves as a reminder to be as objective and precise as possible when estimating one’s own chances, lest the form of insanity you succumb to be delusional, if not purely statistical.
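For what it’s worth, the rough shape of that unclean calculation can still be sketched. Everything in this toy model (the success probability per audition, the value of a part, the all-in cost per attempt, the independence of the trials) is a made-up assumption:

```python
# Expected net payoff of a run of auditions (toy model, invented numbers).

def expected_net(p_success, reward, cost_per_attempt, attempts):
    """Expected payoff, treating auditions as independent trials:
    chance of at least one success, times the reward, minus total costs."""
    p_at_least_one = 1 - (1 - p_success) ** attempts
    return p_at_least_one * reward - cost_per_attempt * attempts

# If 1 audition in 1,000 succeeds, a part is worth $50,000 to you,
# and each attempt costs $40 all-in, what are 999 tries worth?
print(round(expected_net(0.001, 50_000.0, 40.0, 999), 2))
```

On these particular invented numbers the expected value comes out negative, which is precisely the point: whether 999 tries is sane depends entirely on the assumed inputs, including the self-serving ones, not on the repetition itself.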
The usefulness of any principle or proverb is bounded and defined by its proper domain, the boundary and background conditions prerequisite for its truth and applicability, and by the instances and domains in which it does not hold. A comparison of “Haste makes waste” and “He who hesitates is lost”—with respect to the domains and boundary conditions in which they do and do not hold—will amply illustrate that fact.
So, what about the repetition-based insanity test? What are the prerequisites, boundaries, conditions and domains in and under which it does (or does not) hold?
- Maximal specificity condition: Revisit Edison’s experimentation with light bulbs and their filaments. If each experimental trial with a filament type differed from the others in some critical way, e.g., degree of electrical conductivity of the material or the material itself, then Edison could not be accused of “doing the same thing and expecting a different result”, even though at a more abstract level of description he was indeed doing the same thing over and over again—namely, testing light bulb materials to find a winning combination.
Hence, a necessary condition for the repetition insanity test to apply to a given case is that the trials be described with the degree of specificity that prima facie warrants conducting them in the first place. If there is no degree of descriptive specificity that distinguishes the trials from each other and the trials are yielding the same (failing) results, a preliminary warrant for calling the persistence insane exists (without being either conclusive or sufficient).
- Fatigue and threshold dynamics: Any series of repetitions that, although identical as actions, creates “hidden” cumulative states (e.g., states observable only through micro-measurements or by means of very sophisticated technology or analysis), such as metal fatigue, may be eminently sane, whether for the purpose of proving or creating such an effect or of proving it does not exist (as when testing flexible wing joints to ensure that the same null result, no snapping, is obtained).
Neuronal excitation is another illustration of this phenomenon: Is it insane to repeatedly stimulate a neuron in the attempt to get it to fire when every isolated attempt to do so fails? No, not if the repetitions are fast enough to create a threshold firing effect. (In this instance the frequency or speed of the repetitions and pulses is a crucial prerequisite or boundary condition for justifying the persistence and repetitions.)
There’s nothing crazy about doing that. (A similar line of argument can be framed for critical-mass and synergy phenomena: persist in adding yet another bit of something and a different result will eventuate, e.g., uncontrolled nuclear fission, a camel’s back broken by the last straw, or a team-based brainstorming success.)
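The threshold effect can be caricatured with a toy leaky-integrator model. All parameters here (pulse size, decay rate, threshold) are invented for illustration: each identical pulse adds some “potential” that leaks away between pulses, so only sufficiently fast repetition of the very same stimulus ever crosses the firing threshold.

```python
# Toy leaky-integrator: identical pulses, different inter-pulse intervals.

def fires(n_pulses, interval, pulse=1.0, decay=0.9, threshold=3.0):
    """True if the accumulated potential ever reaches the threshold.
    The potential is multiplied by `decay` once per time-step of the
    interval between pulses (the leakage)."""
    v = 0.0
    for _ in range(n_pulses):
        v += pulse                 # each stimulation is identical
        if v >= threshold:
            return True
        v *= decay ** interval     # leakage before the next pulse
    return False

print(fires(10, interval=1))    # True: fast repetition sums past threshold
print(fires(100, interval=10))  # False: slow repetition never accumulates
```

The same action, repeated, yields a different result or not depending solely on a boundary condition (the interval), which is exactly why the bare insanity maxim cannot adjudicate such cases.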
- Quality control: Suppose you manufacture parachutes and decide to test every one of them, e.g., in a wind tunnel or through tensile-strength tests, to ensure safety and reliability. You’re “doing the same thing over and over again”, but are you expecting a different result? No, you’re hoping you don’t get one—and are prepared to take steps in case you find a defective chute. That seems to be a clearly sane thing to do.
But wait—if it is insane to do the same thing over and over again and expect a different result, isn’t it arguably equally insane to persist in doing that to ensure you don’t get a different result? That’s because the same assurance that you rationally should not expect a different result is an assurance that there is no rational need to test for a different result, quality control or no quality control. From the standpoint of this line of argumentation, running exhaustive quality control tests should be as insane as testing a pail of golf balls to see whether they fall at 32 feet/sec² when released from a given height or sniff-testing your egg salad every 10 seconds.
However, exhaustive quality control testing is not insane, even though it is costlier than statistical random sampling, at least when the stakes and liability of producing even one defective item are unacceptably high. It may be argued that parachutes and golf balls are different, because some of the former have failed to open, whereas no golf ball has ever failed to fall as predicted by the laws of gravitation and free fall.
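A simple hypergeometric calculation (with invented batch and sample sizes) shows why random sampling can be rationally unacceptable when the stakes are high: even a seemingly generous sample has a substantial chance of missing the one bad item.

```python
# Probability that a random sample misses every defective item in a batch.
from math import comb

def miss_probability(batch_size, defective, sample_size):
    """Hypergeometric chance that a random sample, drawn without
    replacement, contains no defective item at all."""
    good = batch_size - defective
    if sample_size > good:
        return 0.0  # sample must include at least one defective item
    return comb(good, sample_size) / comb(batch_size, sample_size)

# One bad parachute hidden in a batch of 1,000; inspect 100 at random:
print(miss_probability(1_000, 1, 100))  # 0.9 -- a 90% chance of missing it
```

Only inspecting everything drives the miss probability to zero, which is the sense in which exhaustively testing parachutes is sane while exhaustively testing golf balls is merely redundant.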
The takeaway from this line of argumentation is that the sanity of persistence may have to be defined in statistical terms, per the discussion above, with specified threshold probabilities of failure or exceptions determining the degree of sanity or insanity of persistence.
If, despite my analysis, you persist in believing that it is insane to do the same thing over and over again, but expect a different result, try re-reading this article many times.
You may be surprised by the result…
…even though, despite the misattribution of the insanity principle to him, Einstein wouldn’t.