Will A.I. Technology Destroy More Jobs Than It Creates?—Historical vs. Logical Analysis

It has been claimed that A.I. (artificial intelligence), robot and other automation technologies will eventually destroy more jobs than they create. Indeed, it has been reported that they and other innovations have already done so over the past decade, at least in the United States.

The now-famous 2013 Brynjolfsson-McAfee chart showing the decoupling of GDP from median income since 2003 has been cited, and disputed, as proof of that net destruction.

But even if specific, individual A.I. technologies (e.g., A.I. voice recognition, response and recording software) on balance destroy more jobs or job categories than they create, are any of the following broader claims also true?

1. The net effect of most (or average) technologies, over the course of human history or during some limited period or in some place, is that they destroy more jobs and/or job categories than they create? (This and the three more sweeping claims below have their proponents and doubters.)

2. The net effect of the totality of technologies considered together, over all of human history or some limited period or place within it, is that they destroy more jobs and/or job categories than they create? (This is disputed by Harvard economist Lawrence Katz, whose counter-claim is that no historical pattern shows such shifts leading to a net decrease in jobs over an extended period.)

3. The net effect of new technology or specific technologies from this point on and indefinitely into the future will be job destruction?

4. The total employment income generated by net job-destroying technologies is also less than the total employment income they destroy, e.g., as measured by contribution to GDP?

(If not, these job-destructive technologies will also be income-enhancing, for the fortunate working recipients of the larger, offsetting incomes, such as the current crop of young software engineers at Google earning $250,000 per year, as well as profiting start-up investors, stock speculators, etc.)

Notice that, in any event, destruction of job categories (even without compensating new job categories) may be of no concern if enough of the remaining categories get huge boosts in numbers of workers to at least offset the category losses. So, when reading reports about net job destruction, pay attention to whether the data are about “jobs” as occupied or vacant job slots, or only about the categories into which those slots and workers fall.
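To make that caution concrete, here is a minimal sketch, in Python and with purely hypothetical categories and figures, of how the same employment snapshot can register as net destruction when job categories are counted, yet as net creation when occupied job slots are counted:

```python
# Toy illustration (purely hypothetical categories and figures): the same
# labor-market snapshot can register as net destruction when job categories
# are counted, yet as net creation when occupied job slots are counted.

before = {"telephone operator": 50_000, "file clerk": 30_000, "programmer": 20_000}
after = {"programmer": 120_000}  # two categories destroyed, one category boosted

categories_net = len(after) - len(before)                   # -2 categories
headcount_net = sum(after.values()) - sum(before.values())  # +20,000 jobs

print(f"Net job categories: {categories_net:+d}")    # Net job categories: -2
print(f"Net jobs (headcount): {headcount_net:+,d}")  # Net jobs (headcount): +20,000
```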

There is more than one method or approach for answering the foregoing questions; here are but two of the most germane:

—HISTORICAL ANALYSIS: One method is to research and evaluate each technology developed over the course of human history (or over some shorter period), one at a time (or in combination), and trace the consequences and evidence for them, in terms of estimated numbers of job categories and/or actual job openings (even if not filled) destroyed or created, directly or indirectly, by those technologies.

How to Do a Historical Job Head Count?

Clearly, with respect to the totality of human history, including recent and future A.I. and robot technology history, a historical worker head count is going to be impossible, even with some rigorous, unambiguous concept of “job”.

For example, try to count the number of “jobs” created by the technology used to build the pyramids on the backs and shoulders of workers who may or may not have been paid in cash, in expanded rights (e.g., release from slavery), or through barter.

Then try to estimate the number of those employment alternatives, losses and gains, which, obviously, will be impossible to determine historically with any accuracy or precision.

Would we, with respect to any time frame, even within the last decade, choose or be able to count only those who were unable to ever find another job of any sort after losing theirs to some technology?

No, not given modern statistical methodology, which excludes those who have given up the search for a job because of a lack of success and dropped out of the workforce for that reason alone.

What Time Frame?

Also, why reckon technology-induced job losses and gains only in one-year or, as the cited study did, in single-decade chunks? If a given technology or combination of technologies directly leads to the creation of new job categories and jobs only after a decade or two, shouldn’t that count as job creation, despite the wait and the net job destruction over the shorter term?

As a historical research methodology, that seems seriously arbitrary, narrow and therefore flawed, despite any short-term utility.

If we are generous and accept delays of decades, what counts as clear proof of job destruction by A.I. and robots over the past decade would have to be reassessed at the end of the next, if the same technologies that destroyed jobs in the past decade end up creating even more in the next one. As a test case, consider the wheel and try to determine which time frame to adopt for assessing its net job destruction or creation impact.
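To see how arbitrary that choice can be, consider a toy tally, in Python and with purely hypothetical figures, in which the assessment window alone flips the verdict on one and the same technology:

```python
# Toy tally (purely hypothetical figures): net jobs attributed to a single
# technology, year by year. Losses arrive early; spin-off creation arrives late.
net_jobs_by_year = [-900, -700, -500, -400, -300, -200, -100, 0, 100, 200,     # decade 1
                    400, 800, 1200, 1600, 2000, 2400, 2800, 3200, 3600, 4000]  # decade 2

def verdict(window_years: int) -> str:
    net = sum(net_jobs_by_year[:window_years])
    label = "net job destroyer" if net < 0 else "net job creator"
    return f"{window_years}-year window: {net:+,d} jobs -> {label}"

print(verdict(10))  # 10-year window: -2,800 jobs -> net job destroyer
print(verdict(20))  # 20-year window: +19,200 jobs -> net job creator
```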

A more generous time frame approach could invalidate any extrapolation from the job destruction chart cited above—which is a possibility that MIT Sloan Professor of Management Erik Brynjolfsson himself has said he hopes will be the case.

Easier Job Category Count?

More promising, although less revealing, than a job count would be a count of destroyed or created job categories, allowing, in the extreme case, that job categories were created but not always filled with openings and workers (much as is the case with some modern high-skill jobs that go begging for want of qualified workers, and with job categories that lack essential infrastructure, e.g., lunar mining by humans).

Hence, creation of a job category may count for little, if the jobs are not posted and filled.

But a job category count would also be problematic, even merely as a concept (as opposed to the more daunting task of actually counting them). One example will suffice to explain why.

Consider the creation of the first arrow: probably just a pointy stick (propelled by a flexible branch and a taut, strong strand of vine), neither perfectly straight nor nail-sharp at one end, and featherless (unfletched).

From the perspective of conceptualizing and counting job-category creation, did the following job categories come into existence at the same time as awareness of the function and structure of the arrow shaft, tens of thousands of years ago? Stick sharpener, stone arrowhead maker, titanium arrowhead maker, electric lathe operator (for smoother, straighter arrows), quiver maker, fletcher (maker of feathered arrows), leather bowstring maker, nylon bowstring maker, hand-guard maker or curare extractor (for making poison-tipped arrowheads).

If we exclude job categories that depended on support technologies and workers that did not exist at the time an earlier technology, such as the early bow and arrow, was created (e.g., the electric lathe and its operator), we will limit our vision and concept of job-category creation and destruction.

That’s because we will be logically forced to exclude forever many new job categories that a technology eventually makes possible, if not through direct derivation then in association with later technologies, while counting only those job categories that did not depend on new technology or on a delayed application of imagination.

Specifically, we will be forced to count only pre-existing job categories that the new technology more or less simultaneously filled with available workers, e.g., paid warrior or archer, in the case of the longbow and its arrow technologies.

The crippling problem with this conceptual historical approach is that, by definition, no technology that only much later inspired, or combined with, additional technologies that themselves created (un)filled job categories could be counted as a job-category-creating technology (as an offset to jobs destroyed).

For example, the earliest computer technologies could not be credited with contributing to the creation of the historically much later job category of quantum computing specialist (because it was not a co-existing or even imagined job category at the dawn of the computer age).

If we remove this hamstringing restriction and allow later spin-off jobs and their categories to count as “job creation”, we will then face the challenge of setting the upper limit of the historical period acceptable for reckoning that a given technology will have created more jobs or job categories than it destroyed, or vice versa.

—LOGICAL ANALYSIS: Of course, historical employment analysis requires extensive logical analysis of methodology, concepts, statistics, documents, relics, excavations, etc. But a case can be made for some form of purely logical analysis and evaluation of the claim that technology in general or in specific instances (e.g., A.I.), at specific times and places, destroys more jobs than it creates (or, again, vice versa).

That is to say, without regard to recorded history or archaeology, it should be possible to have a logically informed, insightful and useful perspective on the question merely by virtue of thinking logically and otherwise clearly.

Better vs. Different Technologies

As an initial example, consider the claimed relevance of the purely logical, non-historical distinction between “better” and “different” technologies.

From a logical point of view, it may stand to reason that anything that is better in terms of productivity (including but not limited to an advanced technology) is more likely to destroy jobs than a merely different technology is, if only because “different” doesn’t essentially or automatically entail the immediate or eventual displacement or elimination of workers.

But then, the same can be true of “better”, e.g., improved medical devices for surgeons, many of which do not jeopardize any given surgeon’s job or job category and may actually increase confidence in and therefore demand for them.

Naturally, the looms of the industrial revolution come to mind as an illustration of “better” productivity; they sparked mass textile-worker and Luddite protests, riots and attacks on machinery in 19th-century England.

That technology apparently turned out to be better in a second sense, in addition to technological processing superiority: It has been claimed that although 98% of weavers lost their jobs, other employment in textiles quadrupled because of the increased demand created by the cheaper, mass-produced cloth. (Presumably the quadrupling was of a very sizable pre-existing textile workforce and not of a handful of workers.)

As a case of harmlessly “different” technological advance, we need only recall the technologies of the 1950s hula hoop craze, which didn’t displace mass gyrations within iron barrel hoops or destroy barrel-hoop-related jobs, because it is unlikely that barrel hoops were ever that immensely and widely popular as toys, if they were ever popular at all.

On the other hand, the laws of supply and demand and of human nature suggest that even “different” can cause job destruction through a substitution effect, e.g., the invention of television as a destroyer of some book shop jobs (as people switch from reading to watching).

Of course, it can also be argued that if TV technology destroyed some book seller jobs, it created TV-related sales jobs for some book sellers or other people (not to mention manufacturing, design, etc., jobs for many others).

It could also be argued, from a purely logical point of view, that if TV replaced books and some of the retail bookselling and manufacturing jobs, it must, at some more abstract or emotional level of interpretation and in some sense, represent a “better” technology or associated activity, if only because, logically speaking and by definition, any reflective choice between two things implies that the one chosen is the better specific choice, even if it is not representative of a generally better category.

Hence, the millions who have switched in part or entirely from reading to TV have communicated the message that TV technology is better than book technology, because they believe watching is better than reading.

From such considerations, a logical point and caution emerges: In an analysis of technology-induced job destruction or creation, the logical, abstract distinction between “better” and “different” will not automatically constitute a powerful tool for research into and differentiation of technologies that do (not) destroy more jobs or job categories than they create.

Solo and Joint Job Destroyers

There are similar research merits in examining the relevance of the logical distinction between “joint” and “individual” sufficiency with regard to job (category) destruction and creation. From a purely logical point of view, causes can be either individually sufficient (just one is enough to produce a change) or jointly sufficient (enough only in combination with others).

Consider the current and clear example of robotics. Contrary to the demonization of the robot as job destroyer, the fact is that robots are getting a lot of help from other technologies, e.g., component technologies, in accomplishing this.

There are hundreds, thousands, perhaps countless technologies that are “enabler technologies”, without which robots would never have evolved past plastic action figures, e.g., sensor, alloy, gyroscopic, wiring, gear, joint, electrical and memory technologies. So, while pointing a human finger of blame at robots, blame these enablers too, since, without them, a lot of jobs would (have) exist(ed) much longer.

But you never see headlines screaming “Gyroscope technologies are destroying jobs!” Why? Because they are not solely responsible. But then, neither are the robots and A.I. software (even if considered as two sides of the same menace).

This logical distinction between sole and joint responsibility has implications from a policy standpoint. For example, some future global council may choose to ban some components or applications of component technologies, rather than banning what are perceived as job-destroying robots per se.

That helpful application of logic would have brought to a halt a frenzied mob’s pursuit of the Frankenstein monster, and gotten them to wheel around and redirect the pitchforks and torches in the direction of the company that supplied the animating generator, as a critical, if not colluding enabler. From that point, all could have agreed to try to find a technological workaround to deal with the A.I. Frankenstein monster’s “glitches”.

Ditto for the robots: Even if they are destroying more jobs than they create, the smarter thing to do, rather than eliminate or otherwise resist them, would be to examine all contributing technologies in order to isolate, and especially to modify or replace, the ones that result in a net loss of jobs.

One way to accomplish that is to incorporate design features and capabilities that enable robots to create jobs as offsets to their job destructiveness.

For example, ponder the burgeoning cobot industry: the manufacture of robots that, for now, mostly “assist” humans on a collaborative, cooperative basis. They could easily be designed not only to assist, but also to require a human operator or some (or many) interfacing human workers, and not as a featherbedding band-aid alternative to unemployment, but as a productive and essential harnessing of human talent.

Future robotic medical systems, such as the da Vinci surgical platform (which has its limitations and critics), come to mind. These may not only allow, but also require, larger teams of medical specialists, e.g., an endocrinologist, anesthesiologist, neurologist and other monitors or observers, to utilize the robotic system and the data it could generate in real time during surgery, without themselves operating it.

Both historical and logical analysis suggest that we need to more closely and carefully reflect on our fears and hopes regarding the net employment effects of A.I., robotic and other automation technologies.

We should also strive to blend the facts, trends and claims of history with the concepts of logical analysis to create a well-balanced, fully informed and aware perspective.

For example, before we surrender to fear and protest job-destroying A.I., robotic and other technologies, we should look at and speculate on their longer-term implications, while we simultaneously explore cobot innovations and (re)designs that make them not only “user-friendly”, but also “user-dependent”, like a Frankenstein monster tweaked to help villagers manufacture something useful.

Like torches and pitchforks.

By Michael Moffa