You’ve heard of it, for sure: the report that started it all. In September 2013, researchers Carl Benedikt Frey and Michael Osborne from Oxford University rocked the techno-optimistic boat by publishing their paper “The Future of Employment: How Susceptible Are Jobs to Computerisation?”. Their verdict: 47% of American jobs were at risk of computerization. A stroke of genius, since artificial intelligence was not yet the media darling it is today; the study kick-started many conversations about the relationship between technological progress and employment.
Frey and Osborne’s approach consisted of assigning each of the 702 occupations listed in the O*NET administrative classification a probability of computerization – between 0 and 1 – over a one-to-two-decade time horizon. The process relied on machine-learning experts, who hand-labeled 70 occupations, before the assessment was extrapolated to all the remaining categories… by a machine-learning model!
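This two-step pattern – experts label a small subset, a model propagates labels to the rest – can be sketched as follows. The real study used a Gaussian process classifier on O*NET feature variables; the 1-nearest-neighbour stand-in, the 2-D features and the probability values below are all toy illustrations, not the study’s data.

```python
# Hypothetical sketch of "label a few, extrapolate to the rest":
# experts assign a computerization probability to a handful of
# occupations, and a simple 1-nearest-neighbour rule propagates
# those labels to unlabeled occupations by feature similarity.
def nearest_label(features, labeled):
    """Return the expert label of the closest labeled occupation."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: sq_dist(features, item[0]))[1]

# Toy 2-D "O*NET-style" features with expert-assigned probabilities.
expert_labeled = [
    ((0.1, 0.9), 0.05),  # e.g. a creative, social occupation
    ((0.8, 0.2), 0.95),  # e.g. a routine, codifiable occupation
]

# Extrapolate to an unlabeled occupation.
p = nearest_label((0.75, 0.3), expert_labeled)  # → 0.95
```

The actual extrapolation step is of course far more sophisticated, but the structure – a small expert-labeled training set driving predictions for the full occupation list – is the same.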
In the end, Frey and Osborne divided all American jobs into 3 categories:
- 33% of workers with a low probability of computerization (< 0.3),
- 19% with a medium probability (between 0.3 and 0.7),
- 47% with a high probability (> 0.7).
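The three bands above amount to a simple binning of each occupation’s probability. A minimal sketch, assuming the 0.3 and 0.7 thresholds from the text (how the study handled values exactly on a boundary is our assumption here):

```python
# Bin a computerization probability into the three risk bands
# used by Frey and Osborne (thresholds 0.3 and 0.7).
def risk_band(p: float) -> str:
    if p < 0.3:
        return "low"
    elif p <= 0.7:  # boundary handling assumed, not from the study
        return "medium"
    else:
        return "high"

# Toy probabilities, not actual study values.
probs = [0.1, 0.45, 0.95, 0.72, 0.28]
bands = [risk_band(p) for p in probs]
# → ["low", "medium", "high", "high", "low"]
```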
4 years later: what happened to high-risk occupations?
4 years after the publication of this pioneering study, we wanted to look at what had happened to the high-risk occupations: those with a computerization probability higher than 0.9, that is 172 occupations out of the 702 that were initially examined.
We compared US employment statistics per occupation in 2013 (the figures used by Frey and Osborne) with those from 2016 (the latest available to us).
Verdict: no massive job destruction to be found. On the contrary: overall, the number of jobs in these occupations increased by 4.4% in 3 years. Out of 172 varied occupations, 57% were on the rise.
If one compares the employment growth rates of these occupations with the growth rate of aggregate employment in the United States, in order to control for the momentum of the economy as a whole, the picture is more nuanced: only 38% of the occupations with a very high probability of computerization saw their head count grow faster than employment at the national, aggregate level.
Nevertheless, the employment growth rate of the 172 studied occupations (+4.4%) and that of total US employment (+5.9%) are quite close – far from the fears proclaimed so many times.
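The comparison in the last two paragraphs boils down to one test per occupation: did its head count grow faster than aggregate US employment over the same period? A minimal sketch (the head-count figures below are illustrative, not the underlying data):

```python
# An occupation "outgrew the economy" if its employment growth rate
# exceeded the aggregate US employment growth rate over the period.
def growth_rate(start: float, end: float) -> float:
    """Relative change in head count between two dates."""
    return (end - start) / start

# Illustrative numbers reproducing the rates quoted in the text.
aggregate = growth_rate(100.0, 105.9)    # +5.9%: total US employment
occupation = growth_rate(50.0, 52.2)     # +4.4%: the 172 high-risk jobs

outgrew_economy = occupation > aggregate  # False: close, but below trend
```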
Winners and losers among occupations
Let’s look at the top 10 occupations at high risk of computerization… that grew the fastest, and the top 10 occupations that declined most sharply – focusing on occupations that employed more than 100,000 workers in 2013:
Rising occupations are to be found in the hospitality industry, while the declining ones involve clerks and other administrative-support positions – that is to say, those collecting documents, recording information, etc. The cause of these setbacks is undoubtedly computerization, as these occupations consist of processing and transmitting information.
However, some jobs of this type are also expanding faster than average US employment growth: general secretaries (that is, outside the legal and medical fields) or claims-processing clerks, for example.
How can we explain the gap?
First of all, it should be emphasized that in economic studies it is always difficult to isolate the influence of a specific variable from all the others. The comparison of actual employment trends with Frey and Osborne’s predictions is therefore necessarily “polluted” by factors other than computer technologies, such as consumer tastes (which is why manicurist and pedicurist jobs jumped by 27% in 3 years).
Then, Frey and Osborne made a point of specifying in their study that they were assessing the risks of computerization of various occupations, without making any prediction about the number of jobs that would actually disappear. Indeed, the possibility of computerizing a given occupation only leads to its disappearance if the relevant technologies and processes are actually implemented. For instance, if a company’s information system is too old and its data scattered, it will be very difficult to automate an occupation before upgrading this basic but essential infrastructure.
That being said, several biases of the 2013 study are still questionable:
- The authors assessed the computerization of occupations, not of the various tasks that make up each occupation – and not all the tasks of a given occupation are necessarily computerizable. With a task-based approach, 3 OECD economists found that just 9% of jobs within the OECD are at risk of automation.
- Perhaps the question asked of machine-learning experts for job evaluation was not relevant – “Can the tasks of this occupation be sufficiently specified, provided that a large volume of data is available, so as to be carried out by cutting-edge computer-controlled equipment?” This question leans more toward theoretical than practical considerations, which explains why occupations that are intuitively difficult to computerize were rated as very high risk. As a result, occupations dealing with bits of information and those dealing with atoms were placed on an equal footing. Consider roofers, for instance, who were assigned a 0.9 probability of computerization. The various steps that make up such a job can easily be formalized, but creating an efficient robot capable of grasping, cutting and laying insulation and roofing materials will take many years.
- Rather than asking machine-learning experts to give an opinion on occupations from reading job descriptions, the authors could have interviewed workers directly.
Conclusion: “prediction without iteration is hallucination”
With their 2013 study, Frey and Osborne were the first to sense that job automation deserved thorough investigations, and their hard work to formalize the computerization potential of numerous occupations was incredibly interesting – and it would be hard for us to replicate it!
But in a sense their endeavor proved too successful: 4 years later, many would-be AI observers still trot out the same figures, without an ounce of critical thinking. This is why we wanted to take a step back.
For as with any good machine-learning model, it is not the prediction in itself that matters, but its accuracy by comparison with what actually occurs. Analyzing the results of past studies is therefore of paramount importance if we want to update our forecasting models so as to predict the future with greater relevance.
To paraphrase Thomas Edison’s famous quote, a prediction without iteration is a hallucination. And our times strongly favor visionaries – of the mystic kind.
Do you want to assess AI’s impact on employment in your company through a workshop? Contact our AI Lab