Here are two readings sent by Ali & Matt, longtime readers of SimoleonSense. The first piece (via WSJ) covers the dietary crusade against fatty foods pursued by health regulators & nonprofits at the expense of the American public. The second reading is a paper from the New England Journal of Medicine addressing myths, presumptions, and facts about obesity.
“Saturated fat does not cause heart disease”—or so concluded a big study published in March in the journal Annals of Internal Medicine.
“The new study’s conclusion shouldn’t surprise anyone familiar with modern nutritional science, however. The fact is, there has never been solid evidence for the idea that these fats cause disease. We only believe this to be the case because nutrition policy has been derailed over the past half-century by a mixture of personal ambition, bad science, politics and bias.”
“One consequence is that in cutting back on fats, we are now eating a lot more carbohydrates—at least 25% more since the early 1970s. Consumption of saturated fat, meanwhile, has dropped by 11%, according to the best available government data. Translation: Instead of meat, eggs and cheese, we’re eating more pasta, grains, fruit and starchy vegetables such as potatoes. Even seemingly healthy low-fat foods, such as yogurt, are stealth carb-delivery systems, since removing the fat often requires the addition of fillers to make up for lost texture—and these are usually carbohydrate-based.”
“The problem is that carbohydrates break down into glucose, which causes the body to release insulin—a hormone that is fantastically efficient at storing fat. Meanwhile, fructose, the main sugar in fruit, causes the liver to generate triglycerides and other lipids in the blood that are altogether bad news. Excessive carbohydrates lead not only to obesity but also, over time, to Type 2 diabetes and, very likely, heart disease.”
“The real surprise is that, according to the best science to date, people put themselves at higher risk for these conditions no matter what kind of carbohydrates they eat. Yes, even unrefined carbs.”
“The reality is that fat doesn’t make you fat or diabetic. Scientific investigations going back to the 1950s suggest that actually, carbs do.”
“The second big unintended consequence of our shift away from animal fats is that we’re now consuming more vegetable oils….”
“This shift seemed like a good idea at the time, but it brought many potential health problems in its wake. In those early clinical trials, people on diets high in vegetable oil were found to suffer higher rates not only of cancer but also of gallstones. And, strikingly, they were more likely to die from violent accidents and suicides.”
“Yet paradoxically, the drive to get rid of trans fats has led some restaurants and food manufacturers to return to using regular liquid oils—with the same long-standing oxidation problems. These dangers are especially acute in restaurant fryers, where the oils are heated to high temperatures over long periods.”
“The past decade of research on these oxidation products has produced a sizable body of evidence showing their dramatic inflammatory and oxidative effects, which implicates them in heart disease and other illnesses such as Alzheimer’s. Other newly discovered potential toxins in vegetable oils, called monochloropropane diols and glycidol esters, are now causing concern among health authorities in Europe.”
“Cutting back on saturated fat has had especially harmful consequences for women, who, due to hormonal differences, contract heart disease later in life and in a way that is distinct from men.”
“Sticking to these guidelines has meant ignoring growing evidence that women on diets low in saturated fat actually increase their risk of having a heart attack. The “good” HDL cholesterol drops precipitously for women on this diet (it drops for men too, but less so). The sad irony is that women have been especially rigorous about ramping up on their fruits, vegetables and grains, but they now suffer from higher obesity rates than men, and their death rates from heart disease have reached parity.”
Reading 2: Myths, Presumptions, and Facts about Obesity
“When the public, mass media, government agencies, and even academic scientists espouse unsupported beliefs, the result may be ineffective policy, unhelpful or unsafe clinical and public health recommendations, and an unproductive allocation of resources….We review seven myths about obesity, along with the refuting evidence.”
Myth 1: Small sustained changes in energy intake or expenditure will produce large, long-term weight changes.
“Recent studies have shown that individual variability affects changes in body composition in response to changes in energy intake and expenditure,7 with analyses predicting substantially smaller changes in weight (often by an order of magnitude across extended periods) than the 3500-kcal rule does.5,7 For example, whereas the 3500-kcal rule predicts that a person who increases daily energy expenditure by 100 kcal by walking 1 mile (1.6 km) per day will lose more than 50 lb (22.7 kg) over a period of 5 years, the true weight loss is only about 10 lb (4.5 kg),6 assuming no compensatory increase in caloric intake, because changes in mass concomitantly alter the energy requirements of the body.”
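The gap between the static rule and the dynamic prediction is easy to reproduce with back-of-the-envelope arithmetic. Here is a minimal sketch; the 22 kcal/day-per-kg maintenance cost is my own illustrative assumption, not a figure from the paper:

```python
# Static 3500-kcal rule vs. a dynamic energy-balance sketch.
# Assumption (illustrative, not from the paper): each kg of body
# mass costs roughly 22 kcal/day to maintain, so a sustained
# 100 kcal/day deficit eventually settles at ~100/22 kg lost.

KCAL_PER_LB = 3500          # the classic static rule
DEFICIT = 100               # kcal/day from walking ~1 mile per day
DAYS = 5 * 365
LB_PER_KG = 2.205

# Static rule: every 3500 kcal of cumulative deficit = 1 lb, forever.
static_loss_lb = DEFICIT * DAYS / KCAL_PER_LB

# Dynamic sketch: maintenance cost falls as weight falls, so the
# deficit shrinks and weight approaches a new equilibrium instead.
KCAL_PER_KG_PER_DAY = 22    # assumed maintenance cost per kg
equilibrium_loss_lb = (DEFICIT / KCAL_PER_KG_PER_DAY) * LB_PER_KG

print(f"static rule: {static_loss_lb:.0f} lb")          # ~52 lb
print(f"dynamic sketch: {equilibrium_loss_lb:.0f} lb")  # ~10 lb
```

The two numbers line up with the paper's example: the static rule overpredicts by roughly a factor of five because it ignores that a lighter body burns fewer calories.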
Myth 2: Setting realistic goals for weight loss is important, because otherwise patients will become frustrated and lose less weight.
“Although this is a reasonable hypothesis, empirical data indicate no consistent negative association between ambitious goals and program completion or weight loss.8 Indeed, several studies have shown that more ambitious goals are sometimes associated with better weight-loss outcomes. Furthermore, two studies showed that interventions designed to improve weight-loss outcomes by altering unrealistic goals resulted in more realistic weight-loss expectations but did not improve outcomes.”
Myth 3: Large, rapid weight loss is associated with poorer long-term weight-loss outcomes, as compared with slow, gradual weight loss.
“Within weight-loss trials, more rapid and greater initial weight loss has been associated with lower body weight at the end of long-term follow-up….Although it is not clear why some obese persons have a greater initial weight loss than others do, a recommendation to lose weight more slowly might interfere with the ultimate success of weight-loss efforts.”
Myth 4: It is important to assess the stage of change or diet readiness in order to help patients who request weight-loss treatment.
“Readiness does not predict the magnitude of weight loss or treatment adherence among persons who sign up for behavioral programs or who undergo obesity surgery.”
Myth 5: Physical-education classes, in their current form, play an important role in reducing or preventing childhood obesity.
“Physical education, as typically provided, has not been shown to reduce or prevent obesity. Findings in three studies that focused on expanded time in physical education12 indicated that even though there was an increase in the number of days children attended physical education classes, the effects on body-mass index (BMI) were inconsistent across sexes and age groups.”
Myth 6: Breast-feeding is protective against obesity.
“A World Health Organization (WHO) report states that persons who were breast-fed as infants are less likely to be obese later in life and that the association is “not likely to be due to publication bias or confounding.”14 Yet the WHO, using Egger’s test and funnel plots, found clear evidence of publication bias in the published literature it synthesized.15 Moreover, studies with better control for confounding (e.g., studies including within-family sibling analyses) and a randomized, controlled trial involving more than 13,000 children who were followed for more than 6 years16 provided no compelling evidence of an effect of breast-feeding on obesity.”
Myth 7: A bout of sexual activity burns 100 to 300 kcal for each participant.
“The energy expenditure of sexual intercourse can be estimated by taking the product of activity intensity in metabolic equivalents (METs),18 the body weight in kilograms, and time spent. For example, a man weighing 154 lb (70 kg) would, at 3 METs, expend approximately 3.5 kcal per minute (210 kcal per hour) during a stimulation and orgasm session. This level of expenditure is similar to that achieved by walking at a moderate pace (approximately 2.5 miles [4 km] per hour). Given that the average bout of sexual activity lasts about 6 minutes,19 a man in his early-to-mid-30s might expend approximately 21 kcal during sexual intercourse. Of course, he would have spent roughly one third that amount of energy just watching television, so the incremental benefit of one bout of sexual activity with respect to energy expended is plausibly on the order of 14 kcal.”
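The arithmetic behind the 21-kcal and 14-kcal figures is just METs × body weight × hours (by convention, 1 MET is roughly 1 kcal per kg of body weight per hour). A quick sketch of that calculation:

```python
# Energy expenditure = METs x body weight (kg) x time (hours),
# using the convention that 1 MET ~ 1 kcal per kg per hour.

def kcal_burned(mets: float, weight_kg: float, minutes: float) -> float:
    return mets * weight_kg * (minutes / 60)

weight_kg = 70                                 # the 154-lb man in the example
sex_kcal = kcal_burned(3.0, weight_kg, 6)      # ~21 kcal at 3 METs for 6 min
tv_kcal = kcal_burned(1.0, weight_kg, 6)       # ~7 kcal at rest for 6 min

print(f"bout: {sex_kcal:.0f} kcal, incremental over TV: {sex_kcal - tv_kcal:.0f} kcal")
```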
Presumptions about Obesity
“Just as it is important to recognize that some widely held beliefs are myths so that we may move beyond them, it is important to recognize presumptions, which are widely accepted beliefs that have neither been proved nor disproved, so that we may move forward to collect solid data to support or refute them. Instead of attempting to comprehensively describe all the data peripherally related to each of the six presumptions shown in Table 2, we describe the best evidence.”
Things we know about with reasonable confidence.
“Our proposal that myths and presumptions be seen for what they are should not be mistaken as a call for nihilism. There are things we do know with reasonable confidence. Table 3 lists nine such facts and their practical implications for public health, policy, or clinical recommendations.”
I just finished reading Fred Turner’s book, From Counterculture to Cyberculture. Turner’s book explores the relationship between systems thinking/cybernetics/counterculture & Silicon Valley/digital utopianism.
Here is a quick synopsis - via The University of Chicago Press
In the early 1960s, computers haunted the American popular imagination. Bleak tools of the cold war, they embodied the rigid organization and mechanical conformity that made the military-industrial complex possible. But by the 1990s—and the dawn of the Internet—computers started to represent a very different kind of world: a collaborative and digital utopia modeled on the communal ideals of the hippies who so vehemently rebelled against the cold war establishment in the first place.
From Counterculture to Cyberculture is the first book to explore this extraordinary and ironic transformation. Fred Turner here traces the previously untold story of a highly influential group of San Francisco Bay–area entrepreneurs: Stewart Brand and the Whole Earth network. Between 1968 and 1998, via such familiar venues as the National Book Award–winning Whole Earth Catalog, the computer conferencing system known as WELL, and, ultimately, the launch of the wildly successful Wired magazine, Brand and his colleagues brokered a long-running collaboration between San Francisco flower power and the emerging technological hub of Silicon Valley. Thanks to their vision, counterculturalists and technologists alike joined together to reimagine computers as tools for personal liberation, the building of virtual and decidedly alternative communities, and the exploration of bold new social frontiers.
Shedding new light on how our networked culture came to be, this fascinating book reminds us that the distance between the Grateful Dead and Google, between Ken Kesey and the computer itself, is not as great as we might think.
If this sounds interesting, I would recommend watching this video about the book (here). Or, if you prefer a text-based introduction, read this piece via Edge.org, which serves as a decent prologue/background for Turner’s book.
Finally, watch Fred’s talk on the Burning Man Festival, where he “discusses his opinions on the social phenomenon of Burning Man and how he thinks the ideals of the festival apply to the marketplace that is evolving in our society, specifically in the Silicon Valley.”
If you enjoyed the book The Second Machine Age, or Marc Andreessen’s piece, Why Software Is Eating the World, take a look at this paper (by Frey & Osborne) – it identifies areas where technology is replacing human labor. This is a fairly long post which contains the following:
1. Research abstract
2. Curated excerpts
3. Ideas for investors, students, and entrepreneurs
“We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation’s probability of computerisation, wages and educational attainment. According to our estimates, about 47 percent of total US employment is at risk. We further provide evidence that wages and educational attainment exhibit a strong negative relationship with an occupation’s probability of computerisation.”
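For readers curious what a Gaussian process classifier looks like in practice, here is a toy sketch using scikit-learn. Everything below (the three features, the labels, and the two example occupations) is invented for illustration; the authors’ actual model is trained on detailed O*NET occupation data.

```python
# Toy Gaussian process classifier in the spirit of Frey & Osborne.
# Features and labels are synthetic; this is NOT their dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Pretend features per occupation: [manual dexterity, creativity, social intelligence]
X = rng.uniform(0, 1, size=(200, 3))
# Pretend label: occupations scoring low on all three bottlenecks are computerisable
y = (X.sum(axis=1) < 1.2).astype(int)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)

telemarketer = [[0.2, 0.1, 0.3]]   # hypothetical scores: low on every bottleneck
surgeon = [[0.9, 0.7, 0.8]]        # hypothetical scores: high on every bottleneck
print("telemarketer:", clf.predict_proba(telemarketer)[0, 1])  # high probability
print("surgeon:", clf.predict_proba(surgeon)[0, 1])            # low probability
```

The appeal of a GP classifier here is that it outputs a probability of computerisation for each occupation rather than a hard yes/no label, which is exactly what the 47-percent estimate aggregates over.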
Part 2: My Favorite Bits
“The impact of computerisation on labour market outcomes is well-established in the literature, documenting the decline of employment in routine intensive occupations – i.e. occupations mainly consisting of tasks following well-defined procedures that can easily be performed by sophisticated algorithms.”
“Arguably, this is because the manual tasks of service occupations are less susceptible to computerisation, as they require a higher degree of flexibility and physical adaptability (Autor, et al., 2003; Goos and Manning, 2007; Autor and Dorn, 2013).”
“At the same time, with falling prices of computing, problem-solving skills are becoming relatively productive, explaining the substantial employment growth in occupations involving cognitive tasks where skilled labour has a comparative advantage, as well as the persistent increase in returns to education (Katz and Murphy, 1992; Acemoglu, 2002; Autor and Dorn, 2013).”
Technological disruption and unemployment aren’t new topics.
“The concern over technological unemployment is hardly a recent phenomenon. Throughout history, the process of creative destruction, following technological inventions, has created enormous wealth, but also undesired disruptions. As stressed by Schumpeter (1962), it was not the lack of inventive ideas that set the boundaries for economic development, but rather powerful social and economic interests promoting the technological status quo.”
Here’s a historical example of innovation vs the status quo.
“William Lee, inventing the stocking frame knitting machine in 1589, hoping that it would relieve workers of hand-knitting. Seeking patent protection for his invention, he travelled to London where he had rented a building for his machine to be viewed by Queen Elizabeth I. To his disappointment, the Queen was more concerned with the employment impact of his invention and refused to grant him a patent….Most likely the Queen’s concern was a manifestation of the hosiers’ guilds fear that the invention would make the skills of its artisan members obsolete.5 The guilds’ opposition was indeed so intense that William Lee had to leave Britain.”
Historically, the innovation/employment dynamic played out as follows:
“Unless all individuals accept the ‘verdict’ of the market outcome, the decision whether to adopt an innovation is likely to be resisted by losers through non-market mechanism and political activism. Workers can thus be expected to resist new technologies, insofar that they make their skills obsolete and irreversibly reduce their expected earnings. The balance between job conservation and technological progress therefore, to a large extent, reflects the balance of power in society, and how gains from technological progress are being distributed.”
Attitudes shifted around the industrial revolution and societies began to favor innovation.
“There are at least two possible explanations for the shift in attitudes towards technological progress. First, after Parliamentary supremacy was established over the Crown, the property owning classes became politically dominant in Britain (North and Weingast, 1989). Because the diffusion of various manufacturing technologies did not impose a risk to the value of their assets, and some property owners stood to benefit from the export of manufactured goods, the artisans simply did not have the political power to repress them.”
“Second, inventors, consumers and unskilled factory workers largely benefited from mechanisation (Mokyr, 1990, p. 256 and 258). It has even been argued that, despite the employment concerns over mechanisation, unskilled workers have been the greatest beneficiaries of the Industrial Revolution (Clark, 2008).”
Let’s continue studying 19th century innovation.
“An important feature of nineteenth century manufacturing technologies is that they were largely “deskilling” – i.e. they substituted for skills through the simplification of tasks (Braverman, 1974; Hounshell, 1985; James and Skinner, 1985; Goldin and Katz, 1998). The deskilling process occurred as the factory system began to displace the artisan shop, and it picked up pace as production increasingly mechanized with the adoption of steam power (Goldin and Sokoloff, 1982; Atack, et al., 2008a).”
“Work that had previously been performed by artisans was now decomposed into smaller, highly specialised, sequences, requiring less skill, but more workers, to perform.”
“Together with developments in continuous-flow production, enabling workers to be stationary while different tasks were moved to them, it was identical interchangeable parts that allowed complex products to be assembled from mass produced individual components by using highly specialised machine tools to a sequence of operation.”
An example of this type of innovation would be the Ford Motor Company.
“Crucially, the new assembly line introduced by Ford in 1913 was specifically designed for machinery to be operated by unskilled workers (Hounshell, 1985, p. 239). Furthermore, what had previously been a one-man job was turned into a 29-man worker operation, reducing the overall work time by 34 percent (Bright, 1958).”
“The example of the Ford Motor Company thus underlines the general pattern observed in the nineteenth century, with physical capital providing a relative complement to unskilled labour, while substituting for relatively skilled artisans (James and Skinner, 1985; Louis and Paterson, 1986; Brown and Philips, 1986; Atack, et al., 2004).11 Hence, as pointed out by Acemoglu (2002, p. 7): “the idea that technological advances favor more skilled workers is a twentieth century phenomenon.” The conventional wisdom among economic historians, in other words, suggests a discontinuity between the nineteenth and twentieth century in the impact of capital deepening on the relative demand for skilled labour.”
By the late 19th century, a pattern of capital-skill complementarity had emerged.
“The modern pattern of capital-skill complementarity gradually emerged in the late nineteenth century, as manufacturing production shifted to increasingly mechanised assembly lines. This shift can be traced to the switch to electricity from steam and water-power which, in combination with continuous-process and batch production methods, reduced the demand for unskilled manual workers in many hauling, conveying, and assembly tasks, but increased the demand for skills (Goldin and Katz, 1998).”
“…while factory assembly lines, with their extreme division of labour, had required vast quantities of human operatives, electrification allowed many stages of the production process to be automated, which in turn increased the demand for relatively skilled blue-collar production workers to operate the machinery. In addition, electrification contributed to a growing share of white-collar nonproduction workers (Goldin and Katz, 1998).”
“Over the course of the nineteenth century, establishments became larger in size as steam and water power technologies improved, allowing them to adopt powered machinery to realize productivity gains through the combination of enhanced division of labour and higher capital intensity (Atack, et al., 2008a).”
Furthermore, changes in transportation also played an important role.
“The transport revolution lowered costs of shipping goods domestically and internationally as infrastructure spread and improved (Atack, et al., 2008b).”
“The market for artisan goods early on had largely been confined to the immediate surrounding area because transport costs were high relative to the value of the goods produced. With the transport revolution, however, market size expanded, thereby eroding local monopoly power, which in turn increased competition and compelled firms to raise productivity through mechanisation.”
“As establishments became larger and served geographically expended markets, managerial tasks increased in number and complexity, requiring more managerial and clerking employees (Chandler, 1977). This pattern was, by the turn of the twentieth century, reinforced by electrification, which not only contributed to a growing share of relatively skilled blue-collar labour, but also increased the demand for white-collar workers (Goldin and Katz, 1998), who tended to have higher educational attainment (Allen, 2001).”
Which brings us to what’s happening today.
“recent studies find that computers have caused a shift in the occupational structure of the labour market. Autor and Dorn (2013), for example, show that as computerisation erodes wages for labour performing routine tasks, workers will reallocate their labour supply to relatively low-skill service occupations. More specifically, between 1980 and 2005, the share of US labour hours in service occupations grew by 30 percent after having been flat or declining in the three prior decades.
Furthermore, net changes in US employment were U-shaped in skill level, meaning that the lowest and highest job-skill quartile expanded sharply with relative employment declines in the middle of the distribution.”
“The expansion in high-skill employment can be explained by the falling price of carrying out routine tasks by means of computers, which complements more abstract and creative services.”
“For example, text and data mining has improved the quality of legal research as constant access to market information has improved the efficiency of managerial decision-making – i.e. tasks performed by skilled workers at the higher end of the income distribution.
The result has been an increasingly polarised labour market, with growing employment in high-income cognitive jobs and low-income manual occupations, accompanied by a hollowing-out of middle-income routine jobs. This is a pattern that is not unique to the US and equally applies to a number of developed economies (Goos, et al., 2009).”
Now on to the potentially worrisome part.
“The reason why human labour has prevailed relates to its ability to adopt and acquire new skills by means of education (Goldin and Katz, 2009).”
“Yet as computerisation enters more cognitive domains this will become increasingly challenging (Brynjolfsson and McAfee, 2011).”
“For example, Beaudry, et al. (2013) document a decline in the demand for skill over the past decade, even as the supply of workers with higher education has continued to grow. They show that high-skilled workers have moved down the occupational ladder, taking on jobs traditionally performed by low-skilled workers, pushing low-skilled workers even further down the occupational ladder and, to some extent, even out of the labour force.”
“As robot costs decline and technological capabilities expand, robots can thus be expected to gradually substitute for labour in a wide range of low-wage service occupations, where most US job growth has occurred over the past decades (Autor and Dorn, 2013). This means that many low-wage manual jobs that have been previously protected from computerisation could diminish over time.”
“Computers will therefore be relatively productive to human labour when a problem can be specified – in the sense that the criteria for success are quantifiable and can readily be evaluated (Acemoglu and Autor, 2011). The extent of job computerisation will thus be determined by technological advances that allow engineering problems to be sufficiently specified, which sets the boundaries for the scope of computerisation.”
“Recent technological breakthroughs are, in large part, due to efforts to turn non-routine tasks into well-defined problems. Defining such problems is helped by the provision of relevant data…”
“data is required to specify the many contingencies a technology must manage in order to form an adequate substitute for human labour. With data, objective and quantifiable measures of the success of an algorithm can be produced, which aid the continual improvement of its performance relative to humans.”
“As such, technological progress has been aided by the recent production of increasingly large and complex datasets, known as big data.”
Specifically, which technological advances are likely to lead to human labor replacement?
“advances in fields related to Machine Learning (ML), including Data Mining, Machine Vision, Computational Statistics and other sub-fields of Artificial Intelligence (AI), in which efforts are explicitly dedicated to the development of algorithms that allow cognitive tasks to be automated.”
“In addition….the application of ML technologies in Mobile Robotics (MR), and thus the extent of computerisation in manual tasks.”
Why do these advances add value?
“The use of big data is afforded by one of the chief comparative advantages of computers relative to human labor: scalability. Little evidence is required to demonstrate that, in performing the task of laborious computation, networks of machines scale better than human labour (Campbell-Kelly, 2009). As such, computers can better manage the large calculations required in using large datasets. ML algorithms running on computers are now, in many cases, better able to detect patterns in big data than humans.”
“Computerisation of cognitive tasks is also aided by another core comparative advantage of algorithms: their absence of some human biases.”
“Advances in user interfaces also enable computers to respond directly to a wider range of human requests, thus augmenting the work of highly skilled labour, while allowing some types of jobs to become fully automated.”
Here are some examples of the types of advances that we are talking about:
“Fraud detection is a task that requires both impartial decision making and the ability to detect trends in big data. As such, this task is now almost completely automated (Phua, et al., 2010).”
“Oncologists at Memorial Sloan-Kettering Cancer Center are, for example, using IBM’s Watson computer to provide chronic care and cancer treatment diagnostics. Knowledge from 600,000 medical evidence reports, 1.5 million patient records and clinical trials, and two million pages of text from medical journals, are used for benchmarking and pattern recognition purposes.”
“Sophisticated algorithms are gradually taking on a number of tasks performed by paralegals, contract and patent lawyers (Markoff, 2011). More specifically, law firms now rely on computers that can scan thousands of legal briefs and precedents to assist in pre-trial research. A frequently cited example is Symantec’s Clearwell system, which uses language analysis to identify general concepts in documents, can present the results graphically, and proved capable of analysing and sorting more than 570,000 documents in two days (Markoff, 2011).”
“…the cities of Doha, São Paulo, and Beijing use sensors on pipes, pumps, and other water infrastructure to monitor conditions and manage water loss, reducing leaks by 40 to 50 percent. In the near future, it will be possible to place inexpensive sensors on light poles, sidewalks, and other public property to capture sound and images, likely reducing the number of workers in law enforcement (MGI, 2013).”
What about labor replacement in more nuanced fields?
“Moreover, a company called SmartAction now provides call computerisation solutions that use ML technology and advanced speech recognition to improve upon conventional interactive voice response systems, realising cost savings of 60 to 80 percent over an outsourced call center consisting of human labour (CAA, 2012). ”
“Although the extent of these developments remains to be seen, estimates by MGI (2013) suggests that sophisticated algorithms could substitute for approximately 140 million full-time knowledge workers worldwide.”
But technology is still unable to replace labor when it comes to several types of tasks, including:
“1. Perception & Manual Tasks - Robots are still unable to match the depth and breadth of human perception. While basic geometric identification is reasonably mature, enabled by the rapid development of sophisticated sensors and lasers, significant challenges remain for more complex perception tasks, such as identifying objects and their properties in a cluttered field of view…..The difficulty of perception has ramifications for manipulation tasks, and, in particular, the handling of irregular objects, for which robots are yet to reach human levels of aptitude….Manipulation is also limited by the difficulties of planning out the sequence of actions required to move an object from one place to another. There are yet further problems in designing manipulators that, like human limbs, are soft, have compliant dynamics and provide useful tactile feedback.”
“2. Creative Intelligence Tasks - The challenge here is to find some reliable means of arriving at combinations that “make sense.” For a computer to make a subtle joke, for example, would require a database with a richness of knowledge comparable to that of humans, and methods of benchmarking the algorithm’s subtlety…..the principal obstacle to computerising creativity is stating our creative values sufficiently clearly that they can be encoded in a program (Boden, 2003). Moreover, human values change over time and vary across cultures. Because creativity, by definition, involves not only novelty but value, and because values are highly variable, it follows that many arguments about creativity are rooted in disagreements about value. Thus, even if we could identify and encode our creative values, to enable the computer to inform and monitor its own activities accordingly, there would still be disagreement about whether the computer appeared to be creative.”
“3. Social Intelligence Tasks – While algorithms and robots can now reproduce some aspects of human social interaction, the real-time recognition of natural human emotion remains a challenging problem, and the ability to respond intelligently to such inputs is even more difficult. Even simplified versions of typical social tasks prove difficult for computers, as is the case in which social interaction is reduced to pure text.”
According to the authors, to predict which jobs are likely to be displaced we need to think of jobs as rich combinations of three traits: perception & manipulation, creative intelligence, & social intelligence. To the degree that jobs score low in all three areas, they are likely to be automated (over time).
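That three-bottleneck framing can be sketched as a simple scoring heuristic. The occupations, scores, and threshold below are all invented for illustration; they are not values from the paper:

```python
# A job is automation-prone (in this sketch) only if it scores low
# on ALL three engineering bottlenecks the authors identify.
BOTTLENECKS = ("perception_manipulation", "creative_intelligence", "social_intelligence")

def automation_prone(scores: dict, threshold: float = 0.4) -> bool:
    return all(scores[b] < threshold for b in BOTTLENECKS)

# Hypothetical scores, not from the paper:
jobs = {
    "telemarketer": {"perception_manipulation": 0.2,
                     "creative_intelligence": 0.1,
                     "social_intelligence": 0.3},
    "surgeon":      {"perception_manipulation": 0.9,
                     "creative_intelligence": 0.6,
                     "social_intelligence": 0.8},
}
for name, scores in jobs.items():
    print(name, automation_prone(scores))  # telemarketer True, surgeon False
```

A single high bottleneck score is enough to shield a job in this sketch, which matches the authors' intuition that surgeons (perception and manipulation) or therapists (social intelligence) resist automation even when parts of their work are routine.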
The likelihood of different jobs being replaced by computers as technology advances:
“Seen from this perspective, our findings could be interpreted as two waves of computerisation, separated by a “technological plateau.”
“In the first wave, we find that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are likely to be substituted by computer capital.”
“[In the second wave,] algorithms for big data are already rapidly entering domains reliant upon storing or accessing information, making it equally intuitive that office and administrative support occupations will be subject to computerisation. The computerisation of production occupations simply suggests a continuation of a trend that has been observed over the past decades, with industrial robots taking on the routine tasks of most operatives in manufacturing.”
“More surprising, at first sight, is that a substantial share of employment in services, sales and construction occupations exhibit high probabilities of computerisation. Yet these findings are largely in line with recent documented technological developments.”
“Second, while it seems counterintuitive that sales occupations, which are likely to require a high degree of social intelligence, will be subject to a wave of computerisation in the near future, high risk sales occupations include, for example, cashiers, counter and rental clerks, and telemarketers.”
“Third, prefabrication will allow a growing share of construction work to be performed under controlled conditions in factories, which partly eliminates task variability.”
“According to our estimates, however, this wave of automation will be followed by a subsequent slowdown in computers for labour substitution, due to persisting inhibiting engineering bottlenecks to computerisation.”
But not everything will be computerized (just yet).
“… generalist occupations requiring knowledge of human heuristics, and specialist occupations involving the development of novel ideas and artifacts, are the least susceptible to computerisation.”
“Our predictions are thus intuitive in that most management, business, and finance occupations, which are intensive in generalist tasks requiring social intelligence, are largely confined to the low risk category. The same is true of most occupations in education, healthcare, as well as arts and media jobs.”
“The low susceptibility of engineering and science occupations to computerisation, on the other hand, is largely due to the high degree of creative intelligence they require.”
“Rather than reducing the demand for middle-income occupations, which has been the pattern over the past decades, our model predicts that computerisation will mainly substitute for low-skill and low-wage jobs in the near future. By contrast, high-skill and high-wage occupations are the least susceptible to computer capital.”
“Our findings thus imply that as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence. For workers to win the race, however, they will have to acquire creative and social skills.”
If you are an entrepreneur: creating value via technological disruption is your main mission. Of course, first you have to identify opportunities to exploit (areas where you can apply technology, or pain points that people will pay you to remove).
I suggest flipping to the appendix of this paper. It lists 700 jobs/careers and ranks them according to the probability that they will be computerized. Create a spreadsheet with each career and go meet senior people in those industries and ask them what types of products or services would help them improve their business(es). Ideally you will already have experience in one or more areas that are likely to be computerizable.
If you are an equity investor: First, take a look at this list of companies in the automation/robotics space.
If you are thinking about investing in automation/robotics companies like Fanuc, Kuka, Durr, or these, then competitive/innovative threats are likely to appear where technological bottlenecks exist. I would make sure to ask senior management teams how they are spending their R&D budgets to tackle these types of issues, and whether there are competitors with strong competencies in these areas (as they could be acquisition candidates).
Also, a good idea would be to plot the cost of low-skill labor (which you can map against occupations from the appendix) in various areas around the world to understand potential adoption/demand for automation/robotics products and services.
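A minimal sketch of that screen, under invented assumptions: compare annual low-skill labor cost per worker across regions to a hypothetical annualized cost of an equivalent robot. Every figure below is made up for illustration; in practice you would source wages from ILO or BLS data and robot costs from vendor quotes.

```python
# Rough labour-cost screen: regions where low-skill wages already exceed the
# annualised cost of a robot are the likeliest early adopters of automation.
# All numbers are hypothetical placeholders.

robot_annual_cost = 25_000  # assumed: amortised hardware + maintenance per year

labour_cost = {  # assumed annual wages for a comparable low-skill role
    "US": 35_000,
    "Germany": 42_000,
    "Mexico": 12_000,
    "Vietnam": 6_000,
}

# Regions where automation already beats labour on cost:
adoption_candidates = sorted(
    region for region, wage in labour_cost.items() if wage > robot_annual_cost
)
print(adoption_candidates)  # ['Germany', 'US']
```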
If you are a venture capital investor: If you are a big-picture (top-down thematic) investor, I would focus on three things:
First, research organizations trying to tackle technological bottlenecks (meaning making advances in perception/manipulation, social intelligence, etc.). But I suspect investing at this stage is a very high-risk, low-reward strategy because development would involve lots of R&D. A better strategy would be to pick teams within academia that are on the brink of discoveries and help them commercialize their research.
Second, I would try to map jobs/industries using this paper’s appendix and then search, via Crunchbase or AngelList, for companies that are trying to innovate in these domains.
Third, I would use this paper for risk management. That is: are you investing in companies that are likely to use insights from people innovating in areas of perception/manipulation, social intelligence, AI, etc.? If the answer is no, revisit your portfolio and start talking to your CEOs.
If you are a student (high school & college): I would spend some time browsing this paper’s appendix.
Take a look at jobs that are very likely and unlikely to be computerized. Then map which majors and career paths lead to those jobs, and avoid majors leading to jobs that are likely to be computerized. But there is a bigger-picture message: you absolutely need to invest in non-computerizable skills while focusing on becoming a well-rounded generalist, communicator, and manager.
One last point:
If the ideas in this paper are viable, then I have a feeling equity investors will enjoy some EBIT margin expansion as COGS/SG&A decline due to technological innovation.
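A quick worked example of that margin mechanics, with entirely invented numbers: if automation trims COGS and SG&A while revenue holds, EBIT margin expands by the full amount of the savings.

```python
# Toy EBIT-margin expansion: revenue is held flat while automation cuts
# cost lines. All figures are hypothetical (per $100 of revenue).

revenue = 100.0
cogs, sga = 60.0, 25.0

ebit_margin_before = (revenue - cogs - sga) / revenue  # 0.15

# Suppose automation cuts COGS by 5% and SG&A by 10%:
ebit_margin_after = (revenue - cogs * 0.95 - sga * 0.90) / revenue

print(f"{ebit_margin_before:.2%} -> {ebit_margin_after:.2%}")  # 15.00% -> 20.50%
```

A 5% COGS cut and a 10% SG&A cut on this cost structure add 5.5 points of margin, which is why even modest automation savings matter so much to equity holders.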
You can access Frey & Osborne’s paper here.
This is a pretty interesting panel (moderated by Michael Milken) on the current state of credit markets.
H/T Champ for sending this our way.
I’ve started delving into the world of artificial intelligence and, of course, I am starting by reading the classics. Here is a link to Alan Turing’s very famous paper, Computing Machinery and Intelligence, which many say catapulted artificial intelligence research.
What is the Turing Test?
The Turing test is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine’s ability to render words into audio.
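The protocol described above can be sketched in a few lines. This is a toy harness showing only the structure of the imitation game: anonymous labels, a shared text-only channel, and a judge who sees answers but not identities. The "machine" here is a trivial canned responder, not a real chatbot.

```python
# Minimal sketch of the text-only Turing test setup: the judge exchanges
# messages with two hidden participants over identical channels and must
# guess which one is the machine. Both responders here are stand-ins.

import random

def human_reply(prompt: str) -> str:
    return "I'd say it depends on the context."  # stand-in for a person

def machine_reply(prompt: str) -> str:
    return "I'd say it depends on the context."  # stand-in for a chatbot

def run_round(judge_question: str) -> dict:
    # Hide identities behind anonymous labels, as in the original game;
    # shuffling means the judge cannot infer identity from position.
    participants = [("A", human_reply), ("B", machine_reply)]
    random.shuffle(participants)
    return {label: fn(judge_question) for label, fn in participants}

replies = run_round("What is your favourite season, and why?")
# The judge sees only {'A': '...', 'B': '...'} -- text alone, no voice or
# appearance, so the verdict rests purely on how human the answers read.
print(sorted(replies))  # ['A', 'B']
```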