Technological Unemployment Amidst Stagnation

Introduction: Not Just Another Cyclical Downturn

The recession that followed the 2008 financial crisis was the most severe that the developed world had experienced since the Great Depression of the 1930s. Even five years later in 2013, unemployment remains far higher than pre-crisis levels. The most popular explanations for why the Great Recession of 2008 has proven to be so long-lived focus on two themes. One explanation blames the recession on inadequate demand and attributes the persistence of unemployment to the failure of the central bank or the government to provide enough stimulus. Monetarist economists blame the Fed for not providing more monetary stimulus and Keynesian economists blame the government for not providing enough fiscal stimulus. The second explanation blames the slow recovery on the severity of the financial crisis, noting that historically most recoveries that follow financial crises tend to be slow and gradual1.

But neither of these explanations can account for the most striking characteristic of the post-2008 economic recovery in the United States – the fact that, when viewed through the lens of corporate profits, the recession is already long over. The benefits of the current economic recovery have flowed disproportionately towards corporate profits, with wages and employment lagging far behind. Since 2008 we have seen a robust recovery in corporate profits, a modest recovery in GDP and a feeble recovery in employment and wages.

And this disconnect between profits, GDP and unemployment is not a new phenomenon. Since at least 1990, each successive recovery in employment has proven to be weaker than the previous one, with profits and GDP recovering faster than employment and wages. The ratio of corporate profits to GDP has now reached an all-time high whereas the ratio of wages to GDP continues to fall. Whatever ails the US economy is not just a cyclical phenomenon but a structural problem, and worryingly, the condition of the economy is deteriorating with each business cycle.

Progressive Deterioration In Pace Of Recovery

Corporate Profits/GDP At All-Time High

Wages/GDP At All-Time Low

Some attribute the declining share of wages in GDP to the rapid nature of recent technological change and innovation, in particular to the increasingly automated nature of many economic activities and advances in labour-saving technology. This is an intuitively appealing explanation. But it neglects the fact that productivity growth across the developed world has been anaemic for the last four decades, a phenomenon that Tyler Cowen has called ‘The Great Stagnation’2. How can we suffer from stagnant productivity growth and technological unemployment at the same time?

Product Innovation And Process Innovation

To understand how technological unemployment can occur during a period of stagnant productivity growth, we need to analyse the nature of innovation more closely. Innovation in the economy can be broadly divided into two categories: product innovation i.e. creating new consumer goods and services, and process innovation i.e. creating cheaper, more efficient ways to produce existing consumer goods and services.

The dynamics of product innovation are very different from those of process innovation. Product innovation typically entails a high degree of uncertainty and trial-and-error. Process innovation on the other hand is typically more amenable to being analysed within a conventional risk-reward framework. Even when investments in process improvements are large, outcomes are less uncertain than they are in product innovation.

Most economists deny the possibility of technological unemployment on the grounds that our wants are unlimited. Therefore if automation and artificial intelligence displace workers from one part of the economy, they will find employment elsewhere in another sector. Indeed, something similar has been occurring in our economy for at least the past two hundred years. The most significant example of this reorganisation was the automation of the agricultural sector in the first half of the twentieth century which resulted in a significant shift of the workforce away from agriculture into manufacturing and services.

But this simplistic argument ignores two critical nuances of how this reorganisation has taken place in reality. First, economies reorganise back to full employment in the long run not via increased consumption of existing goods and services but via consumption of entirely new goods and services. When process innovation drives down costs and prices, consumers may initially consume more of the existing basket of goods and services. But sooner or later, diminishing marginal utility and satiation set in and consumption of existing goods and services stops increasing. At this point, new product innovation is needed to provide an entirely new basket of goods and services. Due to the uncertain nature of new product innovation, it is by no means a certainty that the economy will be able to provide the requisite amount of product innovation to maintain full employment, especially in the short run. It is precisely this combination of high process innovation and low product innovation that has made technological unemployment possible even in a period of stagnant productivity growth. In the short run, productivity growth can be sustained by process innovation. However, long run productivity growth requires bursts of product innovation along with persistent process innovation.
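
To see the arithmetic at work, here is a stylised sketch (all numbers invented): an economy producing a single satiated basket of goods, in which every gain in process efficiency translates directly into displaced labour until new products create new demand.

```python
# Stylised illustration with invented numbers: demand for the existing
# basket of goods is satiated at 1,000 units per year.
labour_per_unit = 1.0   # workers needed per unit before any process innovation
demand_cap = 1000       # satiation point: consumption stops growing here

for year, efficiency in enumerate([1.0, 1.25, 1.5, 2.0]):
    workers_needed = demand_cap * labour_per_unit / efficiency
    print(f"year {year}: workers needed for the existing basket = {workers_needed:.0f}")

# Prints 1000, 800, 667, 500. Once satiation binds, process innovation alone
# sheds workers; only product innovation (new goods generating new demand)
# can re-absorb them into employment.
```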

The second nuance, often seized upon by advocates of the technological unemployment thesis, is the question: what if there are no more economic activities that human beings can perform any better than machines and robots? To answer this question we need to take a much closer look at the nature of innovation in automation and artificial intelligence over the last two hundred years, a task that I undertake later in this essay. But first we need to analyse exactly why process innovation has been so rapid and product innovation so sluggish in recent times.

Product Innovation Vs Process Innovation: New Entrants Vs Incumbent Firms

Due to the uncertain nature of new product innovation, incumbent firms rarely excel in it unless compelled to do so by the competitive pressure exerted upon them by new entrants. Even in industries where entry of new firms is free and unrestricted, incumbent firms rarely come up with disruptive new products. Historically, new entrants to an industry have been responsible for most disruptive product innovation. On the other hand, process innovation being a lower-risk activity is typically introduced by established incumbent firms and not new entrants3.

Incumbent firms have very little incentive to invest significant resources in risky initiatives that aim to displace their existing cash-cow businesses. In many instances, they may face resistance not only from internal departments that feel threatened by the potential success of new products but also from customers who are reluctant to embrace disruptive change. In fact a great deal of the uncertainty in new product innovation arises from the fact that it is almost never driven by the customer. As the old adage goes, customers rarely know what they want until they see it, a principle exemplified by Steve Jobs’ tenure at Apple. New product innovation requires constant trial and error and most of these trials are bound to fail. Incumbent firms are, quite rationally, primarily focused on protecting their existing source of profits and minimising the risk of failure rather than undertaking speculative risks where the odds of failure are greater than the odds of success.

In the absence of new firm entry, even a competitive industry with many players will focus on process innovation and cost reduction and avoid any potentially disruptive product innovation. When incumbent firms do undertake product innovation, they do so only when their existing source of super-normal profits is threatened by disruptive products from new entrants. In an environment where product innovation is high, not undertaking new product initiatives is the riskier option. Simply protecting existing revenue streams rarely works out. Even then, incumbent firms are rarely able to respond effectively to new entrants, primarily due to organisational rigidity4. New entrants, on the other hand, face a different set of incentives. Having no existing profits to protect, they are driven far more by the lure of capturing such super-normal profits than by the much larger possibility of failure.

In other words, unless incumbent firms face the threat of failure due to the entry of new firms, product innovation is unlikely to be robust. The role of failure in fostering product innovation has sometimes been called the ‘invisible foot’5 of capitalism. Process innovation being a lower-risk endeavour is not dependent upon the threat of failure. Simply instituting a regime where firm owners and employees are incentivised to seek higher profits is sufficient to encourage process innovation. In other words, the positive incentives of Adam Smith’s invisible hand are sufficient to give us a high level of process innovation but disruptive product innovation requires the negative incentives of the fear of failure i.e. an invisible foot of dynamic competition from new entrants.

Process and Product Innovation in the Post WW2 United States

Viewed through the lens of product vs process innovation, the post-WW2 economic history of the United States can be divided into two parts6. The first half, from 1945 till the 70s, was a period when the pace of both product and process innovation was slow. As Alexander Field has shown, much of the productivity growth in the aftermath of the war came from exploiting product innovation that had already taken place during the 1930s7. The damage done to the industrial base of the rest of the developed world meant that there was very little competition for American goods from foreign manufacturers. Most large American firms were also largely insulated from strong shareholder pressure to improve profitability. This combination of low import competition, low rate of entry by new firms and weak shareholder pressure meant that there was very little process innovation or cost control. It is not a coincidence that many view the 1950s and 1960s as a golden age of economic growth and stability. It was essentially a period when neither firm owners, managers nor workers felt the threat of failure or even had the incentive to improve efficiency or control costs. It was a period of stability for all, masses and classes alike.

The second half, from the 1980s onwards, has been characterised by an equally low, maybe lower, rate of product innovation. However, process innovation has accelerated significantly. This is the period often referred to as the neoliberal era. The neoliberal revolution is often viewed as a shift towards more deregulated and free markets but this interpretation is a misleading half-truth. In reality, the neoliberal turn in the developed world was characterised by a dramatic resurgence in shareholders asserting their rights over incumbent firms along with a series of initiatives that sought to mimic the positive incentive structure of markets in domains that had hitherto not been subject to such incentive pressures. However, although the invisible hand was unshackled, the invisible foot was left in an even more crippled state than before. Deregulation and privatisation often simply replaced staid monopolies with equally conservative yet shareholder-focused oligopolies. Licensing and patent regimes became steadily more dysfunctional and prevented the entry of smaller firms. The regulatory burden likewise served mostly to protect large incumbent firms against new entrants. The earlier regime of stability for all had been transformed into a regime of stability for the classes and instability for the masses.

The Greenspan Put And The Great Stagnation

The neo-liberal economic model amplified the control of shareholders over incumbent firms but diminished the disruptive competition faced by them, thus incentivising incumbent firms to accelerate process innovation and neglect riskier product innovation. It is obvious how barriers to entry such as licensing requirements promoted this shift. But a far more important driver was the change in monetary policy that took place during the so-called ‘Great Moderation’.

The conventional wisdom views the Great Moderation as the golden era of monetary policy when the opaque discretionary policies of the past were replaced by rational rule-based policy. All recessions and unemployment that were the result of a shortfall in demand were countered with monetary stimulus and fiscal policy was deemed to have no role in macroeconomic stabilisation. Deregulated financial markets enabled the impact of monetary policy decisions to flow through to the real economy more effectively than it had in the past. At least until the 2008 financial crisis, almost all mainstream economists agreed that demand management was a task best left to a technocratic central bank to manage. Unfortunately, this account of monetary policy bears very little similarity to the actual policy conducted during the Great Moderation.

Monetary policy during the Greenspan era was based on one simple rule of thumb: “support asset prices and the rest will take care of itself”. The best example of this doctrine came early in Greenspan’s tenure when the 1987 stock market crash was countered with a massive monetary stimulus based simply on the fear of a potential real-economy recession. An even more egregious example of this doctrine was the Fed’s reaction to the failure of the hedge fund Long-Term Capital Management (LTCM) in 1998, when it cut rates to support the markets at a time when the real economy was booming. This doctrine of monetary policy is often referred to as the ‘Greenspan Put’, a name that captures the impact that this policy had on financial markets and the banking sector. Market participants could assume that any fall in asset prices would be countered with monetary stimulus, thus providing “free” protection resembling a put option. But as disastrous as the impact of the Greenspan Put was on the financial economy, it paled next to the impact the policy had on the real economy.

If you protect a system from the effects of any particular risk, actors within the system will take on more of the protected risk, rationally assuming that the system manager (in this case the Fed) will protect them. The Greenspan Put regime drove down the risk of being exposed to broad macroeconomic market risk. Market participants rationally took on more macroeconomic asset-price risk, replacing the risk that the Fed had relieved them of with more leverage. Conventional portfolio theory views the asset allocation decision as one of choosing the split between the risk-free asset and the market portfolio. But when the risk of the market portfolio is suppressed, the decision changes to choosing how much to borrow against the market portfolio.
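
This mechanism can be sketched with the textbook Merton rule for portfolio choice, under which the optimal weight in the market portfolio is w* = (μ − r)/(γσ²). The parameter values below are invented for illustration; the point is simply that because volatility enters squared, a policy perceived to halve market risk roughly quadruples the optimal exposure to the market:

```python
# Illustrative sketch using the standard Merton rule for the optimal
# allocation to the risky (market) portfolio: w* = (mu - r) / (gamma * sigma^2).
# All parameter values are invented for illustration.
def optimal_market_weight(mu, r, sigma, gamma):
    """Fraction of wealth in the market portfolio; a value above 1 means leverage."""
    return (mu - r) / (gamma * sigma ** 2)

mu, r, gamma = 0.07, 0.02, 3.0  # expected return, risk-free rate, risk aversion

w_unprotected = optimal_market_weight(mu, r, sigma=0.18, gamma=gamma)
w_with_put    = optimal_market_weight(mu, r, sigma=0.09, gamma=gamma)  # assume the put halves perceived volatility

print(f"optimal exposure without the put: {w_unprotected:.2f}x of wealth")  # ~0.51x
print(f"optimal exposure with the put:    {w_with_put:.2f}x of wealth")     # ~2.06x, i.e. borrowing against the market portfolio
```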

And this is exactly what the financial sector proceeded to do. Far from being a neutral channel of monetary policy from the Fed to the real economy, the deregulated yet too-big-to-fail financial sector, itself protected from new entrants, realigned itself to take on macroeconomic risk by lending to housing and large established firms. The attractiveness of this strategy meant that banks shunned lending exposed to non-macroeconomic idiosyncratic risks, such as lending to small businesses or new firms. The Greenspan Put doctrine thus triggered a realignment away from the idiosyncratic risk-taking that lies at the heart of disruptive new product innovation. But there’s more to it than just the financial market effect. The doctrine also encouraged firms in the real economy to become as bank-like as possible. No firm took advantage of the new regime like General Electric (GE) did. GE under Jack Welch transformed itself into an industrial firm whose profits came largely from its financial arm, GE Capital, which lent to its industrial customers (amongst others). So successful was this transformation that by the time the 2008 crisis hit, GE had also become too-big-to-fail thanks to GE Capital and was found to be eligible for a bailout.

GE also exemplified how the new regime of amplified shareholder control over firms and the Greenspan Put not only discouraged product innovation but encouraged process innovation. Faced with the constant pressure of meeting quarterly earnings targets from shareholders (most of whom were themselves holding diversified market portfolios), the only innovation for which there was any appetite was low-risk process innovation that could cut costs and enable higher leverage. Once the flab had been eliminated from most large firms by the initial burst of private equity buyouts, leveraged buyouts and hostile takeovers (or the threat of hostile shareholder action against management), process innovation was the logical next step in reducing costs further.

Although the monetary policy of the Great Moderation caused stagnant innovation and stagnant wages, it also provided the temporary palliative medicine that maintained full employment and economic growth by fuelling an increase in household leverage. Despite the absence of real wage growth, households were able to increase consumption by borrowing from banks eager to lend against macroeconomic risk-bearing collateral i.e. housing.

Automation, Artificial Intelligence and Process Innovation

Almost all the important technologies of automation and artificial intelligence are technologies of process innovation that help us produce the same basket of consumer goods and services in a cheaper, more efficient manner. Although the recent wave of labour-displacing technology such as manufacturing robotics and machine learning may seem new, they are in fact simply the latest in a long chain of such technologies going back to the machines of the early nineteenth century that attracted the ire of the original Luddites. Despite the popular perception, replacing labour with a machine rarely involves constructing a machine or an artificial intelligence that can do exactly what the human worker can do. Instead, automation involves redesigning the domain itself such that the work can now be done by a machine, or increasingly now, an algorithm. An excellent example of this was seen during the industrialisation and mechanisation of agriculture where many fruits and vegetables were modified such that they could be harvested by a machine without causing damage, as opposed to constructing a machine that could harvest the existing fruits and vegetables in as careful a manner as humans would. In other words, automation and labour displacement do not require that the machine or robot be able to do exactly what the human being does, or that the machine be able to do the task in the same manner as the human worker. Indeed historically this has rarely been the case. There is very little similarity between the brute-force method by which a computer plays chess and the intuitive manner in which a human grandmaster plays chess. But because the uncertainty within the domain of a game of chess is bounded, the computer is able to match or beat the human. As the uncertainty in the domain increases, expert humans tend to outperform the brute-force automated approach. This is true even for games more complex than chess, such as Go, let alone more complex and ambiguous real-world domains.
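
The brute-force half of this contrast is easy to make concrete. A minimal sketch of the kind of game-tree search a chess engine performs (negamax with alpha-beta pruning, written against a hypothetical Game interface rather than any real chess library) looks like this:

```python
# Minimal sketch of brute-force game-tree search: negamax with alpha-beta
# pruning. `game` is a hypothetical interface, not a real library: it must
# provide legal_moves(state), apply(state, move), is_terminal(state) and a
# static evaluation score(state) from the perspective of the side to move.
def negamax(game, state, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    best = float("-inf")
    for move in game.legal_moves(state):
        # A move's value to us is the negation of its value to the opponent.
        value = -negamax(game, game.apply(state, move), depth - 1, -beta, -alpha)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:  # the opponent would never permit this line: prune
            break
    return best
```

Nothing in this procedure resembles intuition: it succeeds by exhaustively enumerating a bounded space of possibilities, which is precisely why its effectiveness degrades as the uncertainty of the domain grows.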

In order to deal with the residual uncertainty created by the routinisation of the domain, human beings are often left the task of monitoring and managing these automated systems. But these are not the only jobs that humans have performed in our increasingly automated economy over the last one hundred years. Many routine jobs that have provided avenues of mass employment during the twentieth century have typically been jobs requiring the use of human sensory and motor skills, skills that have proven hardest to automate. This phenomenon is known as ‘Moravec’s Paradox’8, named after the artificial intelligence researcher Hans Moravec who observed that those skills we typically identify with intelligence (e.g. rational decision making) tend to be the skills that are easiest to replicate via an artificial intelligence (a combination of data and algorithms). But those skills that even a baby possesses, such as the ability to move around complex environments and pick up a variety of objects, tend to be the hardest to replicate in a robot. In a way, some of what separates us from the machines is what unites us with the animals. So despite the increasing automation of manufacturing and services, humans retained jobs that required routine sensory and motor skills. Even the most automated supply chain required truck drivers and restaurants still required waiting staff. Even within the routinised domain, humans and machines were complements, not substitutes.

But this era where Moravec’s Paradox shielded many routine jobs from being automated away is rapidly coming to an end. A critical element of the recent success on this front has again been the acceptance that matching or beating human performance levels does not require an artificial replication of the human method. The robotic vacuum cleaner does not operate like a human vacuum cleaner. Nor does the self-driving car drive like a human driver. But in many cases, a simple algorithm combined with large amounts of sensory data is enough to match or beat the average human’s performance.
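
A toy simulation illustrates the point. Early robotic vacuum cleaners relied on little more than a bump sensor and a random ‘bounce’ policy (no map, no plan), crudely approximated below as a random walk confined to an invented room. Given enough time, such a policy covers most of the floor:

```python
import random

# Toy illustration: a 'vacuum' with no map or plan, crudely approximating
# the random bounce policy of early robotic vacuum cleaners on an invented
# 20x20 grid room.
random.seed(1)
W, H, STEPS = 20, 20, 4000
x, y = 0, 0
visited = {(x, y)}
for _ in range(STEPS):
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = x + dx, y + dy
    if 0 <= nx < W and 0 <= ny < H:  # the 'bump sensor': stay inside the room
        x, y = nx, ny
        visited.add((x, y))

coverage = 100 * len(visited) / (W * H)
print(f"floor coverage after {STEPS} random steps: {coverage:.0f}%")
```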

The implications of this are simple yet frightening. The routine jobs that provided avenues of mass employment in the twentieth century are increasingly a thing of the past. It is very easy to envisage a future where the vast majority of work in large, established parts of the economy is almost fully automated with human beings simply performing the role of monitoring and managing the system during extraordinary circumstances.

The Future Of Human Employment: The Near-Automated Economy

If robots and machines can perform every economic activity as well as humans can, then the very aim of full employment seems obsolete. The problem then would appear to be one of distribution, of how the returns from technological capital are distributed within the economy.

As of now, however, we are nowhere near this point. The activities that have proved amenable to being performed by machines are those that can be performed adequately by some combination of an algorithmic process and data. This obviously leaves out artistic endeavours or economic activities that require the generation of novelty (such as creating new products). But it also leaves out any human activity that requires artisanal expertise. At least as of now, human creativity still seems to be an essential component of many economic activities. Many of you will object that I am placing too much faith in the inherent superiority of human creativity. And who is to say that the same results achieved by intuitive human expertise in “creative” activities won’t be matched by a machine sooner or later?

This objection, arguing for the inevitable perfection of artificial intelligence technology, ignores the fact that the imperfect human contribution itself has a positive economic value in many cases. Even if a machine creates works of art that are objectively equivalent to those of Jackson Pollock, does anyone doubt that Pollock’s paintings will still command a significant premium? The same could be argued for much of the organic, local food industry whose appeal arises in part simply from the fact that the food is not produced by machines in a distant corner of the world.

This does not imply that the entire “human economy” will reorganise itself to resemble a pre-Industrial Revolution artisanal craft economy. The collapse in the cost of goods and services produced by the automated economy will mean that purely artisanal products will remain a luxury good. What will provide the majority of human employment will be the ‘near-automated’ economy where a small, yet critical, proportion of creative human endeavour is combined with a largely automated process. And it is this near-automated economy that has received the greatest fillip from the last ten years of the algorithmic revolution and the collapse in the economies of scale and scope that it has brought about (in stark contrast to automation through the twentieth century which led to increased economies of scale and scope).

In many economic and artistic fields, near-automation has been a reality for a while now. Even the smallest writers, artists and musicians can now create niche products and produce and distribute them efficiently across the world. They are able to do this precisely because although the design and initial creation remains a human activity, the replication and distribution are automated. 3-D printing and robots that can operate in smaller spaces are just a couple of the many technologies that enable a small-scale manufacturer to compete with much larger firms. The largely algorithmic nature of the production process means that a small manufacturer can simply create the design code for a product and hire a manufacturer in China to produce even small batches of this product without suffering a significant cost disadvantage vis-a-vis large firms.

Even if there remain many activities that human beings can and will continue to perform, there is no denying that large parts of the economy will be almost fully automated. It is imperative that the economic returns from these sectors are not unfairly concentrated in the hands of a few. Otherwise, human job options in the economy of the future will be largely restricted to being servants or court jesters to the rich. It is easy to advocate simple measures of taxation and redistribution to redress the inequality of the current system. But redistribution will do nothing to restore the innovative dynamism of the economy. And moreover, we can do a lot better. The same measures that will help us transition to an innovative, resilient near-automated economy will also give us a fairer society with more equal opportunities than our current neo-feudal economic system.

Transitioning To The Near-Automated Economy

Principles

Earlier I described the 1950s and 1960s as a period of “stability for all” and the neoliberal era as a period of “stability for the classes and instability for the masses”. The essence of my policy proposals is simple: all economic actors must be subject to the disruptive, disorderly forces of competition i.e. disorder for all, masses and classes alike. Many of us would prefer that we somehow turn back the clock and recreate the imagined stable utopia of the 50s and 60s. Even if this were feasible, constructing an economy where firms and all their stakeholders are provided with perfect stability is a recipe for stasis and stagnation. It is a solution that, at best, enables us to share a static economic pie in a more equitable manner. Moreover, even this outcome of a static pie is not certain. Dynamic competitive tension and the threat of failure due to disruptive innovation at the level of the firm is not just important to expand the size of the economic pie. It also helps maintain the resilience of the system against unexpected shocks by either enabling the system to maintain critical functionality or to rapidly reorganise to an effective state after systemic collapse.

Disorder also lies at the heart of distributive justice. The collective bargaining power of labour and the share of the economic output that flows to labour increases when the firms that employ them are individually fragile and subject to the constant threat of failure. A competitive free enterprise economy does not equate to the theoretical ideal of perfect competition where no firm earns super-normal profits (also known as rents). It is the lure of super-normal profits that drives the entry of new firms into an industry. No venture capitalist has ever funded a startup that tried to make a market rent-free. A successful new entrant does not extinguish rents, it captures them. What matters is that the incumbent firm earning super-normal profits is subject to the constant threat of losing these profits. The fact that an incumbent firm makes significant super-normal profits does not imply that entrepreneurs and capital as a class do the same. For every successful firm, there are many who fail.

Proposals

Monetary/Fiscal Policy

Apart from the damage that asset-price-obsessed monetary policy does to the economy’s ability to innovate, doctrines like the Greenspan Put also act as a transfer of wealth from society as a whole to some of its richest members. Contrary to the popular perception of monetary policy as a neutral macroeconomic policy with a minimal impact on income distribution, real-world monetary policy that focuses on propping up asset prices overwhelmingly favours the rich for one obvious reason that even central banks have now begun to admit9. Most assets are owned by the rich (see table below10). The asset-less poor rarely have a direct stake in the performance of the S&P 500. The idea that supporting asset prices is the best way to support the wider economy is essentially a form of trickle-down economics (or as Will Rogers put it: “money was all appropriated for the top in hopes that it would trickle down to the needy.”).

Household Wealth Distribution

Instead of macroeconomic policy being directed at asset markets, interventions during economic crises should be carried out through measures that provide an equal benefit to all. The best example of such a policy is a ‘helicopter drop’11 where the government simply prints and sends a sum of money to each individual.
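
A toy calculation shows the distributional contrast between the two channels. The five-household wealth figures below are invented (loosely mimicking the skew in the Wolff data cited above); the same total stimulus is routed once through asset prices and once as an equal per-head drop:

```python
# Toy comparison of two stimulus channels on an invented five-household
# economy. Wealth figures are made up to mimic a heavily skewed distribution.
wealth = [500.0, 80.0, 15.0, 4.0, 1.0]  # one rich household, four others
stimulus = 100.0                         # total size of the intervention

# Channel 1: asset-price support. Gains accrue in proportion to assets held,
# so the distribution of wealth shares is left exactly as it was.
total = sum(wealth)
via_assets = [w + stimulus * w / total for w in wealth]

# Channel 2: helicopter drop. Every household receives an equal amount.
via_drop = [w + stimulus / len(wealth) for w in wealth]

def top_share(ws):
    return max(ws) / sum(ws)

print(f"top household's share, asset-price channel: {top_share(via_assets):.1%}")  # ~83%, unchanged
print(f"top household's share, helicopter drop:     {top_share(via_drop):.1%}")    # ~74%
```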

Safety Net

Protection against economic risk must focus on providing a safety net for the people and not a hammock for the firms that employ them. No firm should be too big or too important to fail.

Dismantle Entry Barriers

Entry barriers such as licensing, patent regimes and onerous regulatory regimes need to be comprehensively and systematically dismantled. Most of the damage done by entry barriers is not felt by venture-capital funded startups. It is felt by individuals and small businesses in much more mundane parts of the economy – hairdressers, small food businesses etc. In many sectors of the economy, an employee who is made redundant has no option but to seek employment in another one of the large incumbent firms. Even if he wants to, starting his own business is often not an option. This needs to change, especially at a time when collapsing economies of scale and scope make entrepreneurship a viable option in more and more sectors of the economy.

No Safe Assets

A popular explanation for the 2008 financial crisis is that assets earlier deemed safe (such as triple-A rated mortgage backed securities) turned out to be risky assets in reality. Proposed solutions to this problem tend to recommend the creation of truly safe assets by the government to meet this demand for safety. This idea must be rejected. Instead, all assets must be made unsafe. There is no reason why the government should provide any economic actor with the means to preserve purchasing power without taking on meaningful economic risk. Manufacturing safe assets would simply allow those who captured the benefits of the Greenspan Put era to preserve their gains at the expense of the rest of the economy.

Education For The Near-Automated Economy

Since the Industrial Revolution, the economy has become increasingly automated, routinised and algorithmic in nature. But the often incomplete nature of this routinisation combined with Moravec’s Paradox (the difficulty of replicating human sensory and motor skills in a machine) has meant that human labour was typically a complement of the automated system in performing what were still largely routine activities. Deskilling though this process of automation often was (see graph below12), it nevertheless provided avenues of mass employment at reasonable compensation, albeit in routine work. Understandably, 20th century education evolved to meet the demand for workers and employees who could perform in a systematic and algorithmic manner alongside the automated system. Creativity, innovation and deep domain expertise were only required of a few employees in a firm. For the rest, it was far more important to be reliable and efficient. Even many so-called skilled employees in managerial roles were required to align themselves to the bureaucratic functioning of the organisation more than they were required to display any creative insight.

Automation and Deskilling of the Human Operator

As explained earlier, the era of mass employment in routine work is rapidly coming to an end as Moravec’s Paradox is overcome and more systems achieve near-perfect automation. However, our educational system still remains wedded to the routinised industrial model. In the near-automated economy humans are complements to the machine not within the automated domain but outside it. Their role does not require them to contribute the algorithmic knowledge and actions that the machine can provide. The human worker needs to provide the deep expertise, creative insight and innovation that the machine cannot provide.

It is beyond the scope of this essay to offer a detailed analysis of just how an educational system can prepare its students to meet this challenge. Instead I will just offer a couple of observations that are often ignored in the otherwise comprehensive literature on this topic. First, the process of learning for expertise is itself a disorderly, non-linear path where failure upon facing problems that lie beyond one’s competence is often more effective in stimulating learning than the steady imbibing of facts and techniques followed by the tackling of well-defined, familiar problems13.

Second, performance in many near-automated activities requires us to integrate artificial and human intelligence while still enabling the human to achieve deep, intuitive domain expertise. The most notable successes in this endeavour have come in activities where the fundamental essence of the domain as experienced by the human has remained amenable to intuitive expertise. For example, combined teams of a computer and a human often perform much better at chess than humans or computers alone can14. The fundamental reason why a human Grandmaster remains a valuable component in such a team is that the domain remains relatively intuitive. The chess board remains exactly the same as it has always been.

Unfortunately, this is not the case in many automated domains where, as I earlier described, the process of automation has modified the domain itself in often dramatic fashion to make the domain more amenable to algorithmic control. For example, airline pilots now fly complex computerised machines that provide little intuitive feedback. Financial market traders who used simple models and traded on physical trading floors now monitor the performance of complex black-box models and high-frequency trading algorithms. Therefore, even in domains where the algorithmic complexity of the domain is now unavoidable, students may be better off learning within less-automated domains where deep intuitive expertise may be more effectively achieved.

Conclusion

Innovation and the generation of novelty is a fundamentally disorderly process in all complex adaptive systems – biological, ecological or economic. When we manage a system to achieve perfect order and stability, we end up with stasis and fragility. But this does not imply that we need to embrace chaos and constant failure. To achieve a resilient macroeconomic system, we only need to embrace a little bit of disorder – in our firms, our financial markets and in ourselves.


  1. ‘The Aftermath of Financial Crises’ by Carmen Reinhart and Kenneth Rogoff (2008). ↩

  2. ‘The Great Stagnation’ by Tyler Cowen (2011).  ↩

  3. On product vs process innovation and how process innovation typically comes from incumbents and product innovation from new entrants, see ‘Mastering the Dynamics of Innovation’ by James Utterback (1996). ↩

  4. On rigidity see my earlier post ‘Organisational Rigidity, Crony Capitalism, Too-Big-To-Fail and Macro-Resilience’ ↩

  5. The concept of the “Invisible Foot” was introduced by Joseph Berliner as a counterpoint to Adam Smith’s “Invisible Hand” to explain why innovation was so hard in the centrally planned Soviet economy:
    “Adam Smith taught us to think of competition as an “invisible hand” that guides production into the socially desirable channels….But if Adam Smith had taken as his point of departure not the coordinating mechanism but the innovation mechanism of capitalism, he may well have designated competition not as an invisible hand but as an invisible foot. For the effect of competition is not only to motivate profit-seeking entrepreneurs to seek yet more profit but to jolt conservative enterprises into the adoption of new technology and the search for improved processes and products. From the point of view of the static efficiency of resource allocation, the evil of monopoly is that it prevents resources from flowing into those lines of production in which their social value would be greatest. But from the point of view of innovation, the evil of monopoly is that it enables producers to enjoy high rates of profit without having to undertake the exacting and risky activities associated with technological change. A world of monopolies, socialist or capitalist, would be a world with very little technological change.”
    For disruptive innovation to persist, the invisible foot needs to be “applied vigorously to the backsides of enterprises that would otherwise have been quite content to go on producing the same products in the same ways, and at a reasonable profit, if they could only be protected from the intrusion of competition.” ↩

  6. Most of this essay deals with the US economy simply because it is the ‘frontier economy’ of the second half of the twentieth century. An analysis of other countries would be clouded by the period when they were recovering from World War 2. Moreover, although stagnation is a reality for the entire developed world right now, unemployment via process innovation is not yet a reality in more ‘inefficient’ crony capitalist countries such as Japan.  ↩

  7. From ‘A Great Leap Forward: 1930s Depression and U.S. Economic Growth’ by Alexander Field (2011): “Through marketing and planned obsolescence, the disruptive force of technological change – what Joseph Schumpeter called creative destruction – had largely been domesticated, at least for a time. Whereas large corporations had funded research leading to a large number of important innovations during the 1930s, many critics now argued that these behemoths had become obstacles to transformative innovation, too concerned about the prospect of devaluing rent-yielding income streams from existing technologies. Disruptions to the rank order of the largest U.S. industrial corporations during this quarter century were remarkably few. And the overall rate of TFP growth within manufacturing fell by more than a percentage point compared with the 1930s and more than 3.5 percentage points compared with the 1920s.” ↩

  8. Quoting Hans Moravec: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” ↩

  9. As the Bank of England notes, quantitative easing for example has “increased the net wealth of asset holders” and “holdings are heavily skewed with the top 5% of households holding 40% of these assets”.  ↩

  10. Table taken from ‘Recent Trends in Household Wealth in the United States’ by Edward Wolff (2010).  ↩

  11. On monetary policy via helicopter drops, see ‘Monetary policy for the 21st century’ by Steve Waldman. ↩

  12. The graph on deskilling and automation is drawn from the discussion of ‘Automation and Management’ by James Bright (1958) in ‘Labor and Monopoly Capital’ by Harry Braverman (1974). ↩

  13. See for example the research on ‘productive failure’ by Manu Kapur summarised in this article by Annie Murphy Paul. ↩

  14. See this post by Tyler Cowen for an insightful discussion on the challenges of integrating human and artificial intelligence and the much higher bar of performance and creativity that the human has to achieve as the automated component’s performance improves.  ↩
