The Churn Myth: Why Your Fear of Robots is 150 Years Late

Date: 2026-03-01
Author: John Brennan
Source: https://johnbrennan.xyz/essay/the-churn-myth

Occupational churn is at historic lows, not historic highs. The real disruption is delayed productivity — and the organizational redesign required to unlock it, as shown by 150 years of GPT adoption.

---

Why job disruption is overstated, productivity is understated, and GPTs only pay off after the factory regime changes

TL;DR

The claim that the AI era is producing unprecedented job disruption does not match the measured labor-market record, which is better explained by historically low occupational churn plus a familiar general-purpose-technology (GPT) adoption lag. Occupational churn is defined as the sum of the absolute values of jobs added in growing occupations and jobs lost in declining occupations, making it a reallocation metric rather than an anecdote index. This essay summarizes extensive academic studies of GPT diffusion in the US and UK labor markets during the first and second industrial revolutions. In the long-run U.S. series from 1850 to 2015, occupational churn peaks above 50% in 1850–1870 and falls to roughly 10% in the most recent 15 years of the series. Over the last 20 years, churn runs at 38% of 1950–2000 levels and 42% of 1850–2000 levels. Even the job-loss component is comparatively mild, with recent absolute losses at about 70% of those in the first half of the 20th century and a bit more than half of those in the 1960s, 1970s, and 1990s. The implied risk signal is therefore not “too much churn,” but too little churn and too little productivity growth. The electrification precedent illustrates why productivity gains arrive late: after a roughly three-decade productivity pause, U.S. manufacturing total factor productivity (TFP) rose at more than 5% per year in 1919–1929 alongside a new factory regime based upon the electric dynamo.
That surge was broad rather than niche, with 13 of 14 major manufacturing categories accelerating in multifactor productivity growth, consistent with GPT diffusion requiring rebuilt workflows, management systems, and complementary infrastructure. Power transitions also show why institutional resilience matters. Steam diffused in earnest in U.S. manufacturing from 1850 to 1880, and after 1880 electrical power increasingly became an alternative; steam made establishments more “footloose” than water power had allowed. In England, 2,207 steam engines had been built and installed by 1800, and a benchmark increase to 0.44 engines per 1,000 persons implies a 13 percentage-point decline in the unskilled share (mean 42%), alongside fewer primary schools (-64%, mean 0.47 per 1,000) and higher gender inequality in literacy (+11 percentage points, mean 18%). In the long view, the Great Divergence in wages was largely produced in 1500–1750, and broad living-standard gains arrived much later, in 1870–1913. Leaders should therefore not plan around a “jobless future” but around converting AI into a new factory regime through organizational redesign, complementary investment, and scalable institutional resilience.

Key Takeaways

- U.S. occupational churn peaked above 50% in 1850–1870 and has fallen to roughly 10% in the most recent 15 years of the historical series — the opposite of the “unprecedented disruption” narrative.
- General-purpose technologies (GPTs) like the electric dynamo required roughly three decades of organizational redesign before producing a broad productivity surge (more than 5% per year in 1919–1929 across 13 of 14 manufacturing categories).
- Net job creation has historically depended on second-order purchasing-power effects — productivity gains expanding demand — rather than on technology directly creating new occupations to replace lost ones.
- Institutional resilience — the capacity to reorganize production, rebuild complementary systems, and upgrade workforce skills — determines which organizations and economies capture GPT payoffs.
- Durable competitive advantage is often built during periods that look like mere maintenance; the “Great Divergence” in European wages was produced in 1500–1750, well before the visible industrial breakthroughs.

Definitions

- Occupational churn: The sum of the absolute values of jobs added in growing occupations and jobs lost in declining occupations, used as a reallocation metric to measure how rapidly an economy reshuffles its employment across occupations.
- General-purpose technology (GPT): A technology that is adaptable across a wide range of production processes and capable of reorganizing the structure of production itself — examples include steam power, the electric dynamo, and modern AI systems.
- Retooling lag: The delay between the adoption of a transformative technology and the appearance of its aggregate productivity effects, caused by the time required to redesign workflows, management systems, and complementary infrastructure.
- Factory regime: The integrated system of power delivery, plant layout, workflow coordination, management practices, and labor organization that defines how production is structured — a concept used to explain why swapping a tool does not immediately change productivity.
- Total factor productivity (TFP): A measure of the efficiency with which all inputs (labor, capital, materials) are converted into output, capturing gains attributable to organizational innovation, technological change, and improved practices beyond simple input increases.

The Future That Already Happened

It has become “an article of faith” that knowledge workers in advanced industrial nations face almost unprecedented disruption, and that the pace of technological change is now producing an unusually violent cycle of labor-market instability.
Long memos from Jack Dorsey contribute to the zeitgeist. The contemporary narrative is often framed in familiar terms: Schumpeterian “creative destruction,” the accelerating substitution of machines for human labor, and an underlying sense that the economy is entering uncharted territory. If this framing is correct, then one should be able to observe its imprint in the labor market’s basic structure. Occupations should rise and fall at a faster pace than in prior eras. The composition of employment should churn at historically high levels. The “future,” in other words, should be measurable as a departure from historical norms. This essay synthesizes some of the principal studies of labor economics and economic history focused on the first and second industrial revolutions. See Appendix A (Sources) for a review of these studies and Appendix B for how they were used in each section.

Atkinson and Wu’s contribution is to test that belief against a long-run empirical record rather than against intuition. Their analysis is not simply a polemic against pessimism; it is an attempt to measure labor-market disruption across a century and a half using a consistent accounting framework. They define occupational churn as “the sum of the absolute values of jobs added in growing occupations and jobs lost in declining occupations.” That definition matters. It moves the discussion away from anecdote—high-profile layoffs, vivid stories of displacement, public anxiety—and toward the measurable reallocation of employment across the occupational structure. It is a metric designed to capture whether the economy is genuinely reshuffling its work at an exceptional rate. The result, stated plainly, is the opposite of what the prevailing story predicts. Atkinson and Wu conclude that “Levels of occupational churn in the United States are now at historic lows.” This is not a claim that technology has stopped changing work, nor that dislocation does not exist.
It is a claim about magnitude and historical context: when measured in the way they define it, the reallocation of employment across occupations is not accelerating toward unprecedented heights. It is declining toward historical minima. The contrast becomes sharper when the long-run peak is placed beside the modern baseline. Occupational churn “peaked at over 50 percent” in the two decades from “1850 to 1870,” and fell to “around just 10 percent” in the last fifteen years of their series (ending in 2015). A reader can disagree with the interpretation of that pattern, but the pattern itself forces a recalibration of what “disruption” means in historical terms. The mid-nineteenth century was a period in which the occupational structure of the economy reconfigured at a pace that dwarfs modern churn. If one insists that today’s environment represents unprecedented occupational instability, that insistence must be reconciled with a measured past in which churn exceeded half of total employment. Atkinson and Wu provide a second framing that reinforces the point. They report that churn in the last twenty years has been “38 percent” of the levels from “1950 to 2000,” and “42 percent” of the levels from “1850 to 2000.” This is not merely a comparison of one decade to another; it is a statement about the recent era relative to both the postwar half-century and the longer-run historical average. The implication is not that the economy is static. It is that the economy’s occupational composition has been comparatively stable during a period that is widely narrated as uniquely turbulent. One might object that churn measures can hide the lived experience of loss by averaging over the whole economy. But Atkinson and Wu address this indirectly by examining job losses in declining occupations. Here too, the story is not one of historically extreme dislocation. 
They write that in the last fifteen years, the absolute job losses in declining occupations were “just 70 percent as many losses as in the first half of the 20th century,” and “a bit more than half as many as in the 1960s, 1970s, and 1990s.” This is a sobering finding for any argument that treats modern technology as producing a uniquely severe wave of occupational elimination. The paper’s comparative claim is not that job loss disappears, but that the magnitude of occupational decline—measured in this particular way—has been relatively tranquil. Why does this matter for business leaders, rather than only for policy analysts? Because strategic decisions are often made in an atmosphere shaped by the “article of faith” Atkinson and Wu describe. If executives assume the labor market is entering unprecedented churn, they may treat technology primarily as a force of destabilization to be managed defensively: accelerating automation to stay ahead of disruption, or restraining innovation to avoid backlash, or designing workforce strategy around a presumed collapse in job stability. But if churn is historically low, those instincts may be miscalibrated. The leadership problem changes when the underlying environment is stable rather than chaotic. Atkinson and Wu’s broader interpretation makes that recalibration explicit. In their view, the dominant economic challenge is not excessive churn but insufficient productivity growth. They write that “the single biggest economic challenge facing advanced economies today is not too much labor market churn, but too little, and thus too little productivity growth.” This claim is not merely a rhetorical inversion. It implies that a labor market characterized by low occupational reallocation can coincide with stagnating productivity, weak diffusion of innovation, and slower improvements in living standards. 
The familiar public fear is that innovation will move too fast; their counterargument is that the economy’s more serious risk is innovation moving too slowly to raise productivity meaningfully. Seen from this perspective, the “future that already happened” is not a science-fiction dystopia in which technology produces mass occupational collapse. It is a nineteenth-century reality in which occupational churn actually did exceed 50 percent over two decades and yet the economy continued to evolve into new forms of production. The relevance of that comparison is not to romanticize earlier turbulence, but to reset the baseline: modern anxiety about unprecedented churn is often ahistorical. This does not mean that no one is displaced, or that technology has benign distributional effects, or that firms do not face intense competitive pressure. It means that the specific claim that today’s labor market is undergoing historically unprecedented occupational churn is not supported by the evidence Atkinson and Wu assemble. Their numbers—over 50 percent churn in 1850–1870, roughly 10 percent in the modern era, and recent churn at 38 percent of the postwar benchmark—point in the opposite direction. The first discipline for business leaders, therefore, is to begin from measurement rather than mood. If the labor market is not being reshuffled at historic highs, then the more consequential managerial questions shift: not “how do we survive unprecedented churn,” but “why, despite transformative technologies, are we not seeing faster productivity growth,” and “what kinds of organizational redesign turn technological potential into measurable output.” Those questions lead directly to the second section’s puzzle—the ghost of the electric dynamo—and to a deeper framework for understanding why transformative technologies can feel omnipresent while their aggregate productivity effects arrive late. 
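Atkinson and Wu's churn metric is concrete enough to state in code. The sketch below is a minimal illustration of the definition quoted above; the occupation names and employment figures are invented for the example, and normalizing by start-of-period employment is one reasonable convention rather than a claim about the authors' exact procedure.

```python
# Occupational churn, per the definition used in this essay: the sum of the
# absolute values of jobs added in growing occupations and jobs lost in
# declining occupations, expressed here as a share of total employment.
# All figures below are hypothetical, for illustration only.

def occupational_churn(start, end):
    """Churn between two {occupation: employment} snapshots,
    as a share of employment at the start of the period."""
    occupations = set(start) | set(end)
    reallocated = sum(abs(end.get(o, 0) - start.get(o, 0)) for o in occupations)
    return reallocated / sum(start.values())

# A toy economy of 1,000 workers across four occupations over two decades.
emp_1850 = {"farm": 600, "craft": 250, "clerical": 100, "rail": 50}
emp_1870 = {"farm": 400, "craft": 300, "clerical": 180, "rail": 120}

print(f"{occupational_churn(emp_1850, emp_1870):.0%}")  # → 40%
```

Note that the metric counts gross movement in both directions: an occupation's losses and another's gains both add to churn, which is why a stable occupational structure scores low even if total employment is growing.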
The Puzzle: The Ghost of the Electric Dynamo

If the first section of this essay unsettles the claim that we are living through unprecedented labor-market upheaval, the second section confronts a different—and for business leaders often more frustrating—puzzle: why transformative technologies so frequently fail to show up quickly in measured productivity. The recurring executive experience is familiar. Capital is deployed. Tools are adopted. Pilot projects demonstrate impressive local gains. Yet at the level where strategy is judged—firm-wide performance, sector-wide productivity, economy-wide output per hour—the “revolution” looks delayed, diluted, or absent. Paul David and Gavin Wright’s historical reflections on electrification provide a disciplined way to interpret that delay. They begin from an empirical discontinuity that modern observers often overlook: after a “productivity pause” of some three decades, U.S. manufacturing total factor productivity (TFP) expanded at more than 5% per annum between 1919 and 1929. The significance of this claim is not merely that the 1920s were productive. It is that the surge followed a prolonged period in which the foundational technology—electricity—already existed and was already being installed, yet aggregate productivity growth did not immediately reflect its potential. The puzzle, then, is not whether the tool was powerful. It is why the payoff arrived late. David and Wright’s answer is that the 1920s surge “reflected the elaboration and adoption of a new factory regime based upon the electric dynamo.” The phrase “factory regime” is doing the real explanatory work here. Electrification is treated as a general-purpose technology not because it is an incremental improvement in motive power, but because it is adaptable across a wide range of production processes and is capable of reorganizing the structure of the factory itself.
In their framing, the electric dynamo—understood as the enabling technological system for industrial electrification—brought fixed-capital savings and raised labor productivity in a wide array of operations. But those gains were not automatically realized at the moment an electric motor replaced a steam engine. They depended on the reconfiguration of production around the new power system. This is the central point business leaders need to hear—and the one most technology narratives flatten. David and Wright argue that “a purely technological explanation… is inadequate.” The productivity surge cannot be explained solely by the existence of the dynamo or by the diffusion of electric motors. Their account explicitly emphasizes “the interrelationships… between managerial and organizational innovations and the new dynamo-based factory technology,” as well as concurrence with “important structural changes in US labor markets,” and the broader macroeconomic conditions of the 1920s. In other words, the economy did not become more productive merely by acquiring a better machine. It became more productive by changing how it worked: reorganizing workflows, redesigning plant layouts, rethinking coordination between labor and capital, and developing managerial practices that could exploit distributed power. A common misconception is that a productivity surge must be driven by a small number of “breakthrough industries” that capture the entire gain. David and Wright test this intuition directly by examining how broadly productivity acceleration was distributed across manufacturing categories. They find the opposite of a narrow, localized effect: in 1919–1929, 13 of 14 major manufacturing categories experienced an acceleration in multifactor productivity growth. 
Their description of this pattern is instructive: the 1920s are characterized by “yeast-like” growth rather than “mushroom” growth—broad-based acceleration consistent with a general-purpose technology reshaping multiple industries. For the leader trying to assess whether a new technology is “real,” this matters. A GPT does not remain confined to a single niche; it diffuses through complementary changes that eventually show up as widely distributed productivity improvements. Yet diffusion is precisely what introduces time. The transition from one power system to another is not merely a swap of inputs; it often triggers re-siting, re-scaling, and re-architecting production. The nineteenth-century shift from water to steam provides an earlier illustration of how power systems reshape organization. Water-powered manufacturing was constrained by geography: suitable sites were limited and capacity depended on stream flow and the height of the fall. Steam altered that constraint because establishments using steam could be more “footloose”—freer to locate away from water sites as fuel became cheaper and transport improved. The payoff of steam was not simply higher energy availability; it was a loosening of spatial and organizational constraints that enabled different kinds of establishments to form, scale, and coordinate. This same literature also clarifies why it is historically normal for a new technological regime to arrive in phases. Steam power “began to diffuse in earnest” in the period 1850–1880, and after 1880 electrical power increasingly became an alternative. This sequencing matters for interpreting David and Wright’s puzzle: electrification did not enter an empty factory landscape. It entered an industrial system already reorganized by the preceding steam regime—its plant sizes, its division of labor, its managerial hierarchies, and its capital structures. 
A new power system layered atop an existing one does not immediately reveal its full potential, because legacy architectures and organizational habits persist. The “factory regime” changes slowly, and it changes in ways that are often more managerial than mechanical. David and Wright press this point further by emphasizing that leader–follower differences in GPT diffusion are shaped by institutional and policy context. Their cross-national comparison (U.S. versus the UK) is not offered as a cultural footnote; it is a mechanism. They underscore “the important role of the institutional and policy context with respect to the potential for upgrading the quality of the workforce” in the branches most affected by electrification. The implication is that even when two economies have access to the same general-purpose technology, their productivity trajectories can diverge depending on complementary capabilities: workforce-upgrading institutions, management practices, and the ability to restructure production without prohibitive friction. For business leaders confronting contemporary GPT candidates—whether AI systems, automation stacks, or platform technologies—the puzzle of the dynamo is therefore not an antiquarian story. It is a template for disciplined expectations. The correct inference from “a powerful tool exists” is not “productivity will immediately surge.” The correct inference is conditional: productivity will surge if and when a new regime of production—organizational, managerial, and infrastructural—has been built around the tool. David and Wright’s historical episode is valuable precisely because it shows the lag, shows the discontinuity, and insists that the discontinuity is a socio-technical outcome rather than a purely technological one. The ghost of the electric dynamo, in that sense, is the recurring lesson that adoption is not transformation. Transformation is redesign. 
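The magnitude of the 1919–1929 surge is easier to feel once compounded. A constant growth rate above 5% per year sustained for a decade implies cumulative TFP growth of more than 60%; the short sketch below just does that arithmetic (treating "more than 5%" as exactly 5%, a deliberately conservative assumption).

```python
# Cumulative effect of sustained productivity growth: a constant annual
# rate g over n years compounds to (1 + g)**n - 1 total growth.

def cumulative_growth(annual_rate, years):
    return (1 + annual_rate) ** years - 1

# David and Wright's figure: more than 5% per annum over 1919-1929.
# Using exactly 5% as a conservative lower bound for the decade:
decade = cumulative_growth(0.05, 10)
print(f"{decade:.0%}")  # → 63%
```

Even at the lower bound, a decade of such growth raises manufacturing efficiency by nearly two-thirds, which is why the end of the "productivity pause" registers as a discontinuity rather than a drift.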
Institutional Resilience

If “The Puzzle” established the basic historical mechanism of “retooling lag,” then the next question is what actually carries an economy—or a firm—through that lag. The uncomfortable reality is that general-purpose technologies do not reward mere exposure. They reward institutional resilience: the capacity to reorganize production, rebuild complementary systems, and upgrade skills in ways that are often only loosely correlated with formal schooling. The literature surveyed here converges on that point from three angles: electrification as a factory-regime transition; steam as a measurable proxy for early industrial technology diffusion; and establishment-scale evidence that demonstrates how new power systems selectively advantaged organizations capable of absorbing and structuring them.

From Steam to the Dynamo — Why Power Revolutions Don’t “Show Up” on Schedule

The modern temptation is to treat energy and information technologies as plug-and-play upgrades: install the new tool, then watch productivity rise. The history of industrial power is a corrective. Each major power transition—water to steam, steam to electricity—did not merely swap one input for another. It changed where production could happen, how factories were laid out, how firms scaled, which skills mattered, and how management coordinated work. The headline inventions were real, but the productivity payoff arrived only after a slower, harder process: rebuilding the production regime around the new power source.

Water power: the geography of constraint

Early industrial production depended heavily on water, not because water was “better,” but because it was available. The waterwheel converted a local stream’s flow and fall into rotary motion that could drive machinery. That created a basic constraint that shaped entire industries: the factory had to live where the river allowed it.
Suitable sites were limited; capacity was capped by stream flow and seasonality; and growth often meant squeezing more activity into the same constrained geography. Water power created early industrial clusters, but it also imposed a ceiling on flexibility and scale.

Steam power: portable energy and the rise of the “footloose” factory

Steam engines broke the tyranny of place. By converting heat (typically from coal) into mechanical energy, steam allowed factories to move closer to labor pools, transport links, raw materials, and markets—even when those were far from fast-moving water. This “footloose” character of steam mattered as much as its horsepower. It made manufacturing less dependent on a handful of river sites and more dependent on organizational decisions about logistics, labor, and capital.

Steam’s diffusion was not instantaneous. It began with early designs aimed at practical problems—especially pumping water from mines—then expanded as engines improved, costs fell, and applications widened. One useful way to see the scale of the transition is simply to count installations: by 1800, England had thousands of engines in place (2,207 built and installed), concentrated in industrial counties and scarce in agricultural ones. That uneven geography is important: steam did not “transform the economy” uniformly; it transformed the places and industries that built complementary capability around it.

Steam also changed the factory from the inside. In the first phase, power still tended to be centralized: one engine, a network of shafts and belts, and machines arranged around the mechanical transmission system. The factory’s physical layout often reflected the path of power distribution rather than the ideal flow of materials. Steam could increase output, but it did not automatically produce the modern assembly line or the modern managerial hierarchy. Those were later complements—organizational innovations built on top of the new power option.
Electricity and the dynamo: a different kind of power system

If steam liberated the factory from the river, electricity liberated the factory from the engine room. The electric dynamo—an electrical generator that converts mechanical energy into electrical energy—made it possible to produce and distribute power in a form that could be transmitted, subdivided, and delivered precisely where it was needed. Instead of sending motion through belts and shafts across a building, electricity could be routed as current and converted back into motion by motors at individual machines.

That seems like a technical detail, but it changed the logic of factory design. Steam-era factories were often designed around a centralized prime mover; electricity allowed decentralized power at the point of use. That meant machines could be rearranged for workflow, materials handling, and coordination rather than for mechanical connectivity. It also enabled new forms of lighting, control systems, and, eventually, more continuous and specialized production lines. Electrification, in this sense, was not merely a “more efficient engine.” It was a platform that enabled a new factory regime.

David and Wright’s account of electrification is explicit about why resilience matters. They describe a “productivity pause” of roughly three decades followed by a discontinuity in which U.S. manufacturing TFP expanded at more than 5% per annum between 1919 and 1929, as a new “factory regime based upon the electric dynamo” took hold. The core interpretive move is that the dynamo itself is insufficient as an explanation.
They argue that “a purely technological explanation… is inadequate,” because it neglects “the interrelationships… between managerial and organizational innovations and the new dynamo-based factory technology.” In other words, the productivity surge is inseparable from the institutional capacity to redesign workflows, reorganize production, and align labor markets and management practices to a new power system. The “resilient” unit is not a gadget; it is an organizational system capable of regime change. That claim can sound abstract until it is paired with a technology whose diffusion can be mapped and measured. De Pleijt, Nuvolari, and Weisdorf provide that proxy for England’s first industrial revolution by using steam engines per person as an indicator of technological change. Their dataset documents that by 1800 a total of 2,207 steam engines had been built and installed across England. Steam is not merely an invention in their framing; it is a measurable local intensity of industrial capability, with substantial geographic variation between industrial centers and agricultural counties. What matters for institutional resilience is what steam does to human capital formation—because GPT transitions force societies and firms to reallocate labor capabilities, not merely to adopt machinery. Their empirical results are carefully structured around a distinction that modern executive discourse often blurs: working skills versus primary schooling. Using occupational classifications to measure work skill content, they find a causal relationship between steam intensity and working skills. The benchmark they provide is concrete: raising a county from zero engines to 0.44 engines per 1,000 persons (Yorkshire West Riding, the 85th percentile) would lead to a 13 percentage-point decline in the share of unskilled workers, relative to a mean of 42%. That result is not merely correlational in their approach; it is framed as causal. 
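That benchmark can be made tangible with a back-of-envelope calculation. The sketch below simply applies the reported marginal effect to the sample mean; it is a linear extrapolation for intuition only, not a re-estimation of the authors' model.

```python
# Back-of-envelope application of de Pleijt, Nuvolari, and Weisdorf's
# benchmark: moving a county from 0 to 0.44 steam engines per 1,000
# persons is associated with a 13 percentage-point fall in the share of
# unskilled workers, against a sample mean of 42%. Treating the effect
# as linear (an illustrative simplification):

mean_unskilled_share = 0.42   # sample mean share of unskilled workers
effect_pp = 0.13              # estimated decline, in percentage points

implied_share = mean_unskilled_share - effect_pp
relative_drop = effect_pp / mean_unskilled_share

print(f"implied unskilled share: {implied_share:.0%}")  # → 29%
print(f"relative reduction:      {relative_drop:.0%}")  # → 31%
```

In other words, the counterfactual shift removes roughly a third of the unskilled share, which is why the authors treat steam intensity as a genuinely skill-shifting force rather than a marginal one.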
The implication for “institutional resilience” is that technological transitions can—and often do—pull labor markets toward more skill-intensive work content. But the same benchmark also makes clear that resilience is not the same thing as schooling expansion. In the same counterfactual shift to 0.44 engines per 1,000 persons, they estimate that the number of primary schools per 1,000 persons would fall by 64%, relative to a sample mean of 0.47 schools per 1,000. They also report that the same increase in steam intensity would increase gender inequality in literacy by 11 percentage points, relative to a sample mean of 18%. Read together, these findings force a more precise interpretation of human capital during early industrial transitions: technology can be skill-demanding in the workplace while remaining neutral or even adverse to broad-based primary education metrics—especially for marginalized groups. Institutional resilience in this period did not necessarily mean “more schooling”; it often meant the formation of applied industrial competencies embedded in production itself. This distinction matters for how business leaders think about capability-building in modern GPT waves. If working skills can expand while primary schooling indicators stagnate or worsen, then the operative unit of adaptation is frequently the firm, the industry, or the local production ecosystem rather than the generalized education system. Put plainly: the capability that matters is the one that shows up in the workflow. The establishment-level evidence from nineteenth-century American manufacturing adds a third layer to the same argument: resilience is distributed unevenly across organizations, and scale interacts with technology adoption. Atack, Bateman, and Margo document that by 1880 “slightly more than half” of manufacturing workers were in establishments using steam power, compared with 17 percent in 1850.
They also find that steam-powered establishments had higher labor productivity, and that the productivity differential was increasing in establishment size. This is not a claim that steam automatically made every producer more productive. It is a claim that the organizational capacity to adopt and structure steam power—more likely in larger establishments—was a determinant of productivity outcomes. Technology diffusion in the aggregate can therefore conceal a highly uneven distribution of payoff, concentrated among organizations that can marshal capital, restructure production, and operationalize complementary processes. David and Wright make the institutional dimension explicit by comparing the dynamics of electrification in leader and follower economies, arguing that cross-national differences illuminate “the important role of the institutional and policy context with respect to the potential for upgrading the quality of the workforce” in the affected branches of industry. This is the deeper meaning of resilience: not simply the presence of a technology, but the capacity—through policy, institutions, management, and labor-market structures—to upgrade the workforce where the technology lands. Taken together, these findings impose a disciplined conclusion for modern business leaders. A GPT transition is not a contest of who buys the newest tool first; it is a contest of which institutions can endure the lag and perform the redesign. Electrification yielded a postwar surge only after a “factory regime” changed; steam induced measurable shifts in working skill composition without necessarily raising primary schooling; and steam-powered productivity advantages in the United States grew with establishment size. Institutional resilience, in this literature, is therefore not a motivational slogan. 
It is an empirical description of the complement that turns invention into output: organizational reconfiguration, skill formation embedded in production, and the capacity to scale the new regime without collapsing under its coordination burden.

The Long View: The “Great Divergence” and Real Wages

The temptation in business writing about technology is to treat competitive outcomes as a sequence of breakthroughs: a tool arrives, a firm adopts it, productivity rises, and leadership changes hands. That story is not merely incomplete; it is chronologically misleading. When Robert Allen traces European wages and prices from the late Middle Ages to the First World War, his central finding is that the divergence visible in the mid-nineteenth century did not originate in the nineteenth century at all. Instead, “the divergence in real incomes observed in the mid-nineteenth century was produced between 1500 and 1750 as incomes fell in most European cities but were maintained (not increased) in the economic leaders.” The apparent drama of industrialization sits atop an earlier, quieter story: the long century-and-a-half in which leaders did not leap ahead so much as refuse to slip backward.

That historical rearrangement matters for a modern executive audience because it shifts the implicit model of advantage. In Allen’s synthesis, the dominant pattern in early modern Europe is not uniform improvement but divergence. Real wages “declined by half on the continent, while remaining roughly constant in northwestern Europe.” This is not the language of a sudden technological miracle. It is the language of resilience: maintaining living standards while others experience prolonged erosion. In competitive terms, it suggests that leadership is often secured not by one discontinuous innovation, but by a sustained capacity to preserve purchasing power and institutional effectiveness through long periods of pressure.
Allen’s method is designed to make those long arcs visible rather than accidental. He traces wages and prices in European cities from the fourteenth century to the First World War. A short horizon makes almost any movement look like a revolution. A seven-century horizon forces the observer to distinguish between cycles, noise, and structural breaks. That distinction becomes especially important when the discussion turns to the Industrial Revolution, because the modern imagination tends to treat it as a singular turning point in living standards. Allen’s London mason series is a corrective. When plotted over 1264–1913, what looks dramatic on a fifty-year chart can nearly disappear on a seven-hundred-year chart. Allen observes that the “rise in living standards” from 1815 to 1850 was “almost undetectable” in the longer series; it was “one of many minor fluctuations” and “a minor [cycle]… not a trend.” The implication is not that industrialization was economically irrelevant. It is that the distributional and wage effects commonly attributed to the “classic” period of industrial revolution were neither immediate nor uniformly transformative in real-wage terms. If one wants to understand when working-class living standards moved decisively, Allen points elsewhere. In his early synthesis, Allen emphasizes that the continent’s broad escape from early modern living standards was late. “It was only between 1870 and 1913 that the standard of living in the industrialized parts of the continent rose noticeably above early modern levels.” His long-run narrative again resists the breakthrough caricature. The decisive improvements arrive not at the moment of invention but after prolonged institutional and economic adjustment. 
In the London series, Allen frames the decisive features as “the maintenance of a high real wage after economic development gathered strength in the seventeenth century, and the sharp rise in living standards between 1870 and the First World War.” The long view thus contains two related claims: first, leadership was established by resisting decline in 1500–1750; second, mass improvements in living standards arrived much later, between 1870 and 1913. For business leaders attempting to interpret technological change today, this long view offers a disciplined way to think about what “wins” actually look like. The modern debate over new technologies often swings between two extremes. On one side, an assumption of instantaneous productivity miracles; on the other, an anxiety that automation implies social collapse. Allen’s wage history encourages a different posture: outcomes unfold over long horizons, and the early determinants of divergence may be quieter than the later manifestations. A region can “win” for a century by maintaining real wages while others decline, and only later see that advantage manifest as visible divergence. This is the point at which “technology” must be placed in its proper role. In the common executive narrative, technology is treated as the primary driver of leadership. In Allen’s account, however, the crucial early divergence is produced in 1500–1750, before the nineteenth-century industrial breakthroughs that dominate popular imagination. If that chronology is correct, then technology in the narrow sense cannot be the whole explanation for leadership; the maintaining institutions—those that prevent wage collapse or enable a high real wage to persist—do much of the work. And when technologies later arrive, they may amplify that institutional positioning rather than overturn it. The broader managerial implication is not that leaders should ignore innovation. 
It is that leaders should resist interpreting technology as a magic wand that immediately produces supremacy. When Allen shows that the 1815–1850 period is “almost undetectable” in the seven-century wage series, he is not denying progress; he is warning against misreading short windows. In corporate strategy terms, it suggests that the most consequential work may occur in the less visible domains: preserving real purchasing power, building durable capabilities, and maintaining organizational resilience through long periods in which “breakthroughs” are present but their payoff is delayed or uneven.

Finally, Allen’s chronology implies a sober warning about competitive complacency. If divergence can be produced by long periods of relative decline among laggards, then the absence of dramatic improvement in any given decade does not mean the competitive map is stable. It may mean that one set of institutions is quietly maintaining its position while another is eroding. In the same way that real wages “declined by half on the continent” while remaining roughly constant in leaders, firms can lose ground not because a rival achieved a visible leap, but because they themselves failed to maintain the conditions that sustain performance over time. This is the long view’s central discipline: measure outcomes over horizons long enough to distinguish cycles from regimes; treat leadership as resilience before it is acceleration; and understand technology not as an automatic equalizer, but as a lever that tends to magnify the strengths and weaknesses of the institutions that adopt it.

The Pessimist’s Fallacy

The modern argument for labor-market alarmism typically proceeds as if its conclusion were self-evident: because digital technologies and artificial intelligence appear to advance rapidly, they must be producing a correspondingly rapid and unprecedented reallocation of work. Yet the empirical record assembled by Atkinson and Wu does not support that leap.
Their historical series indicates that occupational churn peaked at over 50% in 1850–1870 and fell to around 10% in the last 15 years of their dataset. In the same summary framing, they note that churn levels in the last 20 years are just 38% of 1950–2000 levels and 42% of 1850–2000 levels. If the concern is that we have entered uniquely unstable terrain, the first discipline is to acknowledge that measured churn is historically subdued. What, then, sustains the intensity of pessimism? Atkinson and Wu argue that it is not merely a disagreement about the pace of innovation, but a recurring pattern of reasoning errors. Two are central. First, pessimists assume that robots can do most jobs “when in fact they can’t.” Second, they assume that once a job is lost there are no second-order job-creating effects from increased productivity and spending. These assumptions are not minor details; they are the hinges on which the “jobless future” narrative turns. If either assumption fails, the inference from “automation exists” to “mass unemployment is imminent” becomes far less secure. The first assumption collapses because “jobs” are not unitary tasks. Occupations are bundles of activities—some routine, some contextual, some relational, some physical, some judgment-laden. Treating an occupation as “automatable” because a subset of its tasks can be performed by software is a category error; it confuses partial substitution with total replacement. Atkinson and Wu’s point that robots “can’t” do most jobs is therefore not a rhetorical dismissal. It is a reminder that the operational unit of a labor market is not a discrete task, but an occupational system embedded in workflows, institutions, and constraints. Business leaders recognize this distinction intuitively when they attempt to implement automation and find that bottlenecks migrate rather than vanish. 
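The churn index behind those figures is simple to state operationally: sum the absolute job changes across occupations and compare the total to overall employment. Here is a minimal sketch of that computation; the occupation counts are invented for illustration, and normalizing by start-period employment is my assumption about how the percentage figures are expressed.

```python
def occupational_churn(start: dict, end: dict) -> float:
    """Sum of absolute job gains in growing occupations and absolute job
    losses in declining occupations, as a share of start-period employment.
    (Sketch of the Atkinson-Wu definition; normalization is an assumption.)"""
    occupations = set(start) | set(end)
    reallocated = sum(abs(end.get(o, 0) - start.get(o, 0)) for o in occupations)
    return reallocated / sum(start.values())

# Illustrative two-period snapshot (thousands of jobs, invented numbers):
period_a = {"farm": 600, "factory": 300, "clerical": 100}
period_b = {"farm": 400, "factory": 450, "clerical": 150}

# farm -200, factory +150, clerical +50 -> 400 of 1,000 jobs reallocated
print(f"{occupational_churn(period_a, period_b):.0%}")  # prints "40%"
```

On the real series, this computation runs over decadal, Census-based occupational employment, where the hard part is crosswalking occupation codes across census years rather than the arithmetic itself.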
The second assumption—the denial of second-order effects—is the deeper and more consequential fallacy, because it mistakes the economy for a closed system with fixed demand. Atkinson and Wu argue that pessimists often ignore the way productivity reshapes purchasing power. When innovation allows firms and workers to produce more with the same inputs, wages can rise, prices can fall, or both. In any of those cases, real purchasing power expands. That expanded purchasing power becomes demand for other goods and services, which in turn requires labor. The labor market adjusts not by freezing and shrinking, but by reallocating across a shifting consumption frontier. This is not an abstract macroeconomic story; it is the recurrent mechanism by which advanced economies have historically absorbed technological change without collapsing into permanent joblessness. At this point, a skeptical reader may respond: even if second-order effects exist in principle, perhaps they no longer operate under AI. But Atkinson and Wu’s historical decomposition cautions against that leap. They explicitly state that “In no decade has technology directly created more jobs than it has eliminated.” Read literally, this means that direct, first-order technological displacement has always exceeded direct, first-order technological creation. Yet unemployment has not marched upward decade by decade. Why not? Because the economy’s net capacity to create jobs has historically depended on indirect channels—most importantly the productivity-driven expansion of purchasing power—rather than on the direct emergence of entirely new “technology occupations” that numerically offset those destroyed. Crucially, Atkinson and Wu add that this induced job growth shows up “more so in existing occupations.” This is an important corrective to a common business cliché: that technological revolutions work primarily by creating brand-new job categories to replace those lost. 
The evidence base in their account points elsewhere. The offsetting employment expansion often occurs by enlarging familiar occupational domains—health care roles, retail roles, service roles, administrative roles—rather than by replacing a lost job with a newly invented one on a one-for-one basis. Pessimism becomes more plausible when one expects the replacement to be visible as new named occupations; it becomes less plausible when one recognizes that labor-market absorption often occurs through scale increases within the ordinary categories that already exist. Their periodization strengthens the argument. In 2010–2015, they find that approximately 6 technology-related jobs were created for every 10 lost, and that this is the highest ratio (the lowest share of jobs lost to technology) of any period since 1950–1960. The significance is not that technology suddenly became job-creating in the direct sense—it did not. The significance is that, even within an era widely described as uniquely disruptive, the measured relationship between technology-linked job creation and job loss does not point toward escalating catastrophe. If anything, the period they describe suggests a comparatively modest displacement burden relative to earlier postwar decades. This line of reasoning also reframes the policy and managerial impulse to slow technological change “to protect jobs.” Atkinson and Wu’s own conclusion is that the principal danger for advanced economies is not too much churn, but too little—and therefore too little productivity growth. A labor market characterized by historically low occupational churn may feel socially calmer, but it can coincide with stagnating living standards if productivity fails to advance. The pessimistic narrative tends to treat “tranquility” as a virtue in itself; the historical view suggests tranquility can be symptomatic of dynamism gone missing. For business leaders, the practical implication is not to dismiss disruption, but to discipline it. 
The relevant executive question is not whether AI exists, or whether it is impressive, but whether the organization is prepared to convert productivity gains into durable demand, redeployed labor, and expanded output. That is, leaders must treat second-order effects not as automatic salvation but as an operating environment: productivity increases create room for new spending and new services, but only if firms are positioned to capture those opportunities and redesign work accordingly. Pessimism, in this sense, becomes a managerial failure mode: it focuses attention on first-order substitution while neglecting the historical channels through which economies and firms actually grow.

The pessimists’ fallacy, then, is not simply that they are “too negative.” It is that they adopt a model of technology that is both ahistorical and incomplete: they treat jobs as automatable monoliths rather than bundled activities; and they treat demand as fixed rather than elastic to productivity and purchasing power. The corrective offered by the churn record is not a guarantee of painless adjustment. It is a warning against building strategy on a story that the evidence does not support.

Interlude: What the government data says after 2015

Atkinson and Wu’s central empirical claim is retrospective: when the United States is measured in decades, occupational churn peaked in the mid-nineteenth century and has trended downward to modern lows. Their churn index is constructed from occupational employment shares in Census-based data and therefore cannot be mechanically extended with a simple government series through 2025, because occupation coding changes and the reconstruction problem is non-trivial. The practical question, then, is not whether a single official series replicates their exact decadal index through the present.
It is whether the broader government evidence since 2015 looks like what the alarmist narrative predicts: an accelerating storm of labor-market dislocation driven by AI-era technology. On that narrower question, the most visible federal measure of labor-market “motion” is not an occupational-share churn index but the Bureau of Labor Statistics’ Job Openings and Labor Turnover Survey (JOLTS), which reports monthly flow rates: hires, quits, layoffs/discharges, and total separations. While these flows cannot substitute for the Atkinson–Wu decadal churn metric, they do provide an official test of whether the labor market is entering a period of unusually violent turnover. By late 2025, the official JOLTS data point in the opposite direction. In December 2025, the hires rate was 3.3%, the quits rate was 2.0%, the total separations rate was 3.3%, and the layoffs and discharges rate was 1.1%. These are not the signatures of an economy experiencing “unprecedented” labor-market churn; they are consistent with a cooling and normalizing labor market in which the rate of voluntary job switching has receded to levels seen in earlier low-churn periods. The second pillar of the essay—the productivity paradox explained by retooling lags—also finds support in recent government releases. The BLS reports that labor productivity in the nonfarm business sector increased 2.3% in 2024, following 1.6% in 2023. More strikingly, in its revised Productivity and Costs release, BLS reports that nonfarm business labor productivity increased 4.9% in the third quarter of 2025 at a seasonally adjusted annual rate. 
The point is not to over-interpret any single quarter—productivity is volatile—but to observe that official statistics now show an upswing that is directionally consistent with the general-purpose-technology pattern emphasized in the historical literature: invention and early adoption first, followed by a lag as organizations and infrastructure reorganize, and then a measurable acceleration. The Federal Reserve Bank of Kansas City has recently framed this question explicitly. In its February 11, 2026 Economic Bulletin, the authors argue that since late 2022 labor productivity has risen notably above its pre-pandemic trend, but that the pickup is not yet broad-based: a small set of industries accounts for most gains. They also find that higher reported AI adoption is associated with faster productivity growth across industries, while emphasizing that diffusion is still unfolding. This matches the essay’s core interpretive claim: a “surge” is not a single technological event; it is a regime transition that spreads unevenly, concentrated first among early adopters with complementary capabilities and only later reflected broadly in aggregate statistics. Finally, the infrastructure dimension of the retooling-lag thesis has also become more legible. NERC’s long-term reliability assessment process, summarized in its January 2026 communications and amplified by utility-sector reporting, projects a sharp increase in peak electrical demand over the coming decade—described as a 24% increase from 2025 peak demand, with new data centers accounting for most of the projected increase. This is not “proof” of any single productivity narrative, but it does reinforce the broader structural point: the path from software capability to economy-wide productivity is mediated by physical systems—power, transmission, interconnection, and build timelines—that do not move at the speed of AI product launches. 
Read together, the post-2015 federal data do not contradict the earlier historical interpretation. If anything, they strengthen it. The labor market does not display the official signs of unprecedented churn; and productivity behavior increasingly resembles the lag-then-acceleration pattern associated with prior general-purpose technologies.

Conclusion and a Better ‘Productivity Reset’ Memo

Taken together, the historical record in this literature urges business leaders to separate the felt experience of technological acceleration from the measured experience of labor market transformation. The popular story assumes we are entering an era of uniquely violent “creative destruction.” Yet occupational churn in the United States is not rising toward historic extremes; it has moved in the opposite direction. Churn peaked at over 50% in 1850–1870 and has fallen to around 10% in the most recent 15 years of the Atkinson–Wu series, with the last 20 years’ churn running at 38% of 1950–2000 levels and 42% of 1850–2000 levels. This does not deny that specific firms, communities, or occupations can experience painful dislocation. It does, however, challenge the executive reflex to treat today’s labor market as historically unmoored.

That reframing also clarifies what leaders should worry about. Atkinson and Wu’s deeper contention is that the central macroeconomic problem is not excessive churn, but insufficient productivity growth—and that net job creation has historically been driven less by technology “directly creating” new roles than by second-order purchasing-power effects, often expanding employment within existing occupations rather than continuously birthing entirely new ones. The strategic implication is not complacency about displacement, but sobriety about how economies and firms actually absorb general-purpose technologies. This is where the electrification analogy becomes operational rather than rhetorical.
David and Wright describe a long “productivity pause” of roughly three decades before the surge of the 1920s, when U.S. manufacturing TFP rose at more than 5% per year (1919–1929) alongside a new factory regime “based upon the electric dynamo.” The lesson is not that technology fails, but that returns frequently depend on complementary redesign—managerial, organizational, and infrastructural—without which revolutionary tools remain locally impressive yet macroeconomically muted.

The first industrial transitions make the same point through different evidence. Steam power diffused unevenly and rewarded scale: by 1880, “slightly more than half” of manufacturing workers were in steam-powered establishments, up from 17% in 1850, and the productivity differential associated with steam increased with establishment size. Meanwhile, steam adoption is linked to a shift in the composition of human capital: de Pleijt, Nuvolari, and Weisdorf find steam intensity raising working skills even as primary schooling outcomes do not rise in parallel; their benchmark shift to 0.44 engines per 1,000 persons corresponds to a 13 percentage-point reduction in unskilled share (mean 42%). Technology, in this telling, changes what organizations demand from people—and not always in the way modern credential narratives predict.

Finally, Allen’s wage history offers a long-horizon warning: durable advantage is often built in periods that look, at the time, like mere maintenance. The divergence visible in the nineteenth century was largely produced in 1500–1750, as leaders held ground while others fell back. For firms confronting AI, the point is not to romanticize the past but to internalize its structure: general-purpose technologies magnify institutional differences. The core executive problem is therefore not managing a mythical tidal wave of churn, but building the organizational conditions under which productivity gains become real—and durable.
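To fix the magnitude of that 1919–1929 surge in mind: 5% per year, the floor of the David and Wright figure, compounds to roughly 63% over the decade. A quick arithmetic check (pure compounding, with no claim beyond the quoted rate):

```python
# Compound the 1919-1929 manufacturing TFP growth floor quoted from
# David & Wright: "more than 5% per year" over a ten-year span.
rate, years = 0.05, 10
cumulative = (1 + rate) ** years - 1  # 1.05**10 - 1
print(f"{cumulative:.0%}")  # prints "63%": cumulative growth at the 5% floor
```

A decade at that rate is not an incremental improvement; it is close to a two-thirds expansion of manufacturing productivity, which is why the literature treats the 1920s as a regime change rather than a cycle.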
To: E-Staff
Subject: AI as a factory-regime shift: questions for E-Staff + 30-day deliverables

Team —

I’ve been reviewing the historical evidence on general-purpose technologies (steam → electricity → modern digital systems) and what it implies for AI. The main takeaway is straightforward: the biggest risk is not a sudden, economy-wide “jobless future,” but a slow, uneven productivity transition where winners redesign their operating system and laggards only add tools. We’re going to treat AI as a factory-regime shift — a redesign of workflows, management systems, and infrastructure — not a collection of pilots.

Come to the next E-Staff prepared to address the following.

1. Where are we in the retooling lag?
What 3–5 workflows most constrain throughput, cycle time, quality, or cost?
What is the true bottleneck in each (decision latency, coordination drag, data friction, compliance, systems, etc.)?
What would have to change for AI to move the bottleneck — not just accelerate a step?

2. What is our current operating regime?
Where is work actually decided?
Where do we pay coordination tax (handoffs, approvals, rework)?
What legacy constraints (systems, org design, incentives) are we still organized around?

3. Are we measuring productivity — or activity?
What 3–5 metrics truly reflect output per unit input?
Which current metrics can rise while productivity stays flat?
What would constitute credible gains in 90 days vs. 12–18 months?

4. What is limiting scale?
Top technical constraint (data, permissions, integration, security, cost).
Top organizational constraint (governance, incentives, talent, process).
Which pilots are scaling vs. stalling — and why?

5. Where will advantage concentrate?
What capabilities would make AI returns compound here?
Where could competitors leap ahead through redesign?
What is our resilience plan if gains are uneven across units?

30-Day Deliverable (1–2 page memo, no deck)
Top 3 workflows to redesign (include baseline metrics + target deltas).
Two biggest scaling constraints (one technical, one organizational) + decision required to remove each.
One proposed “factory-regime” change (policy, incentives, governance, org).
A 90-day measurement plan proving real productivity improvement.

We won’t win by being early to tools. We’ll win by being early to redesign.

Appendix A. Sources

I. Literature Review & Contextual Placement

Allen, Robert C. “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War.” Explorations in Economic History, vol. 38, no. 4, 2001, pp. 411–447. ScienceDirect, doi:10.1006/exeh.2001.0775. Accessed 1 Mar. 2026.

Atack, Jeremy, Fred Bateman, and Robert A. Margo. “Steam Power, Establishment Size, and Labor Productivity Growth in Nineteenth-Century American Manufacturing.” Explorations in Economic History, vol. 45, no. 2, 2008, pp. 185–198. ScienceDirect. Accessed 1 Mar. 2026.

Atkinson, Robert D., and John Wu. False Alarmism: Technological Disruption and the U.S. Labor Market, 1850–2015. [Data (XLSX)] Information Technology and Innovation Foundation, 8 May 2017, itif.org. Accessed 1 Mar. 2026.

Çakır Melek, Nida, and Sydney Miller. “A New U.S. Productivity Chapter? What Industry Data Say About AI.” Economic Bulletin, Federal Reserve Bank of Kansas City, 11 Feb. 2026, kansascityfed.org. Accessed 1 Mar. 2026.

David, Paul A., and Gavin Wright. “General Purpose Technologies and Surges in Productivity: Historical Reflections on the Future of the ICT Revolution.” The Economic Future in Historical Perspective, edited by Paul A. David and Mark Thomas, Oxford UP, 2003. Oxford University Research Archive. Accessed 1 Mar. 2026.

---. “General Purpose Technologies and Surges in Productivity: Historical Reflections on the Future of the ICT Revolution.” Discussion Papers in Economic and Social History, no. 31, University of Oxford, Sept. 1999. RePEc. Accessed 1 Mar. 2026.

de Pleijt, Alexandra, Alessandro Nuvolari, and Jacob L.
Weisdorf. “Human Capital Formation during the First Industrial Revolution: Evidence from the Use of Steam Engines.” Journal of the European Economic Association, vol. 18, no. 2, 2020, pp. 829–889. Oxford Academic, doi:10.1093/jeea/jvz006. Accessed 1 Mar. 2026.

North American Electric Reliability Corporation. Long-Term Reliability Assessment. NERC, Jan. 2026, nerc.com. Accessed 1 Mar. 2026.

U.S. Bureau of Labor Statistics. “Job Openings and Labor Turnover—December 2025.” Job Openings and Labor Turnover Survey (JOLTS) News Release, U.S. Department of Labor, 5 Feb. 2026, bls.gov. Accessed 1 Mar. 2026.

---. “Job Openings and Labor Turnover Summary.” JOLTS News Release Tables, U.S. Department of Labor, 5 Feb. 2026, bls.gov. Accessed 1 Mar. 2026.

---. “Productivity up 2.3 Percent in 2024.” The Economics Daily, U.S. Department of Labor, 12 Feb. 2025, bls.gov. Accessed 1 Mar. 2026.

---. Productivity and Costs: Third Quarter 2025, Revised. U.S. Department of Labor, 29 Jan. 2026, bls.gov. Accessed 1 Mar. 2026.

Utility Dive. “NERC Forecasts Peak Demand to Rise 24% on New Data Center Loads.” Utility Dive, 30 Jan. 2026, utilitydive.com. Accessed 1 Mar. 2026.

These papers reside at the intersection of Economic History and Labor Economics. They bridge the gap between “Cliometrics” (the use of economic theory and data to study history) and modern policy debate. Atkinson & Wu (2017) and David & Wright (1999) act as “long-view” anchors. They use historical precedents (the electric dynamo and 19th-century churn) to challenge modern anxieties about AI and automation. Allen (2001) and de Pleijt et al. (2020) focus on the Great Divergence and the British Industrial Revolution, exploring why certain regions succeeded and how early tech (steam) shaped human capital. Atack et al. (2008) provides a micro-level view of the transition from craft labor to factory production, specifically looking at how the “power source” (steam vs. water) dictated the organization of work.

II.
Evaluation of Position and Potential Bias

Paper | Institutional Origin | Position / Potential Bias

Atkinson & Wu | ITIF (Think Tank) | Pro-Innovation / Techno-Optimist. Explicitly aims to debunk “false alarmism.” Funded by tech industry interests, its goal is to argue against restrictive regulation of automation.

David & Wright | Academic (Oxford/Stanford) | Historical Analogy. Aims to explain the “Productivity Paradox.” Bias is toward the “Delayed Effect” theory—arguing that we shouldn’t judge new tech too early.

Allen | Academic (Oxford) | Materialist/Economic Determinist. Focuses on “high-wage economy” theories. His bias is toward price/wage data as the primary driver of innovation.

de Pleijt et al. | Academic (International) | Empirical/Revisionist. Focuses on the “skill-biased” nature of steam. No overt political bias, but leans toward a more nuanced view of “Human Capital.”

Atack et al. | Academic (NBER/Vanderbilt) | Quantitative Historian. Focuses on the “de-skilling” hypothesis. Their position is neutral-analytical, testing if steam specifically promoted the “factory system.”

III. Quality of Datasets

Atkinson & Wu: Uses IPUMS (Census) data. Extremely high quality for 1850–2015, though 19th-century occupational codes require significant “cross-walking” (mapping old titles to new ones), which introduces some estimation error.

David & Wright: Relies on secondary TFP (Total Factor Productivity) data and electrification statistics from the early 20th century. High quality but limited by the less granular data available in the 1920s.

Allen: Uses a massive assembly of price and wage series across 500 years. This is the “gold standard” for historical price data, though “welfare ratios” are based on “representative baskets” that may not reflect every worker’s reality.

de Pleijt et al.: Combines Steam Engine registries (1800) with HISCLASS occupational skills. This is a highly innovative “synthetic” dataset.
It is robust for county-level analysis but limited by the scarcity of literacy data.

Atack et al.: Uses the “McLane Report” (1832) and Census of Manufactures (1850–1880). These are the most granular records of early American industry, though they often exclude very small shops (under $500 in output).

IV. Core Questions Explored

GPT Effects: Does a “General Purpose Technology” (Steam, Electricity, ICT) immediately change labor, or is there a lag?
Skill Bias: Is tech “skill-demanding” (requiring more education) or “skill-saving” (replacing artisans with machines)?
Churn vs. Stability: Are we currently in an era of “unprecedented” job destruction, or is “churn” actually slowing down?
The “Why” of Tech: Did high wages cause the Industrial Revolution (inducing tech to save labor), or did tech cause high wages?

V. Summary of Findings

1. Dominant Findings
The Productivity Lag: Both the electric dynamo (David) and steam (de Pleijt) took decades to show significant productivity gains. Tech requires “organizational learning” before it pays off.
High Wage Induction: Innovation happens where labor is expensive. Allen finds that the Industrial Revolution happened in England because labor was dear and energy (coal) was cheap.
Churn is Low: Atkinson & Wu find that occupational churn in the U.S. labor market is actually at historic lows, contradicting the “AI is taking all jobs” narrative.

2. Additional Findings
Factory Scale: Atack et al. prove that steam power was the primary driver for increasing the size of the workplace. Water power limited where a factory could be; steam allowed factories to move to cities and grow.
Capital Saving: David & Wright find that the “Electric Revolution” wasn’t just about labor; it was “capital-saving,” allowing businesses to get more out of their physical buildings and machines.

3. Counter-Intuitive Findings
Steam as a “De-skiller”: While we think of tech as “high-skill,” 19th-century steam was often skill-saving.
It allowed unskilled "machine tenders" to replace highly skilled artisans (Atack et al.).
- The literacy penalty: In areas with high steam-engine adoption, literacy rates actually fell or stagnated (de Pleijt). The demand for "working skills" (operating a machine) was so high that it discouraged formal schooling.
- The Information Age myth: Atkinson & Wu show that the "Information Age" (2000–2015) has seen less than half the occupational churn of the "Motor Age" (1950s), suggesting our current era is one of relative labor-market stagnation, not hyper-disruption.

Appendix B — Evidence Base by Section

Appendix B.1 — Section I: The Future That Already Happened (Occupational Churn)

Popular premise being tested ("article of faith")
Claim: It has become "an article of faith" that advanced economies face "almost unprecedented" labor-market disruption and that technology is driving a relentless pace of Schumpeterian "creative destruction."
Source: Atkinson & Wu (2017), opening framing, p. 1.

Operational definition of "occupational churn"
Claim: Occupational churn is defined as "the sum of the absolute values of jobs added in growing occupations and jobs lost in declining occupations."
Source: Atkinson & Wu (2017), definition, p. 1.

Headline conclusion: churn is historically low
Claim: "Levels of occupational churn in the United States are now at historic lows."
Source: Atkinson & Wu (2017), executive summary, p. 1.

Peak versus modern levels (magnitude and timeframe)
Claim: Occupational churn peaked at over 50% in 1850–1870 and fell to around 10% in the most recent 15 years of their series (ending 2015).
Source: Atkinson & Wu (2017), findings summary, p. 2.

Relative comparison to earlier eras
Claim: Churn in the last 20 years was 38% of 1950–2000 levels and 42% of 1850–2000 levels.
Source: Atkinson & Wu (2017), executive summary, p. 1.
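The operational definition quoted above is mechanical enough to compute directly. A minimal sketch with hypothetical occupation counts; expressing the result as a share of start-of-period employment is my assumption, made only so the output is comparable to the percentages Atkinson & Wu report:

```python
def occupational_churn(start: dict[str, int], end: dict[str, int]) -> float:
    """Jobs added in growing occupations plus jobs lost in declining
    occupations (the Atkinson-Wu numerator), expressed here as a share
    of start-of-period employment (the denominator is an assumption)."""
    occupations = set(start) | set(end)
    added = sum(max(end.get(o, 0) - start.get(o, 0), 0) for o in occupations)
    lost = sum(max(start.get(o, 0) - end.get(o, 0), 0) for o in occupations)
    return (added + lost) / sum(start.values())

# Hypothetical mid-19th-century-style decade: heavy reallocation.
start = {"telegraph operator": 400, "farm laborer": 1000, "machinist": 100}
end = {"telegraph operator": 100, "farm laborer": 900, "machinist": 500}
print(f"{occupational_churn(start, end):.0%}")  # 53% -> churn "over 50%"
```

Occupations whose headcount is unchanged contribute nothing, even if every worker in them turned over, which is why churn is a reallocation metric rather than a gross-turnover or anecdote index.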
Job-loss component also comparatively tranquil
Claim: In the last 15 years, job losses in declining occupations were 70% as many as in the first half of the 20th century, and "a bit more than half" as many as in the 1960s, 1970s, and 1990s.
Source: Atkinson & Wu (2017), findings summary, p. 2.

Macro interpretation: the bigger challenge is too little churn/productivity
Claim: The "single biggest economic challenge" is not too much churn, but too little churn and thus too little productivity growth.
Source: Atkinson & Wu (2017), motivation and takeaway, p. 2.

Appendix B.2 — Section II: The Puzzle (The Ghost of the Electric Dynamo)

The "productivity pause," then a discontinuity
Claim: After a "productivity pause" of some three decades, U.S. manufacturing TFP expanded at more than 5% per annum between 1919 and 1929.
Source: David & Wright (1999), early framing, p. 4.

What changed: a new factory regime
Claim: The 1920s discontinuity "reflected the elaboration and adoption of a new factory regime based upon the electric dynamo," described as a general-purpose technology (GPT) with broad fixed-capital savings and labor-productivity impacts.
Source: David & Wright (1999), abstract, p. 2.

Core methodological warning: not a purely technological story
Claim: A purely technological explanation is "inadequate," because the surge depended on structural labor-market change and interrelationships between managerial/organizational innovation and dynamo-based factory technology (plus macro conditions).
Source: David & Wright (1999), early discussion, p. 4.

The surge was broad-based ("yeast-like")
Claim: In the 1920s, 13 of 14 major manufacturing categories experienced an acceleration in multifactor productivity growth.
Source: David & Wright (1999), distribution across industries, p. 5.
Leader–follower differences depend on institutions
Claim: Cross-national comparison underscores "the important role of the institutional and policy context with respect to the potential for upgrading the quality of the workforce" in affected branches.
Source: David & Wright (1999), early discussion, p. 4.

Sequencing of power regimes (steam → electricity)
Claim: Steam power diffused in earnest in 1850–1880, while "after 1880, electrical power increasingly became an alternative."
Source: Atack, Bateman & Margo (2008), period selection rationale, p. 3.

Why power shifts reorganize production (footloose steam vs. site-bound water)
Claim: Water power constrained location to limited sites; steam made establishments more "footloose," loosening geographic and organizational constraints.
Source: Atack, Bateman & Margo (2008), early discussion, p. 3.

Appendix B.3 — Section III: Institutional Resilience (Complements, Skills, and Scale)

Electrification required complements; the "factory regime" mattered
Claim: The post-WWI productivity surge followed a long pause and depended on organizational/managerial complements; a purely technological explanation is inadequate.
Source: David & Wright (1999), pp. 2–4.

Scale of early steam diffusion in England (technology proxy)
Claim: By 1800, England had 2,207 steam engines built and installed.
Source: de Pleijt, Nuvolari & Weisdorf (2020), steam dataset description, p. 6.

Working skills effect (quantified benchmark)
Claim: Moving from 0 to 0.44 engines per 1,000 persons (Yorkshire West Riding, 85th percentile) implies a 13 percentage-point decline in unskilled share (mean 42%).
Source: de Pleijt, Nuvolari & Weisdorf (2020), results summary, pp. 3–4.

Primary schooling effect (quantified benchmark)
Claim: The same shift to 0.44 engines per 1,000 persons decreases primary schools per 1,000 persons by 64% (mean 0.47 schools per 1,000).
Source: de Pleijt, Nuvolari & Weisdorf (2020), results summary, pp. 3–4.
Gender inequality in literacy (quantified benchmark)
Claim: The same shift increases gender inequality in literacy by 11 percentage points (mean 18%).
Source: de Pleijt, Nuvolari & Weisdorf (2020), results summary, pp. 3–4.

Steam adoption and productivity scaled with establishment size (U.S.)
Claim: By 1880, "slightly more than half" of manufacturing workers were in steam-powered establishments, versus 17% in 1850; steam-powered establishments had higher labor productivity, and the differential increased with establishment size.
Source: Atack, Bateman & Margo (2008), abstract (and early discussion).

Institutional context shapes workforce upgrading in GPT diffusion
Claim: Institutional and policy context affects the potential for upgrading workforce quality in branches affected by GPT diffusion.
Source: David & Wright (1999), p. 4.

Appendix B.4 — Section IV: The Long View (Great Divergence and Real Wages)

Timing of divergence
Claim: The divergence visible in the mid-19th century was produced between 1500 and 1750, as incomes fell in most cities but were maintained (not increased) in leaders.
Source: Allen (2001), abstract/opening.

Magnitude and pattern of divergence
Claim: Real wages declined by half on the continent while remaining roughly constant in northwestern Europe.
Source: Allen (2001), early synthesis.

When living standards rose decisively
Claim: Only between 1870 and 1913 did living standards in industrialized parts of the continent rise noticeably above early modern levels.
Source: Allen (2001), early synthesis.

Industrial Revolution gains look small in long series
Claim: In the London mason series, the rise from 1815 to 1850 is "almost undetectable," "one of many minor fluctuations," "a minor cycle… not a trend."
Source: Allen (2001), discussion around long-run wage figures.

Long-run context matters (time horizon)
Claim: The London mason real-wage series is shown over 1264–1913, demonstrating which changes are structural breaks versus cycles.
Source: Allen (2001), discussion of the long-run series and figures.

Methodological scope for comparability
Claim: The paper traces wages and prices from the fourteenth century to the First World War and emphasizes collation across cities to reveal true patterns.
Source: Allen (2001), opening scope/method framing.

Leadership framing
Claim: Leaders are characterized by maintenance of high real wages once development strengthens (17th century onward), with a sharp rise between 1870 and the First World War.
Source: Allen (2001), interpretive framing near conclusion/figures discussion.

Appendix B.5 — Section V: The Pessimist's Fallacy (Logic Errors and Second-Order Effects)

Churn is historically low, not unprecedented
Claim: Churn peaked at over 50% in 1850–1870 and fell to around 10% in the last 15 years of the series; the last 20 years ran at 38% of 1950–2000 and 42% of 1850–2000 levels.
Source: Atkinson & Wu (2017), pp. 1–3.

Two core pessimists' assumptions
Claim: Pessimists assume robots can do most jobs "when in fact they can't," and assume no second-order job-creating effects from productivity and spending.
Source: Atkinson & Wu (2017), p. 1.

Direct creation never exceeds direct displacement
Claim: "In no decade has technology directly created more jobs than it has eliminated."
Source: Atkinson & Wu (2017), p. 2.

Where net job growth actually comes from
Claim: Net job growth stems from productivity-driven purchasing-power increases, and job creation occurs "more so in existing occupations."
Source: Atkinson & Wu (2017), p. 2.

2010–2015 ratio
Claim: In 2010–2015, about 6 technology-related jobs were created for every 10 lost, the highest ratio (i.e., the lowest share of jobs lost to technology) since 1950–1960.
Source: Atkinson & Wu (2017), findings summary, p. 3.

Macro risk framing
Claim: The biggest challenge is too little churn and thus too little productivity growth, not too much disruption.
Source: Atkinson & Wu (2017), p. 2.
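The direct-effects accounting behind the Appendix B.5 claims can be made explicit with the source's own 2010–2015 ratio. A toy sketch; the function name and the round-number input are illustrative, not from the paper:

```python
# Atkinson & Wu (2017): in 2010-2015, roughly 6 technology-related jobs
# were created for every 10 eliminated -- the most favorable ratio since
# 1950-1960, yet still below 1.0.
CREATED_PER_LOST = 6 / 10

def direct_net_change(jobs_lost_to_tech: int) -> int:
    """Net direct employment change from technology alone, ignoring
    second-order purchasing-power effects."""
    created = jobs_lost_to_tech * CREATED_PER_LOST
    return round(created - jobs_lost_to_tech)

# Even at the ratio's historical high point, direct creation never covers
# direct displacement; any net employment growth must therefore come from
# second-order demand effects.
print(direct_net_change(1_000_000))  # -400000
```

This is the arithmetic core of the pessimist's fallacy: stopping at the first-order ledger, which has been negative in every decade, while ignoring the demand-side channel that has historically produced net job growth.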
Appendix B.6 — Interlude: What the Government Data Says After 2015 (Bridge Module)

(This module treats the official releases as the quantitative spine and non-government reporting as context rather than a pillar.)

Metric clarification (not a perfect continuation of Atkinson–Wu)
Claim: No official BLS/Census series extends Atkinson–Wu's exact decadal occupational-share churn index through 2025, owing to occupation-code evolution and the complexity of reconstructing the series.

JOLTS as an official proxy for labor-market motion
Claim: JOLTS reports monthly flow rates (hires, quits, layoffs/discharges, total separations). These flows do not replicate the Atkinson–Wu metric, but they can test whether labor-market turnover is unusually "violent."

Late-2025 turnover levels (direction of change)
Claim: By late 2025, quits, hires, and total separations had slowed sharply from post-COVID peaks; levels are consistent with cooling and normalization rather than accelerating disruption.

Productivity acceleration (lag-then-uptick pattern)
Claim: Government productivity data show sluggish pre-2022 growth followed by clearer acceleration in 2023–2025, consistent with a retooling-lag story.

Diffusion framing (narrow early adopters first)
Claim: Fed analysis characterizes recent gains as concentrated in a narrow set of industries and still spreading, matching the GPT discontinuity logic.

Infrastructure mediation (compute/power as a physical constraint)
Claim: Grid/transmission and data-center constraints imply that AI-era productivity diffusion is mediated by physical build timelines, not software release cycles.

Appendix B.7 — Conclusion Evidence (Synthesis Claims)

Churn baseline (peak vs. modern)
Claim: Occupational churn peaked at over 50% in 1850–1870 and fell to around 10% in the last 15 years of the historical series.
Churn relative to prior eras
Claim: The last 20 years' churn is 38% of 1950–2000 levels and 42% of 1850–2000 levels.

Second-order mechanism
Claim: Direct technological creation does not exceed direct displacement in any decade; net job growth historically reflects second-order purchasing-power effects and is concentrated in existing occupations.

GPT lag and discontinuity
Claim: After a roughly three-decade pause, manufacturing TFP expanded at more than 5% per annum between 1919 and 1929 during the dynamo-based factory-regime shift.

Scale effects in power transitions
Claim: Steam adoption and productivity advantages were associated with larger establishments; by 1880 more than half of manufacturing workers were in steam-powered establishments, versus 17% in 1850.

Long-run divergence chronology
Claim: The divergence visible in the mid-19th century was produced in 1500–1750, with later decisive standard-of-living increases concentrated in 1870–1913.

---

Canonical: https://johnbrennan.xyz/essay/the-churn-myth