The Innovator’s Dilemma explains why so many well-established companies fail dismally when faced with the emerging markets they create. This summary focuses on one of the book’s central themes: disruptive innovation.
Genres
Entrepreneurship, Economics, Business, Industrial Management and Leadership, Development and Growth Economics, Customer Relations
Introduction: Learn about a key concept of economics.
Business cycles move fast. So fast, in fact, that theories about what’s going on rarely outlast them. Such theories “live and die like fruit flies” (The Economist). Every so often, though, an idea with lasting power comes along. An idea that won’t die. The concept of “disruptive innovation” is one of them.
Revolutions can be violent: if you want to create something utterly new, you have to break something. In economics, this is not an entirely new concept. A long time ago, in the 1940s, the Austrian-born economist Joseph Schumpeter came up with the term “creative destruction.” According to him, destruction can be a good thing, because it helps to advance and restructure the economy.
Half a century later, it was Clayton Christensen who offered a significant update to this idea. It’s hard to overstate the splash his book The Innovator’s Dilemma made when it was published in 1997. Steve Jobs said it had deeply influenced his thinking. Michael Bloomberg sent fifty copies to his friends. Andy Grove, the CEO of Intel, said it was the most important book of the decade. It sold over half a million copies within a year.
Why was the book so successful? Well, it predicted how a significant part of the economy would function in the new millennium – long before apps and e-commerce were omnipresent. And Christensen was right. Today, it feels obvious that innovation has a destructive side: Uber disrupted the conventional taxi system; Amazon disrupted the business of brick-and-mortar stores; and so many other companies are trying to do the same to their industries.
So, let’s zoom in. In this short summary, we’d like to focus on the key concept of The Innovator’s Dilemma: disruptive innovation.
Let’s start with a story.
Who needs cheap radios?
We’re in the United States and it’s the early 1950s. The war is over. People feel hopeful. The economy is booming. More households have more disposable income than ever before, and they’re spending it.
That’s good news for all kinds of industries, from carmakers to the manufacturers of refrigerators. It’s also great for consumer electronics companies like RCA and Zenith. One of their top sellers is the vacuum tube music console – a handsomely veneered cabinet with an integrated radio that sits at the center of middle-class living rooms across the nation.
These consoles are well-made, sturdy objects. More to the point, they’re highly engineered and sound great. All that makes them expensive, but that’s not a problem. This is an age of affluence, and people can afford to pay top dollar for what matters to them – quality. And so that’s what companies focus on. They tinker and improve and continue making big, expensive consoles that sound great.
And that’s when a small Japanese firm called Sony enters the picture. Founded in 1946 with around $6,000 start-up capital, it still has fewer than twenty employees. But Sony’s chairman, Akio Morita, has an idea.
He takes up residence in a cheap hotel in New York City and starts negotiating a license to patented transistor technology owned by the American telecommunications company AT&T. Morita gets his license, but AT&T executives are baffled by his plan to use the technology to build small radios. Why would anyone care about small radios, they ask. His answer is cryptic: “Let’s see.”
Sony’s portable transistor radio appears on the market in ’55. It’s a terrible radio. The static is so loud you can hardly hear the music, and the fidelity is far below that of the vacuum tube consoles. If you’re an affluent household that values sound quality, there’s no chance you’re buying a Sony radio! But what if you don’t have a lot of disposable cash? What if, in other words, you’re a typical American teenager? Well, the alternative to crappy transistor radios for ’50s teenagers is no radio at all, and so they start buying a lot of Sony radios!
You can probably guess where this story is going. Sony’s crappy radios give the company a crowbar to prize open the American market. And, slowly but surely, transistor technology improves. By the time it’s so good that it becomes interesting to more affluent market segments – those teenagers’ parents, say – it’s already too late for companies like RCA and Zenith to catch up to Sony.
That is how Sony came to dominate the radio market in the United States.
Convenience trumps quality.
Business analysts have a neat explanation of why established companies like RCA and Zenith end up losing out to upstarts like Sony. It goes like this.
Technological change is fast and furious; you have to run to stand still. Managers, though, often lose sight of this fact. They’re so focused on what works in the present that they fail to plan for the future. That’s how they get picked off. Call it complacency. Call it lack of innovation. Call it bad management.
But for Christensen, that isn’t the moral of the Sony story or any of the many other stories that follow the same pattern. When he looked at industries in which incumbents were overtaken by new entrants, he realized that technological breakthroughs were rarely the work of plucky start-ups – they were typically developed in the well-funded R&D departments of big companies. As we saw, Sony, a new entrant to the radio market, piggybacked the sophisticated technology of an established player – AT&T. Then there’s Kodak, the market leader in photographic film for much of the twentieth century before it was devoured by digital entrants. The first digital camera, though, was developed by a Kodak engineer in the late ’70s! There are countless other examples.
So, the real question isn’t why big companies fail to innovate – it’s why they don’t capitalize on the breakthrough technologies they often have a hand in developing. Christensen’s answer is that breakthrough technologies usually are worse than what already exists. Sony’s portable radios sounded terrible. The first cell phone cameras took awful pictures. The Corona, the car with which Toyota broke into the American mass market, couldn’t hold a candle to the vehicles rolling off GM’s and Ford’s production lines.
Christensen sees low-quality innovation of this kind as fundamentally disruptive. He compares it to “sustaining innovation” – the constant tinkering that leads to higher performance. To go back to radios, companies like RCA and Zenith were constantly innovating their core product, which sounded better and better over time. Sony disrupted that pattern. Akio Morita didn’t work in his lab until his transistor radios could compete with the radios made by the industry’s big hitters. Instead, he gambled on finding a new market which would value portability and low cost over quality.
It had to be a new market, too. Established companies’ customers aren’t interested in breakthroughs: they already have something that’s proven to work really well. And from a manager’s perspective, it’s perfectly rational to ignore shoddy new products that have no existing market and focus a company’s resources on improving the high-margin products that do have customers.
Those new markets often end up being hugely profitable, however. Teenagers will buy crappy radios if they’re cheap and portable. Cell phone cameras were so convenient, people used them even though they took grainy photos. Toyota’s Coronas looked like rust buckets, but they got people to work for less money than GM’s or Ford’s cars. All these products were extremely useful.
Which brings us to the dilemma in the book’s title. You can’t invest in every dumb-sounding new idea – that’s how you bankrupt a company. But say you continue pursuing those high margins while waiting to see if that dumb idea turns out to be a stroke of genius. By the time you find out that it is, it’s already too late: the new market that’s suddenly interesting enough to enter has already been cornered. Even worse, the shoddy, low-end products created by upstarts are likely to improve to the point that they become attractive to your customers. That’s also a recipe for bankruptcy.
Why Gillette is stuck on the horns of a dilemma.
Final Summary
Stuck in the innovator’s dilemma: that’s a scary place to be. Is there a way out? In Gillette’s case, it’s too early to tell – we’ll have to wait to see if its own home delivery subscription service will be enough to fend off competitors. But Christensen’s book isn’t really about finding a way out. Its biggest lesson is that managers have to avoid the trap in the first place.
As Paul Steinberg, the chief technology officer for Motorola Solutions, put it, Christensen’s message is that companies must learn to incubate new ideas or perish. Steinberg adds that that message “scared the crap” out of him when he first read The Innovator’s Dilemma. He wasn’t alone. Christensen’s greatest legacy may well be that he taught a generation of business leaders that fear is often the best guide on the path to success.
About the author
CLAYTON M. CHRISTENSEN is the Kim B. Clark Professor at Harvard Business School, the author of nine books, a five-time recipient of the McKinsey Award for Harvard Business Review’s best article, and the cofounder of four companies, including the innovation consulting firm Innosight. In 2011 and 2013 he was named the world’s most influential business thinker in a biennial ranking conducted by Thinkers50.
Table of Contents
In Gratitude
Preface
Introduction
Part 1: Why Great Companies Can Fail
1. How Can Great Firms Fail? Insights from the Hard Disk Drive Industry
2. Value Networks and the Impetus to Innovate
3. Disruptive Technological Change in the Mechanical Excavator Industry
4. What Goes Up, Can’t Go Down
Part 2: Managing Disruptive Technological Change
5. Give Responsibility for Disruptive Technologies to Organizations Whose Customers Need Them
6. Match the Size of the Organization to the Size of the Market
7. Discovering New and Emerging Markets
8. How to Appraise Your Organization’s Capabilities and Disabilities
9. Performance Provided, Market Demand, and the Product Life Cycle
10. Managing Disruptive Technological Change: A Case Study
11. The Dilemmas of Innovation: A Summary
The Innovator’s Dilemma Book Group Guide
Index
About the Author
Overview
The Innovator’s Dilemma is the revolutionary business book that has forever changed corporate America. Based on a truly radical idea—that great companies can fail precisely because they do everything right—this Wall Street Journal, Business Week and New York Times Business bestseller is one of the most provocative and important business books ever written. Entrepreneurs, managers, and CEOs ignore its wisdom and its warnings at their great peril.
In this revolutionary bestseller, innovation expert Clayton M. Christensen says outstanding companies can do everything right and still lose their market leadership—or worse, disappear altogether. And not only does he prove what he says, but he tells others how to avoid a similar fate.
Focusing on “disruptive technology,” Christensen shows why most companies miss out on new waves of innovation. Whether in electronics or retailing, a successful company with established products will get pushed aside unless managers know when to abandon traditional business practices. Using the lessons of successes and failures from leading companies, The Innovator’s Dilemma presents a set of rules for capitalizing on the phenomenon of disruptive innovation.
Find out:
- When it is right not to listen to customers.
- When to invest in developing lower-performance products that promise lower margins.
- When to pursue small markets at the expense of seemingly larger and more lucrative ones.
Sharp, cogent, and provocative, The Innovator’s Dilemma is one of the most talked-about books of our time—and one no savvy manager or entrepreneur should be without.
Review/Endorsements/Praise/Award
“I cannot recommend this book strongly enough… ignore it at your peril.” – Martin Fakley, Information Access
“[A] masterpiece… The most profound and useful business book ever written about innovation.” – George Gilder, Gilder Technology Report
“Absolutely brilliant. Clayton Christensen provides an insightful analysis of changing technology and its importance to a company’s future success.” – Michael R. Bloomberg, CEO & Founder, Bloomberg Financial Markets
“This book addresses a tough problem that most successful companies will face eventually. It’s lucid, analytical, and scary.” – Dr. Andrew S. Grove, chairman & CEO, Intel Corporation
“Clayton Christensen’s groundbreaking book… brings fresh insight and understanding to the complex and critically important relationships between technological change and business success… His conclusions provide food for thought for the top management of every company.” – Richard N. Foster, Director, McKinsey & Company
“The Best Business Book of 1997.” – The Financial Times/Booz Allen & Hamilton Global Business Book Awards
“Succinct and clearly written, The Innovator’s Dilemma is an important book that belongs on every manager’s bookshelf. Highly recommended.” – Harry C. Edwards, Amazon.com
“This book ought to chill any executive who feels bulletproof… and inspire entrepreneurs aiming their guns.” – Forbes
“This is a compelling argument, thoroughly researched and superbly written, which challenges conventional theory.” – Jon Hughes, Supply Management
“The Innovator’s Dilemma is becoming a handbook for CEOs remaking their businesses for the Net.” – BusinessWeek
“In a sea of mostly worthless business books, his is an upside surprise… sharply written and rigorous enough to be predictive… The Innovator’s Dilemma could be the wake-up call you need.” – Rich Karlgaard, Forbes
“The Innovator’s Dilemma captures the critical role of leadership in creating markets.” – John Seely Brown, chief scientist, Xerox Corp., and director, Xerox PARC
“The process of Low End Disruption is beautifully described in Clayton Christensen’s series of books: The Innovator’s Dilemma, The Innovator’s Solution and The Innovator’s DNA. If you haven’t read them, you should. What’s amazing about these books is not only how important their conclusions are but how well researched they are.” – TechCrunch
“a holy book for entrepreneurs in Silicon Valley…” – Bloomberg BusinessWeek
Named one of “The 25 Most Influential Business Management Books” by TIME Magazine (TIME.com)
“I came very late to that book [The Innovator’s Dilemma]. I only read it six months ago. And I haven’t stopped thinking of it ever since.” – Malcolm Gladwell, FastCompany.com
“Clayton Christensen’s The Innovator’s Dilemma (1997) introduced one of the most influential modern business ideas—disruptive innovation—and proved that high academic theory need not be a disadvantage in a book aimed at the general reader.” – The Economist
Read an Excerpt/PDF Preview
Chapter One
How Can Great Firms Fail?
Insights from the Hard Disk Drive Industry
When I began my search for an answer to the puzzle of why the best firms can fail, a friend offered some sage advice. “Those who study genetics avoid studying humans,” he noted. “Because new generations come along only every thirty years or so, it takes a long time to understand the cause and effect of any changes. Instead, they study fruit flies, because they are conceived, born, mature, and die all within a single day. If you want to understand why something happens in business, study the disk drive industry. Those companies are the closest things to fruit flies that the business world will ever see.”
Indeed, nowhere in the history of business has there been an industry like disk drives, where changes in technology, market structure, global scope, and vertical integration have been so pervasive, rapid, and unrelenting. While this pace and complexity might be a nightmare for managers, my friend was right about its being fertile ground for research. Few industries offer researchers the same opportunities for developing theories about how different types of change cause certain types of firms to succeed or fail or for testing those theories as the industry repeats its cycles of change.
This chapter summarizes the history of the disk drive industry in all its complexity. Some readers will be interested in it for the sake of history itself. But the value of understanding this history is that out of its complexity emerge a few stunningly simple and consistent factors that have repeatedly determined the success and failure of the industry’s best firms. Simply put, when the best firms succeeded, they did so because they listened responsively to their customers and invested aggressively in the technology, products, and manufacturing capabilities that satisfied their customers’ next-generation needs. But, paradoxically, when the best firms subsequently failed, it was for the same reasons–they listened responsively to their customers and invested aggressively in the technology, products, and manufacturing capabilities that satisfied their customers’ next-generation needs. This is one of the innovator’s dilemmas: Blindly following the maxim that good managers should keep close to their customers can sometimes be a fatal mistake.
The history of the disk drive industry provides a framework for understanding when “keeping close to your customers” is good advice–and when it is not. The robustness of this framework could only be explored by researching the industry’s history in careful detail. Some of that detail is recounted here, and elsewhere in this book, in the hope that readers who are immersed in the detail of their own industries will be better able to recognize how similar patterns have affected their own fortunes and those of their competitors.
HOW DISK DRIVES WORK
Disk drives write and read information that computers use. They comprise read-write heads mounted at the end of an arm that swings over the surface of a rotating disk in much the same way that a phonograph needle and arm reach over a record; aluminum or glass disks coated with magnetic material; at least two electric motors, a spin motor that drives the rotation of the disks and an actuator motor that moves the head to the desired position over the disk; and a variety of electronic circuits that control the drive’s operation and its interface with the computer. See Figure 1.1 for an illustration of a typical disk drive.
The read-write head is a tiny electromagnet whose polarity changes whenever the direction of the electrical current running through it changes. Because opposite magnetic poles attract, when the polarity of the head becomes positive, the polarity of the area on the disk beneath the head switches to negative, and vice versa. By rapidly changing the direction of current flowing through the head’s electromagnet as the disk spins beneath the head, a sequence of positively and negatively oriented magnetic domains are created in concentric tracks on the disk’s surface. Disk drives can use the positive and negative domains on the disk as a binary numeric system–1 and 0–to “write” information onto disks. Drives read information from disks in essentially the opposite process: Changes in the magnetic flux fields on the disk surface induce changes in the micro current flowing through the head.
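To make the write-and-read mechanism above concrete, here is a deliberately simplified sketch in Python. It is not from the book and is far removed from real drive electronics: it simply models a track as a list of magnetic orientations and recovers the bits from it. The function names and data layout are illustrative choices, not anything defined in the text.

```python
# A toy model of the mechanism described above: bits are "written" as
# positive or negative magnetic domains along a track, then "read" back
# by mapping the orientations to 1s and 0s. Purely illustrative.

from typing import List

POSITIVE, NEGATIVE = +1, -1  # the two possible domain orientations

def write_track(bits: List[int]) -> List[int]:
    """Store each bit as a magnetic orientation on the track."""
    return [POSITIVE if bit == 1 else NEGATIVE for bit in bits]

def read_track(domains: List[int]) -> List[int]:
    """Recover the bits from the stored orientations."""
    return [1 if domain == POSITIVE else 0 for domain in domains]

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0]
    track = write_track(data)
    assert read_track(track) == data  # the round trip recovers the original bits
```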
EMERGENCE OF THE EARLIEST DISK DRIVES
A team of researchers at IBM’s San Jose research laboratories developed the first disk drive between 1952 and 1956. Named RAMAC (for Random Access Method for Accounting and Control), this drive was the size of a large refrigerator, incorporated fifty twenty-four-inch disks, and could store 5 megabytes (MB) of information (see Figure 1.2). Most of the fundamental architectural concepts and component technologies that defined today’s dominant disk drive design were also developed at IBM. These include its removable packs of rigid disks (introduced in 1961); the floppy disk drive (1971); and the Winchester architecture (1973). All had a powerful, defining influence on the way engineers in the rest of the industry defined what disk drives were and what they could do.
As IBM produced drives to meet its own needs, an independent disk drive industry emerged serving two distinct markets. A few firms developed the plug-compatible market (PCM) in the 1960s, selling souped-up copies of IBM drives directly to IBM customers at discount prices. Although most of IBM’s competitors in computers (for example, Control Data, Burroughs, and Univac) were integrated vertically into the manufacture of their own disk drives, the emergence in the 1970s of smaller, nonintegrated computer makers such as Nixdorf, Wang, and Prime spawned an original equipment market (OEM) for disk drives as well. By 1976 about $1 billion worth of disk drives were produced, of which captive production accounted for 50 percent and PCM and OEM for about 25 percent each.
The next dozen years unfolded a remarkable story of rapid growth, market turbulence, and technology-driven performance improvements. The value of drives produced rose to about $18 billion by 1995. By the mid-1980s the PCM market had become insignificant, while OEM output grew to represent about three-fourths of world production. Of the seventeen firms populating the industry in 1976–all of which were relatively large, diversified corporations such as Diablo, Ampex, Memorex, EMM, and Control Data–all except IBM’s disk drive operation had failed or had been acquired by 1995. During this period an additional 129 firms entered the industry, and 109 of those also failed. Aside from IBM, Fujitsu, Hitachi, and NEC, all of the producers remaining by 1996 had entered the industry as start-ups after 1976.
Some have attributed the high mortality rate among the integrated firms that created the industry to its nearly unfathomable pace of technological change. Indeed, the pace of change has been breathtaking. The number of megabits (Mb) of information that the industry’s engineers have been able to pack into a square inch of disk surface has increased by 35 percent per year, on average, from 50 Kb in 1967 to 1.7 Mb in 1973, 12 Mb in 1981, and 1100 Mb by 1995. The physical size of the drives was reduced at a similar pace: The smallest available 20 MB drive shrank from 800 cubic inches (in³) in 1978 to 1.4 in³ by 1993–a 35 percent annual rate of reduction.
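As a quick sanity check (my own arithmetic, not the book’s), the 35 percent annual shrinkage figure can be reproduced from the two endpoint sizes quoted above with a standard compound-rate calculation; the helper function below is an illustrative sketch.

```python
# Back-of-the-envelope check of the compound annual rate implied by two
# endpoint figures. The data points are taken from the text above.

def compound_annual_rate(start: float, end: float, years: float) -> float:
    """Average annual rate of change implied by two endpoints."""
    return (end / start) ** (1.0 / years) - 1.0

# Smallest available 20 MB drive: 800 cubic inches in 1978 -> 1.4 in 1993.
rate = compound_annual_rate(800.0, 1.4, 1993 - 1978)
print(f"annual size reduction: {-rate:.0%}")  # prints ~35%, matching the text
```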
Figure 1.3 shows that the slope of the industry’s experience curve (which correlates the cumulative number of terabytes (one thousand gigabytes) of disk storage capacity shipped in the industry’s history to the constant-dollar price per megabyte of memory) was 53 percent–meaning that with each doubling of cumulative terabytes shipped, cost per megabyte fell to 53 percent of its former level. This is a much steeper rate of price decline than the 70 percent slope observed in the markets for most other microelectronics products. The price per megabyte has declined at about 5 percent per quarter for more than twenty years.
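The experience-curve relationship described above can be written as a simple power law: each doubling of cumulative volume cuts the price to a fixed fraction of its previous level. The sketch below is an illustration with made-up volumes, not data from the book, and the function name and parameters are my own.

```python
import math

# Experience curve: each doubling of cumulative volume shipped cuts the
# price to a fixed fraction (here 53 percent) of its previous level.

def experience_curve_price(initial_price: float, initial_volume: float,
                           cumulative_volume: float, slope: float = 0.53) -> float:
    """Price after shipping cumulative_volume, given the learning slope."""
    doublings = math.log2(cumulative_volume / initial_volume)
    return initial_price * slope ** doublings

# Hypothetical example: three doublings of cumulative terabytes shipped
# bring the price per megabyte down to 0.53 ** 3, roughly 15 percent,
# of its starting level.
print(experience_curve_price(100.0, 1.0, 8.0))  # about 14.9
```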
THE IMPACT OF TECHNOLOGICAL CHANGE
My investigation into why leading firms found it so difficult to stay atop the disk drive industry led me to develop the “technology mudslide hypothesis”: Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you’ve got to stay on top of it, and if you ever once stop to catch your breath, you get buried.
To test this hypothesis, I assembled and analyzed a database consisting of the technical and performance specifications of every model of disk drive introduced by every company in the world disk drive industry for each of the years between 1975 and 1994. This database enabled me to identify the firms that led in introducing each new technology; to trace how new technologies were diffused through the industry over time; to see which firms led and which lagged; and to measure the impact each technological innovation had on capacity, speed, and other parameters of disk drive performance. By carefully reconstructing the history of each technological change in the industry, the changes that catapulted entrants to success or that precipitated the failure of established leaders could be identified.
This study led me to a very different view of technology change than the work of prior scholars on this question had led me to expect. Essentially, it revealed that neither the pace nor the difficulty of technological change lay at the root of the leading firms’ failures. The technology mudslide hypothesis was wrong.
The manufacturers of most products have established a trajectory of performance improvement over time. Intel, for example, pushed the speed of its microprocessors ahead by about 20 percent per year, from its 8 megahertz (MHz) 8088 processor in 1979 to its 133 MHz Pentium chip in 1994. Eli Lilly and Company improved the purity of its insulin from 50,000 impure parts per million (ppm) in 1925 to 10 ppm in 1980, a 14 percent annual rate of improvement. When a measurable trajectory of improvement has been established, determining whether a new technology is likely to improve a product’s performance relative to earlier products is an unambiguous question.
But in other cases, the impact of technological change is quite different. For instance, is a notebook computer better than a mainframe? This is an ambiguous question because the notebook computer established a completely new performance trajectory, with a definition of performance that differs substantially from the way mainframe performance is measured. Notebooks, as a consequence, are generally sold for very different uses.
This study of technological change over the history of the disk drive industry revealed two types of technology change, each with very different effects on the industry’s leaders. Technologies of the first sort sustained the industry’s rate of improvement in product performance (total capacity and recording density were the two most common measures) and ranged in difficulty from incremental to radical. The industry’s dominant firms always led in developing and adopting these technologies. By contrast, innovations of the second sort disrupted or redefined performance trajectories–and consistently resulted in the failure of the industry’s leading firms.
The remainder of this chapter illustrates the distinction between sustaining and disruptive technologies by describing prominent examples of each and summarizing the role these played in the industry’s development. This discussion focuses on differences in how established firms came to lead or lag in developing and adopting new technologies, compared with entrant firms. To arrive at these examples, each new technology in the industry was examined. In analyzing which firms led and lagged at each of these points of change, I defined established firms to be those that had been established in the industry prior to the advent of the technology in question, practicing the prior technology. I defined entrant firms as those that were new to the industry at that point of technology change. Hence, a given firm would be considered an entrant at one specific point in the industry’s history, for example, at the emergence of the 8-inch drive. Yet the same firm would be considered an established firm when technologies that emerged subsequent to the firm’s entry were studied.
SUSTAINING TECHNOLOGICAL CHANGES
In the history of the disk drive industry, most technology changes have sustained or reinforced established trajectories of product performance improvement. Figure 1.4, which compares the average recording density of drives that employed successive generations of head and disk technologies, maps an example of this. The first curve plots the density of drives that used conventional particulate oxide disk technology and ferrite head technology; the second charts the average density of drives that used new-technology thin-film heads and disks; the third marks the improvements in density achievable with the latest head technology, magneto-resistive heads.
The way such new technologies as these emerge to surpass the performance of the old resembles a series of intersecting technology S-curves. Movement along a given S-curve is generally the result of incremental improvements within an existing technological approach, whereas jumping onto the next technology curve implies adopting a radically new technology. In the cases measured in Figure 1.4, incremental advances, such as grinding the ferrite heads to finer, more precise dimensions and using smaller and more finely dispersed oxide particles on the disk’s surface, led to the improvements in density from 1 to 20 megabits per square inch (Mbpsi) between 1976 and 1989. As S-curve theory would predict, the improvement in recording density obtainable with ferrite/oxide technology began to level off toward the end of the period, suggesting a maturing technology. The thin-film head and disk technologies’ effect on the industry sustained performance improvement at its historical rate. Thin-film heads were barely established in the early 1990s, when even more advanced magneto-resistive head technology emerged. The impact of magneto-resistive technology sustained, or even accelerated, the rate of performance improvement.
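The intersecting S-curves described above are easy to visualize numerically. The sketch below uses logistic curves whose ceilings, midpoints, and steepness are invented for illustration (they are assumptions, not figures from the book) to show a successor technology starting below an incumbent and eventually overtaking it.

```python
import math

# Two technology S-curves: the incumbent matures early with a low ceiling,
# the successor matures later but has a much higher performance ceiling.

def s_curve(year: float, ceiling: float, midpoint: float, steepness: float = 0.5) -> float:
    """Logistic performance curve for one technology generation."""
    return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(1976, 1996, 2):
    incumbent = s_curve(year, ceiling=20.0, midpoint=1982)   # e.g. ferrite/oxide
    successor = s_curve(year, ceiling=200.0, midpoint=1992)  # e.g. thin film
    marker = "<- successor ahead" if successor > incumbent else ""
    print(year, round(incumbent, 1), round(successor, 1), marker)
```

With these toy parameters the crossover falls in the late 1980s; the point is only the shape of the curves, not the dates.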
Figure 1.5 describes a sustaining technological change of a very different character: an innovation in product architecture, in which the 14-inch Winchester drive is substituted for removable disk packs, which had been the dominant design between 1962 and 1978. Just as in the thin-film for ferrite/oxide substitution, the impact of Winchester technology sustained the historically established rate of performance improvement. Similar graphs could be constructed for most other technological innovations in the industry, such as embedded servo systems, RLL and PRML recording codes, higher RPM motors, and embedded interfaces. Some of these were straightforward technology improvements; others were radical departures. But all had a similar impact on the industry: They helped manufacturers to sustain the rate of historical performance improvement that their customers had come to expect.
In literally every case of sustaining technology change in the disk drive industry, established firms led in development and commercialization. The emergence of new disk and head technologies illustrates this.
In the 1970s, some manufacturers sensed that they were reaching the limit on the number of bits of information they could pack onto oxide disks. In response, disk drive manufacturers began studying ways of applying super-thin films of magnetic metal on aluminum to sustain the historical rate of improvements in recording density. The use of thin-film coatings was then highly developed in the integrated circuit industry, but its application to magnetic disks still presented substantial challenges. Experts estimate that the pioneers of thin-film disk technology–IBM, Control Data, Digital Equipment, Storage Technology, and Ampex–each took more than eight years and spent more than $50 million in that effort. Between 1984 and 1986, about two-thirds of the producers active in 1984 introduced drives with thin-film disks. The overwhelming majority of these were established industry incumbents. Only a few entrant firms attempted to use thin-film disks in their initial products, and most of those folded shortly after entry.
The same pattern was apparent in the emergence of thin-film heads. Manufacturers of ferrite heads saw as early as 1965 the approaching limit to improvements in this technology; by 1981 many believed that the limits of precision would soon be reached. Researchers turned to thin-film technology, produced by sputtering thin films of metal on the recording head and then using photolithography to etch much finer electromagnets than could be attained with ferrite technology. Again, this proved extraordinarily difficult. Burroughs in 1976, IBM in 1979, and other established firms first successfully incorporated thin-film heads in disk drives. In the period between 1982 and 1986, during which some sixty firms entered the rigid disk drive industry, only four (all commercial failures) attempted to do so using thin-film heads in their initial products as a source of performance advantage. All other entrant firms–even aggressively performance-oriented firms such as Maxtor and Conner Peripherals–found it preferable to learn their way using conventional ferrite heads first, before tackling thin-film technology.
As was the case with thin-film disks, the introduction of thin-film heads entailed the sort of sustained investment that only established firms could handle. IBM and its rivals each spent more than $100 million developing thin-film heads. The pattern was repeated in the next-generation magneto-resistive head technology: The industry’s largest firms–IBM, Seagate, and Quantum–led the race.
The established firms were the leading innovators not just in developing risky, complex, and expensive component technologies such as thin-film heads and disks, but in literally every other one of the sustaining innovations in the industry’s history. Even in relatively simple innovations, such as RLL recording codes (which took the industry from double- to triple-density disks), established firms were the successful pioneers, and entrant firms were the technology followers. This was also true for those architectural innovations–for example, 14-inch and 2.5-inch Winchester drives–whose impact was to sustain established improvement trajectories. Established firms beat out the entrants.
Figure 1.6 summarizes this pattern of technology leadership among established and entrant firms offering products based on new sustaining technologies during the years when those technologies were emerging. The pattern is stunningly consistent. Whether the technology was radical or incremental, expensive or cheap, software or hardware, component or architecture, competence-enhancing or competence-destroying, the pattern was the same. When faced with sustaining technology change that gave existing customers something more and better in what they wanted, the leading practitioners of the prior technology led the industry in the development and adoption of the new. Clearly, the leaders in this industry did not fail because they became passive, arrogant, or risk-averse or because they couldn’t keep up with the stunning rate of technological change. My technology mudslide hypothesis wasn’t correct.
FAILURE IN THE FACE OF DISRUPTIVE TECHNOLOGICAL CHANGES
Most technological change in the disk drive industry has consisted of sustaining innovations of the sort described above. In contrast, there have been only a few of the other sort of technological change, called disruptive technologies. These were the changes that toppled the industry’s leaders.
The most important disruptive technologies were the architectural innovations that shrank the size of the drives–from 14-inch diameter disks to diameters of 8, 5.25, and 3.5 inches and then from 2.5 to 1.8 inches. Table 1.1 illustrates the ways these innovations were disruptive. Based on 1981 data, it compares the attributes of a typical 5.25-inch drive, a new architecture that had been in the market for less than a year, with those of a typical 8-inch drive, which at that time was the standard drive used by minicomputer manufacturers. Along the dimensions of performance important to established minicomputer manufacturers–capacity, cost per megabyte, and access time–the 8-inch product was vastly superior. The 5.25-inch architecture did not address the perceived needs of minicomputer manufacturers at that time. On the other hand, the 5.25-inch drive had features that appealed to the desktop personal computer market segment just emerging in the period between 1980 and 1982. It was small and lightweight, and, priced at around $2,000, it could be incorporated into desktop machines economically.
Generally disruptive innovations were technologically straightforward, consisting of off-the-shelf components put together in a product architecture that was often simpler than prior approaches. They offered less of what customers in established markets wanted and so could rarely be initially employed there. They offered a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream.
The trajectory map in Figure 1.7 shows how this series of simple but disruptive technologies proved to be the undoing of some very aggressive, astutely managed disk drive companies. Until the mid-1970s, 14-inch drives with removable packs of disks accounted for nearly all disk drive sales. The 14-inch Winchester architecture then emerged to sustain the trajectory of recording density improvement. Nearly all of these drives (removable disks and Winchesters) were sold to mainframe computer manufacturers, and the same companies that led the market in disk pack drives led the industry’s transition to the Winchester technology.
Table 1.1 A Disruptive Technology Change: The 5.25-inch Winchester Disk Drive (1981)

Attribute | 8-Inch Drives (Minicomputer Market) | 5.25-Inch Drives (Desktop Computer Market)
Capacity (megabytes) | 60 | 10
Physical volume (cubic inches) | 566 | 150
Weight (pounds) | 21 | 6
Access time (milliseconds) | 30 | 160
Cost per megabyte | $50 | $200
Unit cost | $3,000 | $2,000

Source: Data are from various issues of Disk/Trend Report.
The trajectory map shows that the hard disk capacity provided in the median priced, typically configured mainframe computer system in 1974 was about 130 MB per computer. This increased at a 15 percent annual rate over the next fifteen years–a trajectory representing the disk capacity demanded by the typical users of new mainframe computers. At the same time, the capacity of the average 14-inch drive introduced for sale each year increased at a faster, 22 percent rate, reaching beyond the mainframe market to the large scientific and supercomputer markets.
Between 1978 and 1980, several entrant firms–Shugart Associates, Micropolis, Priam, and Quantum–developed smaller 8-inch drives with 10, 20, 30, and 40 MB capacity. These drives were of no interest to mainframe computer manufacturers, which at that time were demanding drives with 300 to 400 MB capacity. These 8-inch entrants therefore sold their disruptive drives into a new application–minicomputers. The customers–Wang, DEC, Data General, Prime, and Hewlett-Packard–did not manufacture mainframes, and their customers often used software substantially different from that used in mainframes. These firms hitherto had been unable to offer disk drives in their small, desk-side minicomputers because 14-inch models were too big and expensive. Although initially the cost per megabyte of capacity of 8-inch drives was higher than that of 14-inch drives, these new customers were willing to pay a premium for other attributes that were important to them–especially smaller size. Smallness had little value to mainframe users.
Once the use of 8-inch drives became established in minicomputers, the hard disk capacity shipped with the median-priced minicomputer grew about 25 percent per year: a trajectory determined by the ways in which minicomputer owners learned to use their machines. At the same time, however, the 8-inch drive makers found that, by aggressively adopting sustaining innovations, they could increase the capacity of their products at a rate of more than 40 percent per year–nearly double the rate of increase demanded by their original “home” minicomputer market. In consequence, by the mid-1980s, 8-inch drive makers were able to provide the capacities required for lower-end mainframe computers. Unit volumes had grown significantly so that the cost per megabyte of 8-inch drives had declined below that of 14-inch drives, and other advantages became apparent: For example, the same percentage mechanical vibration in an 8-inch drive, as opposed to a 14-inch drive, caused much less variance in the absolute position of the head over the disk. Within a three-to-four-year period, therefore, 8-inch drives began to invade the market above them, substituting for 14-inch drives in the lower-end mainframe computer market.
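The dynamic in this paragraph, a disruptive product whose capacity improves faster than the capacity its target market above it demands, reduces to a small calculation. The numbers and function below are hypothetical placeholders chosen only to show the mechanics of the catch-up, not figures taken from the book.

```python
import math

# If a disruptive drive's capacity grows faster than the capacity demanded
# in an established market above it, the two trajectories must eventually
# cross; this computes how long that takes.

def years_until_catch_up(drive_capacity: float, demanded_capacity: float,
                         drive_growth: float, demand_growth: float) -> float:
    """Solve drive * (1 + g_drive)^t = demand * (1 + g_demand)^t for t."""
    return math.log(demanded_capacity / drive_capacity) / math.log(
        (1.0 + drive_growth) / (1.0 + demand_growth))

# Hypothetical example: a drive offering one fifth of what the established
# market demands, improving 40 percent a year against demand growing
# 15 percent a year, catches up in roughly eight years.
print(years_until_catch_up(1.0, 5.0, 0.40, 0.15))  # about 8.2
```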
As the 8-inch products penetrated the mainframe market, the established manufacturers of 14-inch drives began to fail. Two-thirds of them never introduced an 8-inch model. The one-third that introduced 8-inch models did so about two years behind the 8-inch entrant manufacturers. Ultimately, every 14-inch drive maker was driven from the industry.
The 14-inch drive makers were not toppled by the 8-inch entrants because of technology. The 8-inch products generally incorporated standard off-the-shelf components, and when those 14-inch drive makers that did introduce 8-inch models got around to doing so, their products were very performance-competitive in capacity, areal density, access time, and price per megabyte. The 8-inch models introduced by the established firms in 1981 were nearly identical in performance to the average of those introduced that year by the entrant firms. In addition, the rates of improvement in key attributes (measured between 1979 and 1983) were stunningly similar between established and entrant firms.
Held Captive by Their Customers
Why were the leading drive makers unable to launch 8-inch drives until it was too late? Clearly, they were technologically capable of producing these drives. Their failure resulted from delay in making the strategic commitment to enter the emerging market in which the 8-inch drives initially could be sold. Interviews with marketing and engineering executives close to these companies suggest that the established 14-inch drive manufacturers were held captive by customers. Mainframe computer manufacturers did not need an 8-inch drive. In fact, they explicitly did not want it: they wanted drives with increased capacity at a lower cost per megabyte. The 14-inch drive manufacturers were listening and responding to their established customers. And their customers–in a way that was not apparent to either the disk drive manufacturers or their computer-making customers–were pulling them along a trajectory of 22 percent capacity growth in a 14-inch platform that would ultimately prove fatal.
Figure 1.7 maps the disparate trajectories of performance improvement demanded in the computer product segments that emerged later, compared to the capacity that changes in component technology and refinements in system design made available within each successive architecture. The solid lines emanating from points A, B, C, D, and E measure the disk drive capacity provided with the median-priced computer in each category, while the dotted lines from the same points measure the average capacity of all disk drives introduced for sale in each architecture, for each year. These transitions are briefly described below.
The Advent of the 5.25-inch Drive
In 1980, Seagate Technology introduced 5.25-inch disk drives. Their capacities of 5 and 10 MB were of no interest to minicomputer manufacturers, who were demanding drives of 40 and 60 MB from their suppliers. Seagate and other firms that entered with 5.25-inch drives in the period 1980 to 1983 (for example, Miniscribe, Computer Memories, and International Memories) had to pioneer new applications for their products and turned primarily to desktop personal computer makers. By 1990, the use of hard drives in desktop computers was an obvious application for magnetic recording. It was not at all clear in 1980, however, when the market was just emerging, that many people could ever afford or use a hard drive on the desktop. The early 5.25-inch drive makers found this application (one might even say that they enabled it) by trial and error, selling drives to whoever would buy them.
Once the use of hard drives was established in desktop PCs, the disk capacity shipped with the median-priced machine (that is, the capacity demanded by the general PC user) increased about 25 percent per year. Again, the technology improved at nearly twice the rate demanded in the new market: The capacity of new 5.25-inch drives increased about 50 percent per year between 1980 and 1990. As in the 8-inch for 14-inch substitution, the first firms to produce 5.25-inch drives were entrants; on average, established firms lagged behind entrants by two years. By 1985, only half of the firms producing 8-inch drives had introduced 5.25-inch models. The other half never did.
Growth in the use of 5.25-inch drives occurred in two waves. The first followed creation of a new application for rigid disk drives: desktop computing, in which product attributes such as physical size, relatively unimportant in established applications, were highly valued. The second wave followed substitution of 5.25-inch disks for larger drives in established minicomputer and mainframe computer markets, as the rapidly increasing capacity of 5.25-inch drives intersected the more slowly growing trajectories of capacity demanded in these markets. Of the four leading 8-inch drive makers–Shugart Associates, Micropolis, Priam, and Quantum–only Micropolis survived to become a significant manufacturer of 5.25-inch drives, and that was accomplished only with Herculean managerial effort, as described in chapter 5.
The Pattern Is Repeated: The Emergence of the 3.5-inch Drive
The 3.5-inch drive was first developed in 1984 by Rodime, a Scottish entrant. Sales of this architecture were not significant, however, until Conner Peripherals, a spinoff of 5.25-inch drive makers Seagate and Miniscribe, started shipping product in 1987. Conner had developed a small, lightweight drive architecture that was much more rugged than its 5.25-inch ancestors. It handled electronically functions that had previously been managed with mechanical parts, and it used microcode to replace functions that had previously been addressed electronically. Nearly all of Conner’s first year revenues of $113 million came from Compaq Computer, which had aided Conner’s start-up with a $30 million investment. The Conner drives were used primarily in a new application–portable and laptop machines, in addition to “small footprint” desktop models–where customers were willing to accept lower capacities and higher costs per megabyte to get lighter weight, greater ruggedness, and lower power consumption.
Seagate engineers were not oblivious to the coming of the 3.5-inch architecture. Indeed, in early 1985, less than one year after Rodime introduced the first 3.5-inch drive and two years before Conner Peripherals started shipping its product, Seagate personnel showed working 3.5-inch prototype drives to customers for evaluation. The initiative for the new drives came from Seagate’s engineering organization. Opposition to the program came primarily from the marketing organization and Seagate’s executive team; they argued that the market wanted higher capacity drives at a lower cost per megabyte and that 3.5-inch drives could never be built at a lower cost per megabyte than 5.25-inch drives.
Seagate’s marketers tested the 3.5-inch prototypes with customers in the desktop computing market it already served–manufacturers like IBM, and value-added resellers of full-sized desktop computer systems. Not surprisingly, they indicated little interest in the smaller drive. They were looking for capacities of 40 and 60 megabytes for their next-generation machines, while the 3.5-inch architecture could provide only 20 MB–and at higher costs.
In response to lukewarm reviews from customers, Seagate’s program manager lowered his 3.5-inch sales estimates, and the firm’s executives canceled the program. Their reasoning? The markets for 5.25-inch products were larger, and the sales generated by spending the engineering effort on new 5.25-inch products would create greater revenues for the company than would efforts targeted at new 3.5-inch products.
In retrospect, it appears that Seagate executives read the market–at least their own market–very accurately. With established applications and product architectures of their own, such as the IBM XT and AT, these customers saw no value in the improved ruggedness or the reduced size, weight, and power consumption of 3.5-inch products.
Seagate finally began shipping 3.5-inch drives in early 1988–the same year in which the performance trajectory of 3.5-inch drives (shown in Figure 1.7) intersected the trajectory of capacity demanded in desktop computers. By that time, the industry had shipped, cumulatively, nearly $750 million in 3.5-inch products. Interestingly, according to industry observers, as of 1991 almost none of Seagate’s 3.5-inch products had been sold to manufacturers of portable/laptop/notebook computers. In other words, Seagate’s primary customers were still desktop computer manufacturers, and many of its 3.5-inch drives were shipped with frames for mounting them in computers designed for 5.25-inch drives.
The fear of cannibalizing sales of existing products is often cited as a reason why established firms delay the introduction of new technologies. As the Seagate-Conner experience illustrates, however, if new technologies enable new market applications to emerge, the introduction of new technology may not be inherently cannibalistic. But when established firms wait until a new technology has become commercially mature in its new applications and launch their own version of the technology only in response to an attack on their home markets, the fear of cannibalization can become a self-fulfilling prophecy.
Although we have been looking at Seagate’s response to the development of the 3.5-inch drive architecture, its behavior was not atypical; by 1988, only 35 percent of the drive manufacturers that had established themselves making 5.25-inch products for the desktop PC market had introduced 3.5-inch drives. Similar to earlier product architecture transitions, the barrier to development of a competitive 3.5-inch product does not appear to have been engineering-based. As in the 14- to 8-inch transition, the new-architecture drives introduced by the incumbent, established firms during the transitions from 8 to 5.25 inches and from 5.25 to 3.5 inches were fully performance-competitive with those of entrant drives. Rather, the 5.25-inch drive manufacturers seem to have been misled by their customers, notably IBM and its direct competitors and resellers, who themselves seemed as oblivious as Seagate to the potential benefits and possibilities of portable computing and the new disk drive architecture that might facilitate it.
Prairietek, Conner, and the 2.5-inch Drive
In 1989 an industry entrant in Longmont, Colorado, Prairietek, upstaged the industry by announcing a 2.5-inch drive, capturing nearly all $30 million of this nascent market. But Conner Peripherals announced its own 2.5-inch product in early 1990 and by the end of that year had claimed 95 percent of the 2.5-inch drive market. Prairietek declared bankruptcy in late 1991, by which time each of the other 3.5-inch drive makers–Quantum, Seagate, Western Digital, and Maxtor–had introduced 2.5-inch drives of their own.
What had changed? Had the incumbent leading firms finally learned the lessons of history? Not really. Although Figure 1.7 shows the 2.5-inch drive had significantly less capacity than the 3.5-inch drives, the portable computing markets into which the smaller drives were sold valued other attributes: weight, ruggedness, low power consumption, small physical size, and so on. Along these dimensions, the 2.5-inch drive offered improved performance over that of the 3.5-inch product: It was a sustaining technology. In fact, the computer makers who bought Conner’s 3.5-inch drive–laptop computer manufacturers such as Toshiba, Zenith, and Sharp–were the leading makers of notebook computers, and these firms needed the smaller 2.5-inch drive architecture. Hence, Conner and its competitors in the 3.5-inch market followed their customers seamlessly across the transition to 2.5-inch drives.
In 1992, however, the 1.8-inch drive emerged, with a distinctly disruptive character. Although its story will be recounted in detail later, it suffices to state here that by 1995, it was entrant firms that controlled 98 percent of the $130 million 1.8-inch drive market. Moreover, the largest initial market for 1.8-inch drives wasn’t in computing at all. It was in portable heart monitoring devices!
Figure 1.8 summarizes this pattern of entrant firms’ leadership in disruptive technology. It shows, for example, that two years after the 8-inch drive was introduced, two-thirds of the firms producing it (four of six) were entrants. And, two years after the first 5.25-inch drive was introduced, 80 percent of the firms producing these disruptive drives were entrants.
SUMMARY
There are several patterns in the history of innovation in the disk drive industry. The first is that the disruptive innovations were technologically straightforward. They generally packaged known technologies in a unique architecture and enabled the use of these products in applications where magnetic data storage and retrieval previously had not been technologically or economically feasible.
The second pattern is that the purpose of advanced technology development in the industry was always to sustain established trajectories of performance improvement: to reach the higher-performance, higher-margin domain of the upper right of the trajectory map. Many of these technologies were radically new and difficult, but they were not disruptive. The customers of the leading disk drive suppliers led them toward these achievements. Sustaining technologies, as a result, did not precipitate failure.
The third pattern shows that, despite the established firms’ technological prowess in leading sustaining innovations, from the simplest to the most radical, the firms that led the industry in every instance of developing and adopting disruptive technologies were entrants to the industry, not its incumbent leaders.
This book began by posing a puzzle: Why was it that firms that could be esteemed as aggressive, innovative, customer-sensitive organizations could ignore or attend belatedly to technological innovations with enormous strategic importance? In the context of the preceding analysis of the disk drive industry, this question can be sharpened considerably. The established firms were, in fact, aggressive, innovative, and customer-sensitive in their approaches to sustaining innovations of every sort. But the problem established firms seem unable to confront successfully is that of downward vision and mobility, in terms of the trajectory map. Finding new applications and markets for these new products seems to be a capability that each of these firms exhibited once, upon entry, and then apparently lost. It was as if the leading firms were held captive by their customers, enabling attacking entrant firms to topple the incumbent industry leaders each time a disruptive technology emerged. Why this happened, and is still happening, is the subject of the next chapter.