The Actual History
In April 1965, Gordon Moore, then the director of research and development at Fairchild Semiconductor, made an observation that would become one of the most influential predictions in technological history. In an article for Electronics magazine titled "Cramming More Components onto Integrated Circuits," Moore noted that the number of components (transistors, resistors, diodes) on an integrated circuit had doubled approximately every year since the integrated circuit's invention in 1958, and he predicted this trend would continue for at least ten years.
This observation was not based on theoretical physics but rather on empirical data from the early semiconductor industry. Moore had plotted the number of components on chips against time and noticed a striking pattern of exponential growth. In 1975, Moore revised his prediction to a doubling approximately every two years, which became the version most commonly cited today.
In 1968, Moore and Robert Noyce left Fairchild Semiconductor to found Intel Corporation, which would become one of the world's leading semiconductor manufacturers. At Intel, Moore's observation became enshrined as "Moore's Law," a name coined by Caltech professor Carver Mead in the early 1970s.
What's remarkable is that this observation, later elevated to a "law," became a self-fulfilling prophecy. The semiconductor industry adopted Moore's Law as a target for research and development roadmaps. Companies like Intel, AMD, and others structured their business models around this predictable improvement cycle, releasing new chip generations that doubled transistor counts roughly every 24 months.
This consistent progression enabled several decades of rapid advancement in computing technology. In 1971, Intel's 4004 microprocessor contained 2,300 transistors. By 2023, Apple's M2 Ultra chip contained over 134 billion transistors, an increase of nearly 60 million times in just over 50 years.
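The arithmetic behind these transistor counts can be checked directly. A minimal Python sketch, using only the figures cited above, confirms that the roughly 58-million-fold increase over 52 years works out to a doubling about every two years:

```python
import math

# Transistor counts cited in the text (publicly documented chip specifications).
intel_4004 = 2_300             # Intel 4004, 1971
m2_ultra = 134_000_000_000     # Apple M2 Ultra, 2023

factor = m2_ultra / intel_4004       # overall growth factor
doublings = math.log2(factor)        # number of doublings that factor implies
years = 2023 - 1971

print(f"growth factor: {factor:,.0f}x")               # ≈ 58 million
print(f"doublings: {doublings:.1f}")                  # ≈ 25.8
print(f"years per doubling: {years / doublings:.2f}")  # ≈ 2.0
```

The result matches Moore's 1975 revision almost exactly: about 26 doublings in 52 years, or one every two years.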
The economic and social impacts of Moore's Law have been profound. Computing power that once required massive, expensive mainframes became accessible in personal computers, then in smartphones and embedded devices. The predictable progression enabled long-term planning for software development, creating cycles of innovation where new applications utilized the regularly increasing hardware capabilities.
The exponential improvement in computing power enabled revolutions in numerous fields: artificial intelligence, the internet, mobile computing, digital entertainment, scientific research, and virtually every industry that uses computation. This reliable growth in computing power has been a fundamental driver of the global digital economy, enabling companies like Microsoft, Apple, Google, and Amazon to build increasingly sophisticated technologies and services.
In recent years, the pace of Moore's Law has slowed somewhat as manufacturers have encountered physical limitations and increasing costs. The industry has adapted by exploring new architectures, materials, and computing paradigms. However, the decades of predictable exponential improvement established computing as a uniquely progressive technology and shaped expectations that technological capabilities would steadily increase while costs would decrease.
Gordon Moore himself, who passed away in 2023 at the age of 94, lived to see his casual observation become one of the most durable and influential concepts in technological history, guiding an industry that transformed human civilization in the late 20th and early 21st centuries.
The Point of Divergence
What if Gordon Moore never formulated his famous law? In this alternate timeline, we explore a scenario where the predictable doubling of transistor counts on integrated circuits was never observed, articulated, or established as an industry goal.
There are several plausible ways this divergence might have occurred:
First, Moore might simply have never written his seminal 1965 article. As director of R&D at Fairchild Semiconductor, Moore was asked to predict what would happen with silicon components for the 35th anniversary issue of Electronics magazine. In our alternate timeline, perhaps he declined this invitation due to other commitments, or focused his article on different aspects of semiconductor development without making the specific observation about component density doubling.
Alternatively, Moore might have made the observation privately but never published it widely. Many scientific insights occur to researchers but don't get formalized or widely disseminated. Without publication in a prominent industry journal, Moore's observation might have remained an internal note at Fairchild rather than becoming an industry-defining concept.
A third possibility is that Moore made the observation, but it failed to gain traction. Perhaps in this timeline, Carver Mead never coined the term "Moore's Law," or industry leaders didn't adopt it as a planning tool and competitive benchmark. Without a name and without industry buy-in, the concept might have remained just another forgotten prediction rather than becoming a self-fulfilling prophecy.
Most intriguingly, Moore might have observed a different pattern entirely. The early data on integrated circuits was limited, and different interpretations were possible. Perhaps in this timeline, Moore plotted the same points but drew a different conclusion—maybe identifying a linear rather than exponential growth pattern, or focusing on performance metrics other than transistor counts.
The most consequential divergence would be if Moore and the semiconductor industry concluded that regular, predictable scaling of integrated circuits was neither feasible nor economically viable. Without the confidence that transistor counts would reliably double, companies might have pursued very different research and development strategies, focusing on optimizing existing designs rather than continuously shrinking transistor dimensions.
In this alternate timeline, the absence of Moore's Law as both an observation and a goal fundamentally changes the development trajectory of computing technology, with profound implications for the digital revolution that defined the late 20th and early 21st centuries.
Immediate Aftermath
Fragmented Industry Standards (1965-1975)
Without Moore's Law providing a common roadmap, the semiconductor industry in the late 1960s and early 1970s developed in a more fragmented fashion. Rather than aligning around the regular doubling of transistor counts, different companies pursued divergent goals:
- IBM doubled down on mainframe computing, focusing on reliability and backward compatibility rather than dramatic increases in component density.
- RCA, Motorola, and other electronics giants emphasized incremental improvements to existing designs rather than pushing toward miniaturization.
- Fairchild Semiconductor still made advances in integrated circuit technology, but without Moore's explicit goal, progress was less aggressive and less predictable.
When Gordon Moore and Robert Noyce founded Intel in 1968, the company's initial focus was on memory chips rather than microprocessors. Without the guiding principle of exponential transistor growth, Intel developed its early products with more modest density improvements, typically 20-30% per generation rather than 100%.
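The gap between these two growth rates compounds dramatically over time. A brief illustrative sketch (the 25%-per-generation figure is this alternate timeline's hypothetical, not a historical number) shows how far modest generational improvements fall behind consistent doubling:

```python
def growth(rate_per_gen: float, generations: int) -> float:
    """Cumulative growth factor after compounding over several generations."""
    return rate_per_gen ** generations

for n in (5, 10, 15):
    modest = growth(1.25, n)   # hypothetical ~25% improvement per generation
    moore = growth(2.0, n)     # Moore's Law-style 100% improvement per generation
    print(f"after {n:2d} generations: {modest:7.1f}x vs {moore:8.0f}x "
          f"(gap: {moore / modest:.0f}x)")
```

After just ten generations, doubling yields a 1,024-fold improvement against roughly 9-fold for the modest path, a gap of over 100x that keeps widening with every generation.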
Slower Development of the Microprocessor (1971-1976)
The Intel 4004, historically released in 1971 as the first commercial microprocessor, emerged later and with fewer capabilities in this timeline. Without the aggressive scaling targets derived from Moore's Law:
- The first commercial microprocessor appeared around 1974, three years later than in our timeline.
- This delayed microprocessor contained approximately 1,000 transistors instead of 2,300, limiting its functionality.
- Pricing remained higher, as the economic benefits of scaling weren't fully understood or pursued.
Federico Faggin, one of the key engineers behind the 4004, later commented in this alternate timeline: "We were designing with conservative assumptions about what could be manufactured reliably. Without a clear industry target for component density, we opted for approaches we knew would work rather than pushing the boundaries."
Impact on Early Personal Computing (1975-1985)
The absence of Moore's Law had significant consequences for the personal computer revolution:
- The Altair 8800, which launched the personal computer era in 1975 in our timeline, emerged in 1979 in this alternate history, with more limited capabilities.
- Apple Computer still formed, but the Apple I and Apple II offered significantly less processing power, constraining what software could run on these early machines.
- Microsoft's BASIC interpreter and early software required more optimization to run on the limited hardware, slowing development cycles.
Steve Wozniak, co-founder of Apple, faced greater challenges designing the Apple II: "Every byte of memory was precious, and processor cycles were golden. We couldn't count on next year's machines being twice as powerful, so we had to be incredibly efficient with what we had."
Business Models and Industry Structure (1975-1985)
Without the predictable progress of Moore's Law, the semiconductor and computer industries developed different business models:
- Product lifecycles lengthened substantially, with new computer generations appearing every 4-5 years rather than every 2-3 years.
- Backward compatibility became even more crucial, as hardware upgrades were less frequent and more expensive.
- Vertical integration became more common, with companies like IBM maintaining control over both hardware and software longer than in our timeline.
The economic implications were substantial. Without the predictable cost reductions that Moore's Law enabled, computers remained more expensive for longer. The computing industry grew at a slower pace, with global revenues in 1985 reaching only about 60% of what they were in our timeline.
Military and Research Computing (1965-1985)
One area that saw continued strong investment was military and research computing:
- DARPA (Defense Advanced Research Projects Agency) allocated more funding to alternative computing architectures when integrated circuit progress slowed.
- Supercomputing development followed a different path, with more emphasis on parallel processing and specialized architectures earlier than in our timeline.
- University research in computing science focused more on software efficiency and algorithm optimization than on hardware-intensive approaches.
The ARPANET still developed, but with more limited computing resources at each node, its growth was slower and its applications more constrained. By 1985, the network connected fewer institutions and supported more limited functionality than in our actual timeline.
Long-term Impact
Divergent Computing Architectures (1985-2000)
Without Moore's Law driving a singular focus on shrinking transistors, the computing landscape diversified significantly through the 1980s and 1990s:
Specialized Computing Flourishes
- Parallel processing emerged as a dominant paradigm much earlier than in our timeline. Companies like Thinking Machines, Cray, and others developed massively parallel systems to compensate for the slower improvement in single-processor performance.
- Analog computing, almost entirely abandoned in our timeline, maintained a significant niche for certain applications where digital precision wasn't required but speed was essential.
- Optical computing received substantially more investment, with companies like Bell Labs making significant advances in photonic computing by the mid-1990s.
As Dr. Richard Stevens of MIT observed in this alternate 1992: "Without the reliable doubling of digital performance, we're seeing a renaissance of diverse architectural approaches. No single computing paradigm dominates."
Software Evolution
The limitations in hardware capabilities fundamentally changed software development:
- Programming languages emphasized efficiency over developer productivity, with C remaining dominant longer and languages like Python and Java emerging later and with more limited adoption.
- Operating systems evolved more slowly, with command-line interfaces remaining standard for much longer. Microsoft Windows equivalent systems emerged in the mid-1990s rather than the mid-1980s, and with more modest graphical capabilities.
- Software optimization became a central focus of computer science education and professional practice, with companies employing larger teams of algorithm specialists to squeeze performance from limited hardware.
Mobile Computing's Delayed and Different Emergence (2000-2015)
The mobile revolution was profoundly altered in this timeline:
Limited Smartphone Capabilities
- The first devices comparable to smartphones emerged around 2010 (rather than 2007 with the iPhone), offering functionality more similar to 1990s PDAs in our timeline.
- These devices featured:
- Monochrome or limited color displays
- Processing power comparable to a 1995 desktop computer
- Limited memory constraining multi-tasking capabilities
- Battery life measured in hours rather than days due to less energy-efficient components
- Simplified applications with text-centric interfaces
Cellular Network Differences
- Mobile networks developed differently, emphasizing voice and text communication longer while data services remained expensive and limited.
- 3G equivalent technology emerged in the late 2010s rather than the early 2000s.
- Mobile internet penetration reached only about 30% of the global population by 2025, compared to over 65% in our timeline.
Impact on Artificial Intelligence (1985-2025)
The slower growth in computing power profoundly affected artificial intelligence development:
Extended AI Winters
- The AI winters of the 1970s and 1980s lasted longer and cut deeper, with neural network research remaining largely theoretical until the early 2010s.
- Rule-based expert systems remained the dominant AI paradigm through the 2000s, with machine learning limited to specialized applications with modest data requirements.
Limited Deep Learning Revolution
- Deep learning techniques still emerged but required massive infrastructure investments that only the largest research institutions and companies could afford.
- By 2025, AI capabilities in this timeline roughly match those from 2010 in our actual history.
- OpenAI equivalent organizations focused on creating specialized, efficient AI systems rather than general-purpose large language models.
As researcher Fei-Fei Li noted in this alternate 2020: "Our image recognition systems achieve about 75% accuracy on complex datasets—impressive given our hardware constraints, but far from the human-level performance many had hoped for by now."
Internet and Digital Society (1990-2025)
The internet still emerged as a transformative technology, but its development and social impact differed significantly:
Network Infrastructure
- The global internet backbone developed more slowly, with international connectivity limited by the higher costs of networking equipment.
- Household internet adoption followed a slower curve, with broadband-equivalent connectivity reaching only about 40% of homes in developed nations by 2025.
- The "digital divide" between wealthy and developing nations widened further, with digital connectivity becoming an even stronger marker of socioeconomic inequality.
Digital Economy
- E-commerce developed as a supplement to traditional retail rather than a disruptive force.
- Streaming media services emerged in limited forms, with video quality comparable to DVD rather than 4K, and music services offering modest catalogs.
- The platform economy (ride-sharing, home-sharing, etc.) emerged in simplified forms, often blending digital coordination with analog service delivery.
Social Media Evolution
- Text-based and limited-media social platforms dominated longer.
- Facebook-equivalent services emerged in the 2010s rather than the mid-2000s, with functionality closer to early bulletin board systems than multimedia platforms.
- User-generated video content remained niche rather than mainstream due to bandwidth and processing limitations.
Scientific and Medical Computing (1985-2025)
The limitations in computing power had mixed effects on scientific advancement:
Computational Science
- Computational models in climate science, physics, and chemistry developed with greater emphasis on mathematical elegance and efficiency.
- The Human Genome Project equivalent took until approximately 2010 to complete, rather than 2003.
- Drug discovery processes relied more heavily on traditional laboratory methods rather than computational screening, slowing pharmaceutical development.
Medical Imaging and Diagnostics
- MRI and CT scan technology still developed but produced lower-resolution images and required longer scanning times.
- Computer-aided diagnosis emerged more recently and with more limited capabilities.
- Personalized medicine based on genomic data remained more theoretical than practical by 2025.
Industrial and Economic Impact (1985-2025)
The global economy developed along a different technological trajectory:
Manufacturing and Automation
- Robotics and automation advanced more slowly, with industrial robots remaining more specialized and less adaptable.
- Computer-controlled manufacturing became widespread but with systems focused on reliability rather than flexibility.
- 3D printing emerged as a prototyping technology but didn't achieve the precision or material versatility seen in our timeline.
Global Economic Structure
- The technology sector constituted a smaller portion of the global economy, accounting for perhaps 10-15% of global market capitalization rather than 25-30%.
- Technology-driven productivity gains were more modest, resulting in approximately 15-20% lower global GDP by 2025 compared to our timeline.
- Employment shifted toward knowledge work more slowly, with manufacturing and service jobs remaining a larger share of the economy.
By 2025, computing technology in this alternate timeline roughly paralleled our world's technology of the early 2000s—still transformative compared to the pre-digital era, but far less pervasive and powerful than what we experience today.
Expert Opinions
Dr. Margaret Chen, Professor of Computer Architecture at Stanford University, offers this perspective: "Without Moore's Law setting expectations for regular performance improvements, the computing industry would have resembled other industrial sectors more closely. We would have seen periods of rapid innovation followed by plateaus of optimization and refinement. The key difference is that computing technology wouldn't have been exceptional in its continuous exponential improvement. Imagine if cars or airplanes doubled in efficiency every two years—they don't, and computing wouldn't have either without Moore's observation becoming a self-fulfilling prophecy."
James Keller, former chip architect at AMD, Intel, and Tesla, provides a contrasting view: "I believe semiconductor scaling would have happened regardless of Moore's formulation, but at a significantly slower pace and with more diversions into alternative approaches. The physics of miniaturization was always there to be exploited. The industry might have achieved a doubling every 4-5 years rather than every 2 years. The real impact of Moore's Law wasn't the observation itself but how it aligned an entire industry around a common roadmap. Without that alignment, we'd have seen more experimentation but less consistent progress."
Dr. Aisha Rahman, Technology Historian at MIT, contextualizes the broader implications: "The absence of Moore's Law would have fundamentally altered our relationship with technology. The rapid obsolescence cycle we've come to expect—new phones every two years, regularly updating computers—simply wouldn't exist. Digital technology would likely be viewed more like other durable goods: you buy the best you can afford and expect it to last a decade or more. This would have profound implications for consumption patterns, environmental impact, and how we value technological skills. Software developers would be celebrated for efficiency rather than for creating feature-rich applications. It's possible this slower digital evolution might have allowed social adaptation to better keep pace with technological change, potentially avoiding some of the disruptive aspects of our rapid digital transformation."
Further Reading
- The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson
- Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company by Andrew S. Grove
- Moore's Law: The Life of Gordon Moore, Silicon Valley's Quiet Revolutionary by Arnold Thackray, David C. Brock, and Rachel Jones
- The Chip: How Two Americans Invented the Microchip and Launched a Revolution by T.R. Reid
- Code: The Hidden Language of Computer Hardware and Software by Charles Petzold
- Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik