Disclaimer: Opinions shared in this and all my posts are mine, and mine alone. They do not reflect the views of my employer(s), and are not investment advice.
Every college course on computing starts with Moore’s law. It is not a physical law - it is just an empirical observation - but it has held largely consistent for over half a century. That doesn’t make it a fluke. To understand the semiconductor industry, it is important to first understand Moore’s law.
How Moore’s Law Started:
In 1965, Gordon Moore, then at Fairchild Semiconductor, authored a paper titled “Cramming more components onto integrated circuits”. The paper describes how increasing the number of “components” (mainly transistors) on an integrated circuit decreases the cost per component, but also decreases the yield. So each chip generation has an optimal number of components per integrated circuit that results in the lowest manufacturing cost per component. Moore observed that this “optimal” number of components doubled every year.
Reference: Cramming more components onto integrated circuits
At the time of publishing, this paper read more like an advertisement for integrated circuits as the cheapest way to scale computing. In 1968, Gordon Moore went on to co-found Intel, and the exponential growth observed in the paper became a key part of Intel’s chip roadmap. The paper does mention that “over the long term, the rate of increase is a bit more uncertain”, and that held true. Intel initially worked on memory chips, and saw that after 1975 the one-year cycle was hard to maintain - memory density improved by about 4x every 3 years. Eventually, from the 1980s, Moore revised the trend to “doubling of components every two years”. Once again, in 1985, the trend was changed to “doubling every 18 months”. Even today, the number differs across sources. But in general:
Moore’s law states that the number of transistors in a chip doubles in every new generation of chips (1-2 years).
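To get a feel for what that compounding implies, here is a hypothetical sketch. It assumes a clean doubling every generation and uses the ~2,300-transistor Intel 4004 (1971) as a starting point; the tidy two-year cadence is a simplification, not historical data:

```python
# Idealized Moore's law: the transistor count doubles every generation.
# The starting point (~2,300 transistors, Intel 4004, 1971) is real;
# the clean two-year cadence is a simplifying assumption.
def transistors_after(generations: int, initial: int = 2_300) -> int:
    """Transistor count after `generations` doublings."""
    return initial * 2 ** generations

# Roughly 25 two-year generations fit between 1971 and 2021:
print(f"{transistors_after(25):,}")  # about 77 billion
```

Twenty-five doublings turn a few thousand transistors into tens of billions - which is roughly the ballpark of today's largest chips, and why a modest-sounding "doubling" rule reshaped an entire industry.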
Why did Moore’s Law become important?
Interestingly, the term Moore’s Law was coined after the paper was released - by Carver Mead, in 1970. There are two main reasons why Moore’s Law became important:
Intel’s roadmap was built on Moore’s Law. And Intel was the most important integrated circuit company of the time. So, other chip companies followed Intel’s roadmap.
The chip industry has a long ship cycle (from idea to fabricated chip takes 1-2 years). Having an accurate prediction of state-of-the-art manufacturing allowed companies to make claims about future products before they were ready.
Dennard Scaling - how computer architects interpret Moore’s Law
Moore’s Law promised more transistors on a chip, but did not directly say anything about processor performance. Robert Dennard, in 1974, observed that as transistors got smaller following Moore’s law, the power density of the chip stays constant. Essentially, this meant that as Moore’s law continued, we could pack more transistors into a chip of the same size without consuming any extra power. For chip designers, every new chip generation gives some “free” transistors - available to use, but at no extra power cost. These extra transistors were used to improve performance by supporting:
Higher Clock Frequency
Higher IPC (Instructions Per Cycle)
Essentially, Dennard Scaling resulted in an increase in performance/watt that tracked Moore’s Law. Typically, this involved more sophisticated microarchitecture for processor cores, and increased on-chip memory to support advanced computer architecture techniques.
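The constant-power-density claim can be sketched with the classic scaling factors. These are the idealized textbook numbers (everything normalized to the previous generation, with an assumed 0.7x linear shrink), not measurements from any real process:

```python
# Dennard scaling sketch: every linear dimension shrinks by s = 0.7 per
# generation. All quantities are normalized to the previous generation;
# the factors are idealized textbook values, not measured data.
s = 0.7

capacitance = s          # gate capacitance shrinks with dimensions
voltage_sq = s ** 2      # supply voltage scales with s, so V^2 scales with s^2
frequency = 1 / s        # shorter channels switch proportionally faster

# Dynamic power per transistor: P = C * V^2 * f
power_per_transistor = capacitance * voltage_sq * frequency  # ~0.49x

# A 0.7x linear shrink fits ~2x the transistors into the same area
transistors_per_area = 1 / s ** 2                            # ~2.04x

power_density = power_per_transistor * transistors_per_area
print(round(power_density, 6))  # 1.0: power per unit area is unchanged
```

Each transistor uses about half the power, but there are twice as many of them per unit area - the two effects cancel, which is exactly the “free transistors” bargain described above.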
How Dennard Scaling (and in turn Moore’s Law) affects software:
So far, Moore’s Law and Dennard Scaling only talk about semiconductor chips. But in the 1970s, around the same time that Moore’s Law gained popularity, companies like Microsoft and Oracle were founded. The success of these companies was strongly tied to these two laws, because:
Transistor scaling has enabled newer computing platforms (Workstations → Personal Computer → Smartphones → AR/VR, and whatever is next)
Improvement in hardware performance drives the next generation of software (Better OS → Internet → Cloud Computing → AI/ML)
When software stakeholders talk about Moore’s law, they usually talk about the performance gains. So both Dennard scaling and Moore’s law are used interchangeably to imply “better computing performance”.
What does it take to keep this exponential trend going?
In one of Gordon Moore’s interviews, some years after his paper, he mentioned that the continuing trend surprised even him. He said, “The industry has made Moore’s law into a self-fulfilling prophecy.” The roadmap of every company involved is centered around keeping this trend alive. These are some key stakeholders:
The chip fabrication companies (Intel, TSMC, Samsung, etc) plan their next semiconductor node (like 7nm, 5nm, 3nm, etc). The node name typically shrinks by a factor of about 0.7 each generation (7nm × 0.7 ≈ 5nm), because a 0.7x linear shrink halves the area (0.7 × 0.7 ≈ 0.5), accounting for the doubling in transistor density
Speed and capacity are very important for a foundry - a lot of chip and software release timelines are planned around the foundry’s commitment.
As the node is decided, the companies evaluate if all their equipment can support the new node. The timeline for major node jumps aligns strongly with the availability of better technology from equipment manufacturers (Like ASML, Applied Materials, etc)
One of the biggest bottlenecks at the latest nodes is the lithography equipment (it contributes around 20% of the cost of building a foundry) - ASML is the sole supplier of some of this equipment, making them a key player in keeping Moore’s law alive.
To deliver the next nodes, the foundries also work closely with EDA tool vendors (like Cadence, Synopsys, etc) to come up with a proprietary Process Design Kit (PDK) that chip design companies use.
Once the foundries promise the next node, chip design companies (Nvidia, AMD, Qualcomm, etc) plan and design their next generation of processors.
The PDK allows design teams to set realistic area and power targets
Performance is optimized within a reasonable area and power budget
New software is developed to align with new chip launches
This was not the trend in the past, but with rising competition, software that best utilizes the hardware is ready to ship as soon as the hardware becomes available (especially on smartphones)
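The node-naming arithmetic mentioned in the list above can be sketched in a few lines. The “nm” figures are marketing labels rather than physical gate lengths, but the 0.7x shrink convention is real:

```python
# Node-name arithmetic: each generation's label is ~0.7x the previous one,
# because a 0.7x linear shrink roughly doubles transistor density.
node = 7.0        # current node, "7nm"
shrink = 0.7      # linear shrink factor per generation

next_node = node * shrink      # ~4.9, marketed as "5nm"
area_ratio = shrink ** 2       # ~0.49: the same circuit in about half the area
density_gain = 1 / area_ratio  # ~2x transistors in the same area

print(round(next_node, 1), round(density_gain, 2))
```

So the node names are really just the doubling cadence expressed as a linear dimension: halving the area each generation means shrinking each side by √0.5 ≈ 0.7.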
Is Moore’s Law slowing? What’s next?
While writing this, I found multiple posts from more than 10 years ago claiming that Moore’s law is dead. And yet, the industry has found many ways to continue scaling transistors:
Better lithography equipment (Immersion → EUV → High NA EUV? → X-ray?)
New transistor gate structures (polysilicon gates, FinFETs, etc)
Better fabrication techniques (Twin scan system, multiple patterning, etc)
On the other hand, Dennard scaling held until about 2010, after which power density started to increase as transistor size decreased (due to increased leakage current and heat dissipation). So since then, the original thesis of Dennard scaling no longer holds. However, chip designers and architects still push to deliver the performance gains that Dennard scaling promised, through:
Multicore processors
Domain specific processors (GPUs, Accelerators, etc)
So while it is definitely true that Moore’s law and the accompanying performance improvements are slowing, this is not a one-man show. A lot of very smart people stubbornly believe in improving computer performance and keeping Moore’s Law alive, so you can never bet against the semiconductor industry. I read a quote which sums things up nicely:
“If you see a duck moving calmly in water, remember that what you cannot see is how frantically it is flapping its legs underwater”
References:
Original Moore’s Law paper - https://download.intel.com/newsroom/2023/manufacturing/moores-law-electronics.pdf
Intel Website for Moore’s Law - https://www.intel.com/content/www/us/en/newsroom/resources/moores-law.html#gs.dt5u55