Moore’s Law has never been a law of nature in the strict sense. It’s more a rule of thumb based on a prediction made by Intel co-founder Gordon Moore in 1965. He observed that transistors were shrinking at a fairly regular rate, and predicted that the number of transistors (the building blocks of semiconductor integrated circuits) that could fit on a silicon chip would roughly double every two years. Moore’s prediction has turned out to be surprisingly accurate, and it has allowed the industry to forecast how quickly semiconductor chips would become smaller and more powerful.
Introduced in 1971, the modern microprocessor celebrated its fiftieth anniversary last year. That first modern microprocessor was the Intel 4004, a chip built on a 10-micron process and containing 2,300 transistors. Today, a similarly sized microprocessor contains billions of these tiny transistors.
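Moore's observation is easy to put into code. Here is a quick back-of-the-envelope sketch, assuming a clean doubling every two years starting from the 4004's 2,300 transistors (real progress was lumpier than this):

```python
# Back-of-envelope Moore's Law projection: transistor count
# doubling every two years, starting from the Intel 4004 (1971).
def projected_transistors(year, base_year=1971, base_count=2300):
    """Naive projection: the count doubles once every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run it out to 2021 and the naive projection lands in the tens of billions, which is roughly where today's largest chips sit.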
In addition to giving us access to the totality of human knowledge, the phones in our pockets today are far more powerful computers than the ones NASA used to land the Apollo missions on the Moon around the time Gordon Moore made his now-famous prediction. All the innovation of the past half-century has been possible because microprocessors kept getting smaller, faster, more powerful, and less expensive.
In a sense, Moore’s Law even became a self-fulfilling prophecy: every party in the manufacturing chain – researchers, designers, manufacturers, material suppliers – worked toward the same goal. Everyone knew that transistor counts were supposed to double every two years, so they collaborated to make sure designs that delivered that doubling could be brought to market on schedule.
But Moore’s Law is reaching its limit, and it has slowed considerably over the past decade. In the early 1970s, a process node was around 10,000 nanometers (nm); today, leading nodes are down to 5nm, a scale of only a few dozen silicon atoms. We are approaching the lower limit of size. But electronics consumers want things to keep getting faster, smaller, and more powerful, so the semiconductor industry has been investigating what’s next and developing solutions to the stagnation of Moore’s Law.
New Approaches, New Innovation, New Techniques
Necessity, as the saying goes, is the mother of invention. Once a transistor reaches one atom in width, it’s physically impossible to make it smaller. So, when silicon chips began to approach these lower limits of size, researchers began to develop novel approaches to increasing processing capability while the footprint of the chip remained constant.
Historically, all silicon chips were manufactured on a single plane, and were therefore essentially two-dimensional. Several years ago, innovators began to ask: “what if we took these 2D transistors and made them 3D?” This led to the development of what are known as FinFET transistors, in which the gate wraps around multiple sides of the signal channel. The result is higher performance than similarly sized planar chips.
Another technique, referred to as nanogates, increased the surface area of the transistor gate, allowing manufacturers to pack more processing capability into the same footprint.
Neither of these techniques, however, was ever going to get us very far on its own. Some have suggested moving away from silicon as a substrate toward organic compounds, which show some promise of increasing capacity. While this approach has a certain appeal, it faces a major limitation: the silicon-based chip industry is deeply entrenched.
Every fabrication facility is built around silicon manufacturing, all the engineering is based on silicon, and all the materials suppliers have organized their supply chains around silicon. To move to a different base compound, the entire industry would have to change its most fundamental common element. That’s not impossible, but it would take a lot of time and a great deal of money.
What we need are approaches that can deliver the gains we require without scrapping the fundamentals of this massive, multi-billion-dollar industry.
Redefining Moore’s Law
Since its origin, Moore’s Law has been defined primarily by the transistor: it measured how many transistors could fit in a given unit area. Historically, more transistors meant greater processing capability and performance, while smaller area meant lower power consumption, smaller devices, and frankly, lower cost.
What if we could redefine the implications of Moore’s Law? People are realizing that the number of transistors alone no longer matters; what truly matters is how the system performs. A processing system can be thought of as two main elements, processor and memory, and system performance depends on both of those elements plus how they communicate with each other.
So, what if we could start to look at Moore’s Law this way? What if, instead of the relatively simplistic way we’ve measured it in the past, we looked at processing now as a combination of three traits:
- Processor capability: essentially, how many transistors and how fast they work
- Memory: both in terms of speed and capacity
- Communication between the two: how many wires connect them, and whether the bus is narrow or wide
Keep in mind that, like any chain, this three-link processing chain is only as strong as its weakest link. It’s therefore critical to strengthen each of these links as much as possible so that the whole processing system is as robust and well optimized as it can be.
There are different ways to do this, and it has already begun. For example, roughly two decades ago, Intel took the approach of keeping a narrow bus between the processor and memory, while accelerating the speed of the processor. On the other hand, AMD chose to make the bus very wide, and run the processor at a lower speed. Both approaches achieved comparable gains in bandwidth.
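The tradeoff is easy to see with peak-bandwidth arithmetic: bandwidth is simply bus width times transfer rate. The figures below are purely illustrative, not the actual Intel or AMD specifications:

```python
def bandwidth_gb_s(bus_width_bits, transfer_rate_ghz):
    """Peak bandwidth in GB/s: bits per transfer x transfers per second."""
    return bus_width_bits * transfer_rate_ghz / 8  # 8 bits per byte

# Two routes to the same bandwidth (illustrative numbers):
narrow_fast = bandwidth_gb_s(bus_width_bits=64, transfer_rate_ghz=4.0)
wide_slow = bandwidth_gb_s(bus_width_bits=256, transfer_rate_ghz=1.0)
print(narrow_fast, wide_slow)  # both 32.0 GB/s
```

A narrow bus run fast and a wide bus run slow can land on the same number, which is why both design philosophies survived.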
Regardless of which method is chosen, or whether some novel approach is taken, the industry needs to look at how to create an assembly platform whose interconnect can open the floodgates of communication well into the future, so that the system is no longer limited by bus width. Today, that limitation very much exists.
In today’s state-of-the-art technology, a graphics processor sits on an interposer next to high-bandwidth memory (HBM). This model is limited by the number of wires that can be routed, so currently only 1,024 signals run between the memory and the processor.
By increasing the interconnect density, it is possible to dramatically increase the total bus width between the processor and memory, for example by stacking HBM in width instead of in depth. This, in turn, encourages architects and designers to pursue higher-performance memories and processors.
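A rough sketch of the arithmetic, assuming 1,024 data signals per HBM stack and an illustrative 2 Gb/s per pin (actual per-pin rates vary by HBM generation):

```python
def hbm_bus_width(stacks, signals_per_stack=1024):
    """Total data signals when HBM stacks sit side by side ('in width')."""
    return stacks * signals_per_stack

def peak_bandwidth_gb_s(total_signals, gbps_per_pin=2.0):
    # Assumed per-pin rate; real rates depend on the HBM generation.
    return total_signals * gbps_per_pin / 8

for stacks in (1, 2, 4):
    width = hbm_bus_width(stacks)
    print(f"{stacks} stack(s): {width}-bit bus, "
          f"{peak_bandwidth_gb_s(width):.0f} GB/s peak")
```

Holding the pin rate fixed, every doubling of interconnect width doubles the peak bandwidth; the limiting factor is how many connections the assembly can physically support.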
The Benefits of 3D Architectures
There is also a new approach called hybrid bonding 3D interconnect – also known as DBI or hybrid interconnect – which has the potential to expand the horizon of Moore’s Law. In this model, instead of a single layer of transistors on a silicon wafer, we build additional layers and stack them on top of one another, connecting them vertically as well as horizontally.
In today’s HBM approach, for example, two prepared dies are connected by solder. Even state-of-the-art thermocompression solder connections are limited in interconnect pitch by the inherent constraints of the assembly techniques themselves. For example, the minimum pitch used in HBM today is 55 microns. At that pitch it is very difficult to expand bandwidth further: the HBM is limited to a 1,024-bit bus, and the memory dies can stack only in depth, not in width.
Adeia invented and pioneered a hybrid bonding technology referred to as DBI (direct bond interconnect) that dramatically increases the number of vertical connections between two surfaces. With DBI technology, a transistor in an active chip can see the transistors in the chips above and below it much as it sees a transistor in the same chip.
This is possible because with DBI, the horizontal and vertical interconnects are rendered roughly equal in number. This development points the way to the future of three-dimensional electronics architecture.
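The density gain can be sketched with simple geometry: on a square grid, vertical connections per unit area scale with the inverse square of the pitch. The 1-micron hybrid-bonding pitch below is an illustrative figure, not a product specification:

```python
def connections_per_mm2(pitch_um):
    """Vertical connections per square millimeter on a square grid."""
    per_mm = 1000 / pitch_um  # connections along one millimeter
    return per_mm ** 2

solder_55um = connections_per_mm2(55)  # ~330 connections per mm^2
hybrid_1um = connections_per_mm2(1)    # 1,000,000 connections per mm^2
print(f"density gain: ~{hybrid_1um / solder_55um:,.0f}x")
```

Shrinking the pitch from 55 microns to an assumed 1 micron multiplies the connection count by roughly three thousand, which is the sense in which vertical interconnects can approach parity with horizontal ones.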
Our DBI technique essentially works by taking two silicon chips with exposed contacts on their surfaces and joining them so that the surfaces fuse at a relatively low temperature (which reduces damage to the transistors themselves while creating an insulated, integrated, three-dimensional circuit). The finished product may look like a stack of chips, but it functions as a single chip.
These chips are in use today. But in the future, as they become more powerful and more commonplace, they will deliver bandwidth unattainable by any other technique.
Power efficiency is another benefit of this 3D architecture approach. Within each of these chips, every transistor sends and receives signals billions of times per second. If those processing increases were spread out horizontally, the signals would have to travel farther and would therefore require more power.
When we move into the vertical dimension to expand our processing landscape, a transistor can reach connections next to it, above it, and below it, so signals can take much more efficient paths and overall power consumption can be reduced. Stacking the architecture also gives us the flexibility to aggregate chips, for example pairing a latest-node chip with a legacy-node chip in one stack, which further increases efficiency and performance while reducing cost.
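A toy model makes the point. The dynamic energy to drive a wire scales roughly with its capacitance, which in turn scales with wire length; the capacitance-per-millimeter and supply-voltage figures below are illustrative assumptions, not measured values:

```python
def switching_energy_pj(wire_length_mm, cap_pf_per_mm=0.2, vdd=0.8):
    """Toy model: E = C * V^2, with capacitance proportional to length."""
    c_pf = cap_pf_per_mm * wire_length_mm
    return c_pf * vdd ** 2  # picojoules per full voltage swing

horizontal = switching_energy_pj(5.0)   # 5 mm route across a planar die
vertical = switching_energy_pj(0.01)    # ~10 um hop to the die above
print(f"energy ratio: ~{horizontal / vertical:.0f}x")
```

Because the capacitance and voltage terms cancel, the ratio is just the ratio of wire lengths: under these assumptions a short vertical hop costs hundreds of times less energy per signal than a long horizontal route.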
Just a few short years ago, many in the industry worried that Moore’s Law was coming to an end, with dire consequences for the chip industry. With these innovations and new ways of thinking, we can keep steadily increasing processing capability, possibly for another fifty years. By slightly redefining Moore’s Law and opening up the vertical dimension in the architecture, we have opened a new vertical for the industry and given the law more life.