Philip Wong, professor of electrical engineering at Stanford University, together with colleagues from MIT, TSMC, the University of California, Berkeley, and his own institution, wrote a paper published by the IEEE in April covering progress in silicon scaling, and based on it gave a keynote speech at the Design Automation Conference in July. They argue that Moore's Law is still in operation, but that the assumptions supporting it have changed. As a result, engineers should pay less attention to simple area scaling of transistor footprint and pitch, and more to the effective density achieved at each successive node.
In other respects, you might think the chip-making industry is returning to its roots. Though not as famous as his article in Electronics magazine published a decade earlier, Gordon Moore's 1975 speech at the International Electron Devices Meeting (IEDM) was what set the cadence Intel executives settled on: a regular doubling of device density every two years. Until then, the industry had been moving even faster, doubling every year. But by 1975, Moore could see the rate of progress slowing.
In that IEDM presentation 45 years ago, Moore saw that 2D geometric scaling was only part of what delivered twice the functionality at the same cost over time. He thought it a considerable part, but certainly not all of it. He expected substantial increases in chip size, together with improvements in circuit design, to deliver the remaining gains. At that time, however, fab owners were only beginning to take advantage of the scaling factors noted by IBM researcher Robert Dennard: smaller, more tightly packed transistors would not only cut cost but also improve energy efficiency.
The transition to CMOS in the 1980s sustained this development until, by the mid-2000s, the industry had exhausted most of the benefits of Dennard scaling. After that, simple 2D scaling became increasingly troublesome.
In recent years, this has been most obvious in SRAM scaling, which historically served as a good guide to increases in density. Although SRAM kept pace with logic down to around 28nm, it has started to fall behind, because it is hard to make incremental improvements to the cell when metal pitch and transistor dimensions scale at different rates.
A team from EDA tool supplier Synopsys will give one of the presentations at the upcoming IEDM, showing how the contributions to scaling have shifted over the past few years.
What Moore called "circuit cleverness" has made a dramatic comeback, albeit in a different form from the one he envisaged. This time it goes by the name of design-technology co-optimisation (DTCO). By letting designers indicate which process changes best suit their circuit layouts, process engineers can make better trade-offs. The effect shows up in SRAM scaling: thanks to changes in cell layout, density has jumped again.
Wong, the Synopsys team and others believe DTCO will be the most important contributor to density over the next decade, on the road to the so-called 1nm node. But pure dimensional scaling is not finished yet. Although little room remains for 2D scaling, the prospects for the third dimension are good, and not just through stacking whole dies as memory standards such as HBM do. You could call it 3D by stealth.
One way to use the vertical dimension is to turn the transistor on its side. This continues the evolution of the field-effect transistor from a purely planar device with a top gate contact to the vertical fin of the finFET. By wrapping the gate around three sides of the channel, the fin gives greater electrostatic control over it. But beyond the 5nm node, a gate-all-around structure is needed. This can be achieved with nanosheets that pass right through the gate electrode. Better still, although it adds complexity and cost to the process, nanosheets can be stacked to obtain more drive current, in much the same way that finFETs routinely use two or more fins, and the area consumed by the stack can be less than that of a multi-fin structure.
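A rough way to see why stacking pays off: drive current scales with effective channel width, and stacked nanosheets accumulate that width vertically within a single device footprint, whereas extra fins cost lateral pitch. The toy Python comparison below uses entirely assumed dimensions (fin height, sheet width, pitches) purely to illustrate the geometry, not to represent any real process.

```python
# Toy comparison of effective channel width per unit of layout footprint for
# a multi-fin finFET versus a stack of nanosheets. All dimensions are
# illustrative assumptions, not process data.

def finfet_width(n_fins, fin_h=50e-9, fin_w=7e-9):
    """Gate wraps three sides of each fin: two sidewalls plus the top."""
    return n_fins * (2 * fin_h + fin_w)

def finfet_footprint(n_fins, fin_pitch=30e-9):
    """Each extra fin costs one fin pitch of lateral area."""
    return n_fins * fin_pitch

def nanosheet_width(n_sheets, sheet_w=30e-9):
    """Gate-all-around: each stacked sheet conducts on top and bottom faces."""
    return n_sheets * 2 * sheet_w

def nanosheet_footprint(sheet_w=30e-9):
    """Sheets stack vertically, so footprint is set by a single sheet's width."""
    return sheet_w

fin = finfet_width(2) / finfet_footprint(2)
sheet = nanosheet_width(3) / nanosheet_footprint()
print(f"channel width per footprint: finFET x{fin:.2f}, nanosheet stack x{sheet:.2f}")
```

On these made-up numbers, the three-sheet stack delivers more channel width per unit of layout width than the two-fin device, which is the sense in which the stack "consumes less area".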
The stumbling block to nanosheet scaling is the separation required between the n-channel and p-channel devices of a CMOS pair. But Imec proposed an answer last year: the forksheet, which forms the complementary n-doped and p-doped sheets against a common dielectric pillar. In this way a complete CMOS inverter can be built into a single transistor-like structure, saving around 30% of the area.
Getting power into and out of a logic cell consumes valuable area, and this is another place where the third dimension helps. Imec's proposal at the 2018 VLSI Symposium was to bury the power rails beneath the silicon surface. The next step is the CFET: a two-deck structure that forms the n-channel transistor of an inverter directly on top of its p-channel sibling.
At the upcoming IEDM, Intel engineers will describe their take on a nanosheet-based, CFET-type structure. The combined transistor uses epitaxy to build vertically stacked source-drain structures, with the threshold voltage tuned separately for the two transistor types. Although the gate length used for this work is comparatively long, at around 30nm, the Intel team predicts that cell size can be cut significantly thanks to the self-aligned stacking.
According to Synopsys' calculations, the CFET does a lot for SRAM, although some DTCO is needed to realise the gain. One disadvantage of the CFET is that stacking introduces another source of variability, but design tweaks can help here as well. The most compact structure, for example, does not rely entirely on gate-all-around transistors: it includes a pseudo p-channel transistor with a three-sided gate to achieve good enough write performance.
Even as transistor density climbs, a major problem for chip design remains the parasitic resistance and capacitance of long metal interconnects. This may force future processes to shift from bulk copper to more exotic metals such as ruthenium.
Intel has proposed a design-based alternative built on the observation that, although cutting resistance and capacitance together seems desirable, not all circuit paths benefit in the same way. Separate paths can benefit from individually tuned resistance and capacitance, and that is the intuition guiding Intel's work on so-called "staggered interconnects".
Rather than making every parallel line identical, the staggered approach alternates tall and short lines, with the short lines sitting on a taller stack of insulating material. This cuts the net effective capacitance between the lines: the tall lines, the ones most exposed to crosstalk and similar effects, are effectively further separated from each other. Intel's simulations show that register files and memory arrays can benefit from the structure, with decoders and word lines taking the taller lines while bit lines use the shorter traces. Longer interconnects also show an improvement: more traces can be packed into a smaller area without increasing RC delay.
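To make the capacitance argument concrete, here is a minimal parallel-plate sketch in Python. The dimensions, the dielectric, and the assumption that only the facing sidewall overlap between neighbours matters are all simplifications introduced for illustration; real parasitic extraction is far more involved.

```python
# Toy model of the staggered-interconnect idea: alternating tall and short
# lines reduce the facing sidewall area between neighbours, cutting coupling
# capacitance. All dimensions and material constants are assumptions.

EPS = 8.85e-12 * 3.9   # permittivity of an assumed SiO2-like dielectric (F/m)
RHO = 1.7e-8           # resistivity of copper (ohm*m)

def line_resistance(length, width, height):
    """Resistance of a rectangular wire of uniform cross-section."""
    return RHO * length / (width * height)

def coupling_cap(length, spacing, overlap_height):
    """Parallel-plate estimate of sidewall coupling to one neighbour."""
    return EPS * length * overlap_height / spacing

L, W, S = 10e-6, 20e-9, 20e-9   # wire length, width, spacing (assumed)
h = 40e-9                       # tall-line height (assumed)

# Uniform layout: every line is 40nm tall and fully faces both neighbours.
rc_uniform = line_resistance(L, W, h) * 2 * coupling_cap(L, S, h)

# Staggered layout: the neighbouring short lines are raised on insulator so
# the facing overlap with the tall line shrinks to an assumed 10nm.
overlap = 10e-9
rc_tall = line_resistance(L, W, h) * 2 * coupling_cap(L, S, overlap)

print(f"uniform RC: {rc_uniform:.3e} s, staggered (tall line) RC: {rc_tall:.3e} s")
```

In this crude model the tall line's RC product drops in direct proportion to the lost sidewall overlap, which is the mechanism the staggering exploits.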
Synopsys says these DTCO-inspired designs bring extra complexity, which will push wafer costs up by an average of 13% per node. But effective density still scales reasonably down to the 1nm node, and it should still be possible to cut the cost per transistor by 32% per node.
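Compounding the two percentages quoted above shows what they imply node over node: if wafer cost rises 13% while cost per transistor falls 32%, the effective transistor count per wafer must grow by roughly 1.13/0.68 ≈ 1.66x per node. The starting values in this Python sketch are normalised to 1 and are not from the article; only the per-node ratios are.

```python
# Compounding the quoted per-node trends: wafer cost +13%, cost per
# transistor -32%. Starting values are normalised, not real figures.

wafer_cost = 1.0           # normalised wafer cost at the starting node
cost_per_transistor = 1.0  # normalised cost per transistor

for node in range(1, 6):   # five hypothetical node transitions
    wafer_cost *= 1.13
    cost_per_transistor *= 1.0 - 0.32
    # Transistors per wafer implied by the two trends together.
    density = wafer_cost / cost_per_transistor
    print(f"node +{node}: wafer cost x{wafer_cost:.2f}, "
          f"cost/transistor x{cost_per_transistor:.2f}, "
          f"effective density x{density:.2f}")
```

Five such transitions would multiply effective density by more than 12x while wafer cost rises by a little over 80%, which illustrates why the economics can still work despite the rising process complexity.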
This is not the Moore's Law of old, but the scaling should last about another decade. How many companies will be able to command the capital needed to justify the start-up costs is another matter.