- News & Views
- Chiplet-Based Integration Will Support Semiconductor Next Revolution
By Eric Esteve, PhD, Analyst, IPnest
As the semiconductor industry continues developing smaller process nodes, the economic benefits of Moore’s law are diminishing. Data-centric applications are now the driver of the electronics industry, and these applications demand increasing bandwidth and computing power, ultimately requiring more transistors. We believe that chiplet-based architectures will be the solution to deliver ever-increasing processing capabilities, and ultimately more transistors, in a single package. In the 2000s, Design IP vendors emerged and grew because they enabled semiconductor companies to design Systems-on-Chip (SoCs) using third-party IPs, integrating an ever-increasing number of functions while meeting product time-to-market needs. In the upcoming decade, Design IP vendors will be able to support their customers by offering chiplets that help integrate, within the package, the functions needed to meet the requirements of modern-day processors. Alphawave sponsored the creation of this white paper, but the opinions and analysis are those of the author.
During the 2010s, the benefits of Moore’s law began to fall apart. Moore’s law held that as transistor density doubled every two years, the cost of compute would shrink by a corresponding 50%. The breakdown is due to increased design complexity and the evolution of transistor structure from planar devices to FinFETs, which require multiple-patterning lithography to achieve device dimensions at nodes below 20nm.
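The economics behind this breakdown can be illustrated with a back-of-the-envelope model. All the figures below (wafer cost, density, wafer area) are hypothetical round numbers chosen for illustration, not foundry data:

```python
# Illustrative Moore's-law economics: cost per transistor falls only while
# wafer cost grows more slowly than transistor density. All numbers are
# hypothetical round figures, not actual foundry pricing.

def cost_per_mtransistor(wafer_cost, density_mtx_per_mm2, wafer_area_mm2=70_000):
    """Cost per million transistors for a fully utilized 300mm-class wafer."""
    total_mtx = density_mtx_per_mm2 * wafer_area_mm2
    return wafer_cost / total_mtx

# Classic scaling: density doubles, wafer cost stays roughly flat,
# so cost per transistor halves.
old = cost_per_mtransistor(wafer_cost=3_000, density_mtx_per_mm2=50)
new = cost_per_mtransistor(wafer_cost=3_000, density_mtx_per_mm2=100)

# Advanced nodes: multi-patterning and design complexity push wafer cost
# up almost as fast as density, so cost per transistor barely improves.
adv = cost_per_mtransistor(wafer_cost=5_500, density_mtx_per_mm2=100)

print(f"classic scaling: {new / old:.2f}x cost per transistor")
print(f"advanced node:   {adv / old:.2f}x cost per transistor")
```

Under these assumed numbers, doubling density at flat wafer cost halves the cost per transistor, while the same density gain with a near-doubled wafer cost yields almost no improvement, which is the situation the article describes at sub-20nm nodes.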
At the beginning of this decade, computing needs exploded, mostly due to the proliferation of datacenters and the amount of data being generated and processed. Artificial Intelligence (AI) and techniques like Machine Learning (ML) are now used to process this ever-increasing data, and have led servers to significantly increase their compute capacity. Servers have added many more CPU cores, have integrated larger GPUs used exclusively for ML rather than graphics, and have embedded custom ASIC AI accelerators or complementary FPGA-based AI processing. Early AI chip designs were implemented as large monolithic SoCs, some of them reaching the size limit imposed by the reticle, about 700 sq. mm.
At this point, disaggregation into a smaller SoC plus various compute and I/O chiplets appears to be the right solution. Several chip makers, like Intel, AMD and Xilinx, have selected this option for products going into production. The excellent white paper from The Linley Group, “Chiplets Gain Rapid Adoption: Why Big Chips Are Getting Small”, showed that this option leads to better costs compared to monolithic SoCs, due to the yield impact of larger dies.
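The yield argument can be sketched with the standard Poisson die-yield model. The defect density below is a hypothetical value, not data for any specific node or product:

```python
import math

def die_yield(area_mm2, defect_density):
    """Poisson yield model: probability that a die of the given area is defect-free."""
    return math.exp(-area_mm2 * defect_density)

D0 = 0.001  # defects per sq. mm -- hypothetical value for an advanced node

# One monolithic 700 sq. mm die vs. four 175 sq. mm chiplets of equal total area.
mono_yield = die_yield(700, D0)
chiplet_yield = die_yield(175, D0)

# The cost of *good* silicon scales as 1/yield: a failed big die wastes
# 700 sq. mm, while a failed chiplet wastes only 175 sq. mm. The silicon cost
# ratio (before packaging and D2D overhead) is mono_yield / chiplet_yield.
silicon_cost_ratio = mono_yield / chiplet_yield
print(f"monolithic yield:  {mono_yield:.1%}")
print(f"per-chiplet yield: {chiplet_yield:.1%}")
print(f"chiplet silicon cost vs monolithic: {silicon_cost_ratio:.2f}x")
```

With these assumptions the monolithic die yields roughly 50% while each small chiplet yields around 84%, so the same good silicon costs about 0.59x as much when disaggregated; the savings must of course be weighed against the added packaging and D2D interface cost.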
The major impact of this trend on IP vendors falls mostly on the interconnect functions used to link SoCs and chiplets. At this point (Q3 2021), several protocols are in use, with the industry trying to build formalized standards for many of them. Current leading D2D standards include: i) the Advanced Interface Bus (AIB, AIB2), initially defined by Intel and now offered for royalty-free usage; ii) High Bandwidth Memory (HBM), where DRAM dies are stacked on top of each other above a silicon interposer and connected using TSVs; and iii) Bunch of Wires (BoW) and OpenHBI, two interfaces defined by the Open Domain-Specific Architecture (ODSA) subgroup, an industry group.
The D2D standards called HBM-like are based on a DDR-like protocol: a parallel group of single-ended data wires accompanied by a forwarded clock, currently operating in the 2 GHz to 4 GHz range. By using hundreds of parallel wires over very short distances, these interfaces compete with very-high-speed (VHS) SerDes optimized for eXtra Short Reach (XSR). The parallel, clock-forwarded D2D standards offer a strong advantage, enabling much lower latency and lower power consumption compared to VHS SerDes.
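A rough bandwidth comparison shows why the two approaches compete. The wire counts, clock rate, and lane rate below are illustrative assumptions, not figures from any specific standard:

```python
# Aggregate bandwidth of a parallel, clock-forwarded D2D link vs. a SerDes
# link. All counts and rates below are illustrative assumptions.

def parallel_bw_gbps(n_wires, clock_ghz, ddr=True):
    """Each single-ended wire carries one bit per clock edge (two with DDR)."""
    bits_per_clock = 2 if ddr else 1
    return n_wires * clock_ghz * bits_per_clock  # Gb/s

def serdes_bw_gbps(n_lanes, rate_gbps):
    """Each SerDes lane runs at a high serial rate over a short XSR channel."""
    return n_lanes * rate_gbps

# 512 single-ended wires, forwarded clock at 4 GHz, double data rate...
parallel_d2d = parallel_bw_gbps(n_wires=512, clock_ghz=4)
# ...is comparable to 36 lanes of 112G XSR SerDes.
xsr_serdes = serdes_bw_gbps(n_lanes=36, rate_gbps=112)

print(f"parallel D2D: {parallel_d2d} Gb/s")
print(f"XSR SerDes:   {xsr_serdes} Gb/s")
```

Both hypothetical links land near 4 Tb/s: the parallel interface gets there with many slow wires (hence low latency and low energy per bit, but large beachfront and advanced packaging), while the SerDes gets there with few fast lanes that can also cross cheaper substrates.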
IPnest has evaluated the 2020-2025 forecast for the two forms of D2D IP.
Figure 1: D2D (HBI & SerDes) IP Market 2020-2025 (From “Interface IP Survey & Forecast – 2021”)
Figure 1 shows the forecasted growth of the D2D interface IP category for 2020-2025, growing from $10 million in 2020 to $171 million in 2025 (87% CAGR). This forecast assumes that the chiplet market will pick up momentum in 2023, when most advanced SoCs will be designed in 3nm. This will make integration of high-end IP, like SerDes, more costly and risky, leading to externalizing this functionality into a chiplet designed in more mature nodes like 7nm or 5nm. The same is true for I/O chips and other functions.
The selection of one interconnect type will depend on various factors: latency, power consumption, performance (bandwidth), and packaging technology.
Integration of a central SoC plus compute chiplets in the same package can be seen as the solution to overcome the challenge facing Moore’s law. This strategy is effective not only for scaling server CPUs, but also for the heterogeneous design of complex systems. Intel Ponte Vecchio is a superb example, as it integrates 47 active dies manufactured in five different process nodes, employing 2.5D and 3D interconnect technologies to build a heterogeneous design.
Figure 2: Intel Ponte Vecchio (PVC): Intel employs its 2.5D and 3D interconnect technologies to build a heterogeneous design that integrates 47 active die, which it calls tiles, manufactured in five different process nodes. (Courtesy of Intel)
The base SoC (640 sq mm) integrates the L2 cache, HBM2e memory controller, EMIB bridge and PCIe 5 host, while each compute tile integrates L1 cache and 8 CPU cores. In fact, Intel PVC is made of 47 tiles (Intel’s name for chiplets) interconnected via HBM2e for memory devices or the Advanced Interface Bus (AIB) for the other chiplets. This heterogeneous design allows Intel to integrate more than 100 billion transistors within the same package. Although there are products on the market built using 3D integration of HBM2e memory devices, Intel’s innovation comes from the incorporation of several dozen chiplets, designed on different process nodes, onto a single package substrate. Some chiplets are I/O devices, implemented on less advanced nodes and likely to be reused on other Intel products. Specifically, the link tile consists of an 8-port embedded switch integrating 90G SerDes, targeted at the TSMC 7nm process node.
Heterogeneous chiplet design allows us to target different applications or market segments by modifying or adding just the relevant chiplets while keeping the rest of the system unchanged. New developments can be brought to market more quickly, with significantly lower investment, as a redesign will only impact the package substrate used to house the chiplets. For example, the compute chiplet can be redesigned from TSMC 5nm to TSMC 3nm to integrate a larger L1 cache or higher-performing CPU cores, while keeping the rest of the system unchanged. Alternatively, only the chiplet integrating the SerDes can be redesigned for faster rates on a new process node, offering more I/O bandwidth for better market positioning.
The evolution of newer, faster protocol standards is picking up speed as the industry keeps asking for higher performance. Unfortunately, the various standards are not synchronized by a single organization; a new PCIe standard can come a year (or more) earlier or later than the new Ethernet standard.
Using heterogeneous integration allows silicon providers to adapt to the fast-changing market by changing the design of the relevant chiplet only. Adopting this strategy provides better TTM and minimizes design cost for a new product, while better targeting end markets, compared to a general-purpose SoC. Considering that advanced SoC fabrication requires massive capital expenditures at the 5nm, 4nm and 3nm process nodes, the impact of chiplet architectures in driving future innovation in the semiconductor space is tremendous.
Intel PVC is a perfect example of heterogeneous integration (various functional chiplets: CPU, switch, etc.) that we could call vertical integration, since the same chip maker owns the various chiplet components (except for the memory devices).
Chip makers developing SoCs for high-end applications, such as HPC, datacenter, AI or networking, are likely to be early adopters of chiplet architectures. Specific functions, like SRAM for larger L3 caches or AI accelerators, along with interfaces based on the Ethernet, PCIe or CXL standards, should be the first candidates for chiplet designs. Once these early adopters have demonstrated the validity of heterogeneous chiplets across multiple different business models, and obviously the manufacturing feasibility for test and packaging, an ecosystem will have been created that is critical to support this new technology. At that point, we can expect wider market adoption, not only for high-performance applications.
We could imagine heterogeneous products going further, with a chip maker launching a system made of various chiplets targeting compute and I/O functionality. This approach makes convergence on a D2D protocol mandatory, as an IP vendor offering chiplets with an in-house D2D protocol is not attractive to the industry. An analogy is SoC building in the 2000s, when semiconductor companies transitioned to integrating various design IPs coming from different sources. The IP vendors of the 2000s will inevitably become the chiplet vendors of the 2020s. For certain functions, such as advanced SerDes, or complex protocols, like PCIe, Ethernet or CXL, IP vendors have the best know-how to implement them on silicon.
For complex Design IP, even if simulation-based verification has been run before shipping to customers, vendors have to validate the IP on silicon to guarantee performance. For digital IP, the function can be implemented in an FPGA, because it’s faster and far less expensive than making a test chip. For mixed-signal IP, like a SerDes-based PHY, vendors select the test chip (TC) option, enabling them to characterize the IP in silicon before shipping to customers. Even though a chiplet is not simply a TC, because it will be extensively tested and qualified before being used in the field, the incremental work the vendor must do to develop a production chiplet is far less. In other words, the IP vendor is best positioned to quickly release a chiplet built from its own IP, offering the best possible TTM while minimizing risk. The business model for heterogeneous integration favors the various chiplets being made by the relevant IP vendor (e.g. Arm for Arm-based CPU chiplets, SiFive for RISC-V-based compute chiplets and Alphawave for high-speed SerDes chiplets), since they own the Design IP.
None of this prevents chip makers from designing their own chiplets and sourcing complex design IPs, to protect their unique architectures or implement in-house interconnects. Similar to SoC Design IP in the 2000s, the buy-or-make decision for chiplets will be weighed between core-competency protection and the sourcing of non-differentiating functions. We have seen that Design IP business growth since the 2000s has been sustained by continuous adoption of external sourcing. Both models will coexist (chiplets designed in-house or by an IP vendor), but history has shown that the buy decision eventually overtakes the make.
There is now consensus in the industry that a maniacal focus on following Moore’s law is no longer valid for advanced technology nodes, e.g. 7nm and below. Chip integration is still happening, with more transistors being added per sq. mm at every new technology node. However, the cost per transistor is growing at every new node as well. Chiplet technology is a key initiative to drive increased integration in the main SoC while using older nodes for other functionality. This hybrid strategy decreases both the cost and the design risk associated with integrating other Design IP directly onto the main SoC.
IPnest believes this trend will have two main effects on the interface IP business: one will be the strong growth of D2D IP revenues in the near term (2021-2025), and the other is the creation of a heterogeneous chiplet market to augment the high-end silicon IP market. This market is expected to consist of complex protocol functions like PCIe, CXL or Ethernet. IP vendors delivering interface IP integrated in I/O SoCs (USB, HDMI, DP, MIPI, etc.) may decide to deliver I/O chiplets instead.
Other IP categories impacted by this revolution will include SRAM memory compiler IP vendors, for L3 cache. By nature, the cache size is expected to vary depending on the processor. Nevertheless, designing an L3 cache chiplet can be a way for an IP vendor to increase Design IP revenues by offering a new product type. The NVM IP category can also be positively impacted: since NVM IPs are no longer integrated in SoCs designed on advanced process nodes, offering chiplets would be a way for NVM IP vendors to generate new business.
We think that FPGA and AI accelerator chiplets will be a new source of revenue for ASSP chip makers, but we don’t think they can be strictly classified as IP vendors.
While interface IP vendors will be major actors in this silicon revolution, the silicon foundries addressing the most advanced nodes, like TSMC and Samsung, will also play a key role. We don’t think foundries will design chiplets, but they could decide to support IP vendors and push them to design chiplets to be used with SoCs in 3nm, as they do today when supporting advanced IP vendors marketing their high-end SerDes as hard IP in 7nm and 5nm. Intel’s recent transition to 3rd-party foundries is expected to also leverage third-party IPs, as well as heterogeneous chiplet adoption by semiconductor heavyweights. In that case, there is no doubt that hyperscalers like Microsoft, Amazon and Google will also adopt chiplet architectures… if they don’t precede Intel in chiplet adoption.