HBM2 is a high-performance, 3D-stacked memory solution that leverages 2.5D silicon interposer technology. A typical HBM2 system has an interposer die on which two or more interfacing dies (known as top dies) are assembled into a single package. Such a system is commonly called a 2.5D system-in-package (SiP), in which the 3D-stacked memory dies (HBM2) and ASIC dies interface through fine-pitch interposer routes connecting fine-pitch micro-bumps. The result is a very wide interface that achieves very high bandwidth, low power, and a small form factor, making it the preferred architecture for high-bandwidth applications. In fact, HBM2 (x1024) offers bandwidths of up to 256 GB/s, compared to 4 GB/s for DDR3 (x16), while consuming roughly one-third the power per bit transferred.
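The bandwidth gap follows directly from interface width: HBM2 presents a 1024-bit data interface, versus 16 bits for a DDR3 x16 part. A back-of-the-envelope calculation (assuming roughly 2 Gbps per pin for both, per the figures above; the helper name is ours) illustrates it:

```python
def peak_bandwidth_gbytes(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given interface width and per-pin data rate."""
    return width_bits * pin_rate_gbps / 8  # divide by 8 to convert bits to bytes

# HBM2 x1024 at 2 Gbps/pin vs. DDR3 x16 at ~2 Gbps/pin
print(peak_bandwidth_gbytes(1024, 2.0))  # 256.0 GB/s
print(peak_bandwidth_gbytes(16, 2.0))    # 4.0 GB/s
```

The 64x bandwidth advantage comes almost entirely from width, not per-pin speed, which is what the fine-pitch interposer routing makes possible.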
While 2.5D SiPs provide many advantages in terms of area reduction, high bandwidth, lower power per pin, and smaller package size, they also present challenges in interoperability, 2.5D design, overall SiP design, packaging, test, and manufacturing. Designing such a system requires careful planning in the physical design of the interposer, along with signal integrity analysis, static timing analysis (STA), rail analysis, and power integrity analysis. The test and debug challenges can be addressed through built-in test and diagnostic features, such as probe pads and loop-back paths that isolate issues within the various IP subsystem components. These features not only ease test and debug, but also help with yield management and yield improvement.
One of the most notable advances enabling the development of HBM2 ASIC SiPs is the HBM2 IP subsystem, which consists of the controller, PHY, and die-to-die I/O. The IP translates user requests into HBM command sequences (e.g., activate and precharge) and handles memory refresh, bank/page management, and power management on the interface. The high-performance, low-latency controller leverages HBM's parallel architecture and protocol efficiency to achieve maximum bandwidth. One such subsystem solution is available from Open-Silicon. Its IP includes a scalable, optimized PHY and the die-to-die custom I/O needed to drive the interface between the logic die and the memory die stack on the 2.5D silicon interposer. The subsystem was silicon-proven in 16 nm FinFET technology on a 2.5D HBM2 ASIC SiP platform, which successfully demonstrated high-bandwidth data transfer and interoperability between the HBM2 IP subsystem and the HBM2 memory die stack. This particular subsystem achieves per-pin data rates of 1.6 Gbps or 2 Gbps and supports interposer trace lengths of up to 5 mm. That means it is capable of a full 8-channel connection from a 16 nm SoC to a single HBM2 memory stack at 2 Gbps, achieving bandwidths of up to 256 GB/s. The company is also working on its next-generation HBM2 IP subsystem in 7 nm FinFET technology, which it says will feature a 2.4 Gbps per-pin data rate and bandwidths of more than 300 GB/s.
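To make the controller's request-translation role concrete, here is a toy sketch of the open-page bookkeeping described above: the controller tracks which row is open in each bank and turns a read request into the minimal DRAM command sequence (precharge to close a conflicting row, activate to open the target row, then read). The command names follow common DRAM convention; the class and its interface are purely illustrative, not Open-Silicon's API.

```python
class ToyHbmController:
    """Illustrative open-page controller: maps reads to DRAM command sequences."""

    def __init__(self) -> None:
        self.open_row = {}  # bank -> currently open row

    def read(self, bank: int, row: int, col: int) -> list:
        cmds = []
        if self.open_row.get(bank) != row:     # page miss, or bank has no open row
            if bank in self.open_row:
                cmds.append(("PRE", bank))     # precharge: close the conflicting row
            cmds.append(("ACT", bank, row))    # activate: open the target row
            self.open_row[bank] = row
        cmds.append(("RD", bank, col))         # column read from the now-open row
        return cmds

ctrl = ToyHbmController()
print(ctrl.read(0, 5, 3))  # [('ACT', 0, 5), ('RD', 0, 3)]   bank idle
print(ctrl.read(0, 5, 7))  # [('RD', 0, 7)]                  page hit
print(ctrl.read(0, 9, 1))  # [('PRE', 0), ('ACT', 0, 9), ('RD', 0, 1)]  page miss
```

A real HBM2 controller layers refresh scheduling, timing-parameter enforcement, and per-channel arbitration on top of this kind of state tracking, which is where the protocol-efficiency gains mentioned above come from.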
About the Author
Richard Nass is the Executive Vice-President of OpenSystems Media. His key responsibilities include setting the direction for all aspects of OpenSystems Media’s Embedded and IoT product portfolios, including websites, e-newsletters, print and digital magazines, and various other digital and print activities. He was instrumental in developing the company’s online educational portal, Embedded University. Previously, Nass was the Brand Director for UBM’s award-winning Design News property. Prior to that, he led the content team for UBM Canon’s Medical Devices Group, as well as all custom properties and events in the U.S., Europe, and Asia. Nass has been in the engineering OEM industry for more than 25 years. In prior stints, he led the content team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering websites. Nass holds a BSEE degree from the New Jersey Institute of Technology.