Similarly, AMD’s MI300 AI accelerator, touted as the world’s fastest AI hardware, packs eight HBM stacks into each unit, with each stack comprising 12 vertically stacked DRAM dies connected by through-silicon vias (TSVs) atop a base logic die.
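For a rough sense of what that configuration implies, the back-of-the-envelope Python sketch below multiplies out total capacity and aggregate bandwidth. The eight stacks and 12-high die count come from the article; the per-die density and per-stack bandwidth are assumed values for illustration, not vendor-confirmed specifications.

```python
# Back-of-the-envelope arithmetic for an MI300-class accelerator.
# Per-die density and per-stack bandwidth below are assumptions
# for illustration, not confirmed specifications.

HBM_STACKS = 8         # HBM stacks per accelerator (from the article)
DIES_PER_STACK = 12    # 12-high DRAM stack (from the article)
GB_PER_DIE = 2         # assumed 16 Gb (2 GB) DRAM die
GBPS_PER_STACK = 663   # assumed per-stack bandwidth in GB/s

capacity_gb = HBM_STACKS * DIES_PER_STACK * GB_PER_DIE
bandwidth_tbps = HBM_STACKS * GBPS_PER_STACK / 1000

print(f"Total HBM capacity: {capacity_gb} GB")            # 192 GB
print(f"Aggregate bandwidth: ~{bandwidth_tbps:.1f} TB/s") # ~5.3 TB/s
```

Under these assumed figures, stacking is what delivers the capacity and bandwidth: each added die in a stack multiplies capacity without enlarging the package footprint.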
“With increased bandwidth requirements, HBMs, which are essentially vertical stacks of interconnected DRAM chips, are in growing demand,” said Arun Mampazhi, an independent analyst. “Perhaps higher cost was a factor restricting its wide use before, but with other options running out, as we have seen in many cases, the efficiency requirements eventually break the cost barrier. Moreover, SRAM, which is generally used for cache, is no longer scaling at the rate of logic.”
Broad benefits for key industry players
The demand for HBM, and for the stacked DRAM it is built from, is set to benefit several major companies active in this segment.
“As per Fab Economics Research and Analysis, based on our 6-year demand forecast for HBM, the supply is highly skewed when compared to massive AI product-driven demand,” Faruqui said. “HBM as a percentage of DRAM sales will increase by a factor of 2.5 from 2023 to 2024, with further year-by-year growth factors forecast by our firm through 2030. Due to the skew between demand and supply for AI product-driven HBM, we have forecasted average selling price (ASP) premiums for HBM for each respective year, which will boost profit margins for players like SK Hynix, Micron, and Samsung.”
The impact of HBM varies for each player based on factors such as their technology readiness, manufacturing capacity roadmap, customer loyalty, and geopolitical considerations.
According to Faruqui, there are 37 players well-positioned to benefit from the HBM wave driven by AI hardware across the ecosystem, including the Design-Fab-Packaging-Test value chain and materials/equipment supply chains. Some of these players have the potential for exceptionally high growth.