In the era of rapid AI development, high-performance memory chips have become a key force driving technological progress. In March 2025, SK hynix announced the world's first delivery of HBM4 samples to customers, a milestone that drew wide attention across the semiconductor industry. The move not only consolidates the company's leadership in AI memory but also lays a stronger foundation for the future of AI computing.
HBM4 accelerates AI computing, with mass production pulled forward to meet demand
As the sixth generation of the HBM family, HBM4 plays a vital role in AI computing. It is used mainly in GPUs and AI accelerator chips, where it can significantly improve computing efficiency. SK hynix's 12-layer HBM4 samples have been sent to key customers for qualification testing, with mass-production readiness targeted for the second half of 2025. Notably, HBM4 was originally slated for mass production in 2026, but surging market demand led SK hynix to accelerate development and delivery; in particular, NVIDIA CEO Jensen Huang requested supply six months ahead of schedule, bringing HBM4 to market earlier than expected.
HBM4's data bandwidth exceeds 2 TB/s
HBM4's technological breakthrough is particularly significant. Its data bandwidth exceeds 2 TB/s, equivalent to processing more than 400 Full HD movies per second, and more than 60% faster than HBM3e. This performance leap makes it one of the most advanced AI memory solutions available, able to meet the extreme demand for high-speed data access during AI training and inference.
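The bandwidth figures above can be sanity-checked with a quick back-of-envelope calculation. Note the per-movie size (~5 GB for a Full HD film) and the HBM3e baseline (~1.2 TB/s per stack) are assumptions for illustration, not figures stated in this article:

```python
# Back-of-envelope check of the article's HBM4 bandwidth claims.
# Decimal units throughout (1 TB = 1000 GB).

HBM4_BANDWIDTH_GB_S = 2000   # "exceeds 2 TB/s", per the article
MOVIE_SIZE_GB = 5            # assumed size of one Full HD movie

movies_per_second = HBM4_BANDWIDTH_GB_S / MOVIE_SIZE_GB
print(movies_per_second)     # 400.0 -> consistent with "more than 400 movies per second"

HBM3E_BANDWIDTH_GB_S = 1200  # assumed HBM3e per-stack peak bandwidth
speedup = (HBM4_BANDWIDTH_GB_S - HBM3E_BANDWIDTH_GB_S) / HBM3E_BANDWIDTH_GB_S
print(f"{speedup:.0%}")      # 67% -> consistent with "more than 60% faster than HBM3e"
```

Under these assumptions the arithmetic lines up with both claims, though the exact movie count depends entirely on the file size chosen.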
In packaging, SK hynix uses its advanced mass reflow molded underfill (MR-MUF) process to control chip warpage, optimize heat dissipation, and significantly improve product stability. These improvements ensure that HBM4 runs efficiently and reliably even in high-load AI computing environments.
Photo: SK hynix's world-first HBM4 sample (Source: SK hynix)
Consolidating its industry lead and driving the transformation of AI storage
SK hynix's leading position in the HBM market is no accident. Since becoming the first in the world to launch HBM3 in 2022, the company has continued to innovate, introducing 8-layer and 12-layer HBM3e in 2024 and becoming the main HBM supplier for NVIDIA GPUs. Success in the HBM market has allowed SK hynix to overtake Samsung Electronics as Korea's most profitable memory chip maker. The first delivery of HBM4 will further consolidate that dominance and extend the company's influence across the AI storage ecosystem.
Kim Joo-sun, President of SK hynix's AI Infrastructure division, said the company will build on its accumulated technology leadership to accelerate HBM4 qualification and mass production, providing customers with the most competitive AI memory solutions.
HBM4 leads a new era of AI storage
Mass production of HBM4 matters not only to AI chip giants such as NVIDIA but will also profoundly affect the entire AI computing ecosystem. Its higher bandwidth and energy efficiency will help AI chips break through computing bottlenecks, advancing technologies such as deep learning, natural language processing, and computer vision. At the same time, HBM4 will accelerate data-center upgrades, improving data-processing efficiency while reducing energy consumption and operating costs, and providing stronger momentum for cloud computing, big data, and related fields.
SK hynix's world-first delivery of HBM4 not only demonstrates its technological strength in high-bandwidth memory but also marks the start of a new phase for AI storage. As HBM4 moves into mass production, it will reshape the AI computing landscape and provide a solid memory foundation for the future of intelligent technology.