Rambus Announces HBM4 Memory Controller for AI GPUs

Although the HBM4 specification has not yet been finalized, the growing demand for high-performance AI GPUs is pushing the industry to adopt more advanced memory technology as soon as possible. Rambus recently unveiled its HBM4 memory controller, which is said to exceed the currently announced HBM4 specifications.

Compared to HBM3, HBM4 doubles the memory interface width per stack, from 1024 bits in HBM3 to 2048 bits, delivering a significant increase in data transfer speed and performance. Rambus' HBM4 memory controller IP supports data transfer rates of up to 10 Gbps and delivers 2.56 TB/s of throughput per memory device, a level HBM3 cannot match. In addition, HBM4 development involves more advanced process technologies: Samsung and SK hynix are both advancing DRAM on the 1c process node, which promises higher density and better energy efficiency, both important for improving HBM4 performance and reducing power consumption. HBM4 development also involves integration with logic chips; SK hynix, for example, is collaborating with TSMC on next-generation HBM that leverages TSMC's advanced logic process to improve the performance of HBM4 products.

Rambus' HBM4 controller not only supports the 6.4 GT/s data rate specified in the JEDEC standard, but can also scale to 10 GT/s in the future. At that rate, each HBM4 stack with its 2048-bit memory interface can reach 2.56 TB/s of memory bandwidth. Rambus' HBM4 controller IP can be paired with third-party or customer-owned PHY solutions to build a complete HBM4 memory system. Rambus is also working with industry partners such as Cadence, Samsung, and Siemens to ensure the technology integrates smoothly into the existing memory ecosystem, accelerating the transition to next-generation memory systems.
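As a quick back-of-the-envelope check (not part of Rambus' announcement), the per-stack bandwidth figures follow directly from the interface width and the per-pin data rate, assuming the decimal units (1 TB/s = 1000 GB/s) these figures conventionally use:

```python
# Sanity check of the per-stack HBM4 bandwidth figures quoted above.
# Assumes decimal units (1 TB/s = 1000 GB/s), the usual convention
# for memory bandwidth figures.

INTERFACE_BITS = 2048  # HBM4 memory interface width per stack

def stack_bandwidth_tbps(data_rate_gbps: float) -> float:
    """Per-stack bandwidth in TB/s for a given per-pin data rate in Gbps."""
    return INTERFACE_BITS * data_rate_gbps / 8 / 1000  # bits -> bytes -> TB/s

print(stack_bandwidth_tbps(6.4))   # 1.6384 TB/s at the JEDEC 6.4 GT/s rate
print(stack_bandwidth_tbps(10.0))  # 2.56 TB/s at the 10 GT/s Rambus supports
```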

Figure: Rambus launches HBM4 memory controller for AI GPUs

According to the preliminary JEDEC HBM4 specification, HBM4 memory will support 4-, 8-, 12-, and 16-high stack configurations with 24 Gb and 32 Gb DRAM dies. A 16-high stack of 32 Gb dies provides 64 GB per stack, so four stacks yield a total memory capacity of up to 256 GB, with 6.56 TB/s of peak bandwidth through a combined 8192-bit interface, dramatically increasing processing power for complex workloads.
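The capacity figures can be verified the same way; a minimal sketch, assuming the 16-high, 32 Gb, four-stack configuration described above:

```python
# Capacity arithmetic for the 16-high, 32 Gb HBM4 configuration above.

DIES_PER_STACK = 16   # 16-high stack
DIE_DENSITY_GBIT = 32  # 32 Gb (gigabits) per DRAM die
STACKS = 4            # four stacks per processor, as in the example above

stack_capacity_gbytes = DIES_PER_STACK * DIE_DENSITY_GBIT / 8  # Gb -> GB
total_capacity_gbytes = STACKS * stack_capacity_gbytes

print(stack_capacity_gbytes)  # 64.0 GB per stack
print(total_capacity_gbytes)  # 256.0 GB across four stacks
```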

If the HBM4 memory subsystem can run at 10 GT/s, four stacks would provide more than 10 TB/s of aggregate bandwidth. In practice, Rambus and other memory IP vendors often support transfer speeds above the JEDEC standard so that designs have headroom to run stably and energy-efficiently at the standard rates.
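Extending the same arithmetic (again a sketch, assuming decimal units) shows where the 10 TB/s figure comes from:

```python
# Aggregate bandwidth of four HBM4 stacks (8192-bit combined interface).

TOTAL_BITS = 4 * 2048  # 8192 bits across four stacks

for rate_gbps in (6.4, 10.0):
    tbps = TOTAL_BITS * rate_gbps / 8 / 1000  # bits -> bytes -> TB/s
    print(f"{rate_gbps} GT/s -> {tbps:.2f} TB/s")

# 6.4 GT/s  -> 6.55 TB/s (the ~6.56 TB/s peak quoted above)
# 10.0 GT/s -> 10.24 TB/s (the "more than 10 TB/s" case)
```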

Neeraj Paliwal, Senior Vice President and General Manager of Silicon IP at Rambus, said: "With large language models (LLMs) exceeding one trillion parameters and continuing to grow in scale, addressing memory bandwidth and capacity bottlenecks is critical to meeting the real-time performance requirements of AI training and inference. As a leading silicon IP provider in the AI 2.0 era, we are introducing the industry's first HBM4 controller IP solution, enabling customers to achieve leapfrog performance in their most advanced processors and accelerators."

It is worth noting that HBM4 has twice as many channels per stack as HBM3 and a 2048-bit interface, so it occupies more physical space. In addition, HBM4's interposer design differs from that of HBM3/HBM3E, which affects its achievable data transfer rate.
