Rambus Sets New Benchmark for AI Memory Performance with Industry-Leading HBM4E Controller IP

With up to 16 Gbps per pin, HBM4E delivers breakthrough bandwidth for next-gen AI accelerators, GPUs, and HPC workloads, helping remove one of AI's biggest bottlenecks

Highlights:

  • Built on a proven track record of over one hundred HBM design wins to ensure first-time silicon success
  • Delivers up to 16 Gigabits per second per pin at low latency to meet the demands of next-generation AI and HPC workloads
  • Expands industry-leading silicon IP portfolio of high-performance memory solutions

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, announced the industry’s leading HBM4E Memory Controller IP, extending its market leadership in HBM IP. This new solution delivers breakthrough performance with advanced reliability features, enabling designers to address the demanding memory bandwidth requirements of next-generation AI accelerators and GPUs.

“Given the insatiable bandwidth demands of AI, it’s imperative for the memory ecosystem to continue aggressively advancing memory performance,” said Simon Blake-Wilson, SVP and GM, silicon IP, Rambus. “As a leading silicon IP provider for AI applications, we are bringing the HBM4E Controller IP solution to the market as a key enabler for breakthrough performance in next-generation AI processors and accelerators.”

“HBM4E represents a significant milestone for HBM technology, delivering unprecedented performance for advanced AI and HPC workloads,” said Ben Rhew, corporate VP and the head of the foundry IP development team, Samsung Electronics. “HBM4E IP solutions will be essential for broad industry adoption, and Samsung looks forward to collaborating closely with Rambus and the wider ecosystem to drive innovation in AI.”

“HBM bandwidth is one of the main bottlenecks on LLM performance, and we’re excited by efforts across the industry to push it further,” said Reiner Pope, co-founder and CEO, MatX.

“AI processors and accelerators need high-performance, high-density HBM memory for the massive computational requirements of AI workloads,” said Soo Kyoum Kim, program associate VP, memory semiconductors, IDC. “As the requirements of AI processors and accelerators continue their rapid rise, HBM solutions must advance apace. HBM4E IP reaching the market now will be an essential building block for designers of cutting-edge AI hardware.”

Rambus HBM4E Controller IP Features
The Rambus HBM4E Controller enables a new generation of HBM memory deployments for cutting-edge AI accelerators, graphics, and HPC applications. The HBM4E Controller supports operation at up to 16 Gigabits per second (Gbps) per pin, providing an unprecedented throughput of 4.1 Terabytes per second (TB/s) to each memory device. For an AI accelerator with eight attached HBM4E devices, this translates to over 32 TB/s of memory bandwidth for next-gen AI workloads. The Rambus HBM4E Controller IP can be paired with third-party standard or TSV PHY solutions to instantiate a complete HBM4E memory subsystem in a 2.5D or 3D package as part of an AI SoC or custom base-die solution.
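As a quick sanity check, the quoted figures follow from simple arithmetic if one assumes a 2048-bit data interface per HBM4E device, the width used in the HBM4 generation (the release itself does not state the interface width, so that value is an assumption here):

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# Assumption (not stated in the release): each HBM4E device exposes a
# 2048-bit data interface, as in the HBM4 generation.

PINS_PER_DEVICE = 2048   # assumed data-bus width per HBM4E stack (bits)
GBPS_PER_PIN = 16        # per-pin data rate from the release
DEVICES = 8              # attached stacks in the release's example accelerator

# Gb/s -> GB/s (divide by 8 bits/byte) -> TB/s (divide by 1000)
per_device_tbps = PINS_PER_DEVICE * GBPS_PER_PIN / 8 / 1000
total_tbps = per_device_tbps * DEVICES

print(f"Per device: {per_device_tbps:.1f} TB/s")    # ~4.1 TB/s
print(f"Eight devices: {total_tbps:.1f} TB/s")      # ~32.8 TB/s
```

Under that assumption, 2048 pins at 16 Gbps each yields 4.096 TB/s per device, matching the release's "4.1 TB/s", and eight devices give roughly 32.8 TB/s, matching "over 32 TB/s".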

Availability and More Information
The Rambus HBM4E Controller IP is the latest addition to the Rambus leading-edge portfolio of digital controller solutions. The HBM4E Controller is available for licensing, and early access design customers can engage today.
