
Credo Unveils Weaver Memory Fanout Gearbox for Scalable, High-Bandwidth AI Inference

Boosts memory bandwidth and memory density to optimize computing efficiency of AI accelerators or xPUs

Credo Technology Group Holding Ltd. announced Weaver, a memory fanout gearbox that significantly boosts memory bandwidth and memory density to optimize computing efficiency of AI accelerators or xPUs.

[Image: Credo Weaver introduction]

The company’s OmniConnect family, of which Weaver is the first member, encompasses solutions designed to address scale-up and scale-out concerns for AI buildouts. Weaver is engineered to overcome memory bottlenecks in AI inference workloads, delivering scalability, bandwidth, and efficiency for next-gen data center and AI applications.

AI inference workloads are increasingly limited by memory quantity and throughput rather than compute power. Traditional memory solutions, such as LPDDR5X/GDDRX, face constraints in bandwidth, density, and power consumption, restricting system performance and scalability. High Bandwidth Memory (HBM) suffers from very high cost, limited availability, and density issues. Weaver leverages advanced 112G very short reach (VSR) SerDes and Credo’s proprietary design to boost I/O density by up to 10x, enabling up to 6.4TB of memory and 16TB/s bandwidth using LPDDR5X—far surpassing conventional architectures.
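As a rough sanity check of the headline figures, the 16TB/s aggregate can be related to per-channel LPDDR5X bandwidth and per-lane 112G VSR throughput. The sketch below assumes an 8,533MT/s per-pin rate on 16-bit LPDDR5X channels and one byte per 8 bits on a 112Gb/s lane; these are common published figures, not Credo-confirmed parameters of Weaver.

```python
# Back-of-envelope check of the press-release figures.
# Assumptions (not Credo specifications): LPDDR5X at 8,533 MT/s on a
# 16-bit channel; one 112G VSR SerDes lane carries 112 Gb/s raw.

LPDDR5X_RATE_MTS = 8533        # assumed per-pin transfer rate (MT/s)
CHANNEL_WIDTH_BITS = 16        # assumed LPDDR5X channel width

# Peak bandwidth of one LPDDR5X channel, in GB/s (~17 GB/s)
chan_gbs = LPDDR5X_RATE_MTS * CHANNEL_WIDTH_BITS / 8 / 1000
print(f"Per-channel LPDDR5X bandwidth: {chan_gbs:.2f} GB/s")

TARGET_TBS = 16                # 16 TB/s aggregate, from the release
channels = TARGET_TBS * 1000 / chan_gbs
print(f"LPDDR5X channels needed for {TARGET_TBS} TB/s: ~{channels:.0f}")

SERDES_GBS = 112 / 8           # one 112G VSR lane, in GB/s
lanes = TARGET_TBS * 1000 / SERDES_GBS
print(f"112G VSR lanes needed for {TARGET_TBS} TB/s: ~{lanes:.0f}")
```

Under these assumptions, reaching 16TB/s implies on the order of 900+ LPDDR5X channels and 1,100+ VSR lanes in aggregate, which illustrates why a fanout gearbox that multiplies I/O density is the enabling piece rather than the DRAM itself.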

“Weaver is designed to deliver the flexibility and scalability required for future AI inference systems,” said Don Barnetson, SVP, product, Credo. “This innovation empowers our partners to optimize memory provisioning, reduce costs, and accelerate deployment of advanced AI workloads.”

“The future of AI acceleration requires efficiency at all levels and innovative technology to process extremely large workloads,” said Mitesh Agrawal, CEO, Positron. “Credo’s Weaver is instrumental in helping us solve our toughest memory challenges, enabling us to deliver the high-performance compute power for our next generation of AI inference servers.”

Weaver supports flexible DRAM packaging and late binding, allowing system integrators to tailor memory configurations to evolving model requirements. The technology is ready for migration to next-gen memory protocols, ensuring long-term value and compatibility as the ecosystem advances. Weaver also integrates telemetry and diagnostics for enhanced reliability and uptime.

Availability:
The Credo OmniConnect 112G VSR interface is available for design-in now. The Weaver memory fanout gearbox is scheduled to be available from the company in the second half of 2026.

[Image: Credo OmniConnect Weaver]

[Table 1: Credo Weaver]

Resource:
To discover more about Weaver and OmniConnect, register for Credo’s upcoming webinar, “Breaking the Memory Wall: Scaling AI Inference with Innovative Memory Fanout Architecture,” on November 10 at 8:00 am PT/11:00 am ET (registration required).
