
SC25: Weka Breaks AI Memory Barrier With Augmented Memory Grid on NeuralMesh

Validated on Oracle Cloud Infrastructure, the technology democratizes inference, delivering 1,000x more memory and 20x faster time to first token for NeuralMesh customers

At SC25, Weka.IO Ltd. announced the commercial availability of Augmented Memory Grid on NeuralMesh, a memory extension technology that solves the fundamental bottleneck throttling AI innovation: GPU memory.


Validated on Oracle Cloud Infrastructure (OCI) and other leading AI cloud platforms, Augmented Memory Grid extends GPU memory capacity by 1,000x, from GBs to PBs, while reducing time-to-first-token by up to 20x. This breakthrough enables AI builders to streamline long-context reasoning and agentic AI workflows, dramatically improving the efficiency of inference workloads that have previously been challenging to scale.

From Innovation to Production: Solving the AI Memory Wall
Since its introduction at Nvidia GTC 2025, Augmented Memory Grid has been hardened, tested, and validated in leading production AI cloud environments, starting with OCI. The results have confirmed what early testing indicated: as AI systems evolve toward longer, more complex interactions – from coding copilots to research assistants and reasoning agents – memory has become the critical bottleneck limiting inference performance and economics.

“We’re bringing to market a proven solution validated with Oracle Cloud Infrastructure and other leading AI infrastructure platforms,” said Liran Zvibel, co-founder and CEO, Weka. “Scaling agentic AI isn’t just about raw compute – it’s about solving the memory wall with intelligent data pathways. Augmented Memory Grid enables customers to run more tokens per GPU, support more concurrent users, and unlock entirely new service models for long-context workloads. OCI’s bare metal infrastructure with high-performance RDMA networking and GPUDirect Storage capabilities makes it a unique platform for accelerating inference at scale.”

Today’s inference systems face a fundamental constraint: GPU high-bandwidth memory (HBM) is extraordinarily fast but limited in capacity, while system DRAM offers more space but far less bandwidth. Once both tiers fill, key-value cache (KV cache) entries are evicted and GPUs are forced to recompute tokens they’ve already processed – wasting cycles, power, and time.
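
To make that recompute penalty concrete, here is a minimal, illustrative Python sketch (not Weka's implementation): a small LRU cache stands in for the HBM and DRAM tiers, and any session whose KV cache has been evicted pays a full prefill recompute over its prior context on its next turn.

```python
# Minimal illustration (hypothetical, not Weka's code): when a session's KV cache
# is evicted from the fast tiers, its next request must recompute prefill over
# the entire prior context before the first new token can be produced.
from collections import OrderedDict

class FastTierKVCache:
    """LRU cache standing in for the HBM + DRAM tiers (capacity in sessions)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, session_id):
        if session_id not in self.entries:
            return None
        self.entries.move_to_end(session_id)      # mark as recently used
        return self.entries[session_id]

    def put(self, session_id, kv_state):
        self.entries[session_id] = kv_state
        self.entries.move_to_end(session_id)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # evict the coldest session

cache = FastTierKVCache(capacity=2)
recomputed = 0
for session, context_len in [("a", 128_000), ("b", 64_000), ("c", 32_000), ("a", 128_000)]:
    if cache.get(session) is None:
        recomputed += context_len                  # cache miss: full prefill recompute
    cache.put(session, context_len)

print(f"tokens recomputed because of eviction: {recomputed}")  # session "a" pays twice
```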

Weka’s Augmented Memory Grid breaks through the GPU memory wall by creating a high-speed bridge between GPU memory (typically HBM) and flash-based storage. It continuously streams key-value cache data between GPU memory and the company’s token warehouse, using RDMA and Nvidia Magnum IO GPUDirect Storage to achieve near-memory speeds. This allows large language models and agentic AI systems to access far more context without recomputing KV cache for tokens they have already processed, dramatically improving efficiency and scalability.
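
The tiering pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration of the lookup order, not Weka's API: check GPU memory first, then the flash-backed token store, and only fall back to recompute when both tiers miss.

```python
# Hypothetical sketch of the tiered lookup described above (not Weka's API).
def fetch_kv_block(block_id, hbm_tier, token_store, recompute_fn):
    """Return a KV-cache block from the fastest tier that holds it."""
    block = hbm_tier.get(block_id)
    if block is not None:
        return block                        # hit in GPU memory: nothing to move
    block = token_store.get(block_id)
    if block is not None:
        hbm_tier[block_id] = block          # stream it back into GPU memory
        return block                        # (in Weka's case, over RDMA / GPUDirect Storage)
    block = recompute_fn(block_id)          # miss in both tiers: pay the prefill cost
    hbm_tier[block_id] = block
    token_store[block_id] = block           # persist so later requests avoid recompute
    return block

# Usage with plain dicts standing in for the two tiers:
hbm, token_store = {}, {"blk-0": b"cached keys/values"}
print(fetch_kv_block("blk-0", hbm, token_store, lambda b: b"recomputed"))
# -> b'cached keys/values': served from the flash-backed tier, no recompute needed
```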

OCI-Tested Performance and Ecosystem Integration
Independent testing, including validation on OCI, has confirmed:

  • 1,000x more KV cache capacity while maintaining near-memory performance.
  • 20x faster time to first token when processing 128,000 tokens compared to recomputing the prefill phase (illustrated in the sketch below).
  • 7.5M read IOPS and 1.0M write IOPS in an eight-node cluster.
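
As a rough illustration of why reloading cached state beats recomputing it, the back-of-envelope sketch below compares the two paths for a 128,000-token context. All figures in it (prefill throughput, per-token KV-cache footprint, read bandwidth) are illustrative assumptions, not the published test configuration, and the actual speedup depends on the model and hardware.

```python
# Back-of-envelope comparison (all numbers are illustrative assumptions, not the
# published benchmark configuration): time to first token when the prefill must
# be recomputed vs. when the KV cache is streamed back from a fast external tier.
context_tokens    = 128_000        # long-context prompt, as in the result above
prefill_tok_per_s = 20_000         # assumed prefill throughput per GPU
kv_bytes_per_tok  = 160 * 1024     # assumed KV-cache footprint per token (model-dependent)
read_bw_bytes_s   = 50e9           # assumed sustained read bandwidth from the cache tier

ttft_recompute = context_tokens / prefill_tok_per_s                   # redo the prefill
ttft_streamed  = context_tokens * kv_bytes_per_tok / read_bw_bytes_s  # reload cached state

print(f"recompute: {ttft_recompute:.1f} s  |  streamed: {ttft_streamed:.2f} s  "
      f"|  ~{ttft_recompute / ttft_streamed:.0f}x faster")
```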

For AI cloud providers, model providers, and enterprise AI builders, these performance gains fundamentally change inference economics. By eliminating redundant prefill operations and sustaining high cache hit rates, organizations can maximize tenant density, reduce idle GPU cycles, and dramatically improve ROI per kilowatt-hour. Model providers can now profitably serve long-context models, slashing input token costs and enabling entirely new business models around persistent, stateful AI sessions.

The move to commercial availability reflects deep collaboration with leading AI infrastructure partners, including Nvidia and Oracle. The solution integrates tightly with Nvidia GPUDirect Storage, Nvidia Dynamo, and Nvidia NIXL, with Weka having open-sourced a dedicated plugin for the Nvidia Inference Transfer Library (NIXL). OCI’s bare-metal GPU compute with RDMA networking and Nvidia GPUDirect Storage capabilities provides the high-performance foundation Weka needs to deliver Augmented Memory Grid without performance compromises in cloud-based AI deployments.

“The economics of large-scale inference are a major consideration for enterprises,” said Nathan Thomas, VP, multicloud, Oracle Cloud Infrastructure. “Weka’s Augmented Memory Grid directly confronts this challenge. The 20x improvement in time-to-first-token we observed in joint testing on OCI isn’t just a performance metric; it fundamentally reshapes the cost structure of running AI workloads. For our customers, this makes deploying the next generation of AI easier and cheaper.”

Availability:
Augmented Memory Grid is now included as a feature of NeuralMesh deployments and is available on the Oracle Cloud Marketplace, with support for additional cloud platforms coming soon.

Resource:
Weka’s Augmented Memory Grid page
