Optimized storage server for high-performance software-defined storage (SDS) workloads.
Validated NetApp enterprise storage systems with ONTAP for environments powering AI training and inferencing
Boosts business productivity and efficiency with scalable Agentic AI solutions
Also unveils an augmented memory grid
InsightEngine with NVIDIA DGX converges instant automated data ingestion, EB-scale vector search, event-driven orchestration, and GPU-optimized inferencing into a single system with unified, global, enterprise-grade security
Increases inference performance while lowering costs for scaling test-time compute; inference optimizations on NVIDIA Blackwell boost throughput by 30x on DeepSeek-R1
Integrating NVIDIA AI Data Platform reference design with EXAScaler and Infinia 2.0, part of the DDN AI Data Intelligence Platform
Purpose-built to eliminate inference bottlenecks, optimize GPU utilization to 99%, and integrate multimodal AI data pipelines
HBM3E 12H and LPDDR5X-based SOCAMM solutions designed to unlock the full potential of AI platforms
Including 12-high HBM3E and the SOCAMM memory standard for AI servers, and sampling 12-layer HBM4 ultra-high-performance DRAM for AI
To simplify how enterprises deploy, manage, and secure AI infrastructure at any scale
Built with NVIDIA AI, the expanded version is expected by mid-2025 for use with Cisco UCS, HPE, and Nutanix
Modular design with hybrid cloud data orchestration powered by Hammerspace and VSP One
For flexible virtualization and data protection
Features flexible data storage for historical analysis, enabling rapid, informed decision-making at the edge
Supported by cloud aggregator Altitude Sync
Participants: AppsCode, Auwau, DAOS, FerretDB, Quesma and Storadera
Authors develop H-Rocks to judiciously leverage both the CPU and GPU for accelerating a wide range of RocksDB operations.
Non-volatile electro-optical high-bandwidth ultra-fast large-scale memory architecture