TORmem Unveils Revolutionary Memory Disaggregation Platform for AI Infrastructure
Engineered for scale with ASUS and powered by RDMA, CXL 2.0, and 400G networking, with 3TB–8TB memory appliances and 100G/400G Ethernet switches set for Q4 2025 production
This is a Press Release edited by StorageNewsletter.com on July 23, 2025, at 2:01 pm.

TORmem Inc., a US-based innovator in memory disaggregation, unveils its next-gen memory-centric platform, designed to obliterate memory and networking bottlenecks. Partnering with ASUS to manufacture its memory disaggregation appliances and leveraging cutting-edge Marvell Technology silicon for its TORswitch 100G/400G Ethernet switches, TORmem delivers unmatched scalability, performance, and cost-efficiency for AI, HPC, and enterprise computing.
AI workloads are exploding, with rack densities exceeding 100kW and chips exceeding 2,500 watts. Traditional server architectures can’t keep up, leaving memory stranded and costs soaring. TORmem’s solution redefines the data center with 3TB–8TB memory appliances and TORswitch 100G/400G Ethernet switches, enabling data centers to scale dynamically while slashing total cost of ownership (TCO) by up to 50%.
Breakthrough Technology for the AI Era
TORmem’s platform integrates compute, memory pooling, and high-speed networking, powered by:
- CXL 2.0: Enables memory pooling and coherent sharing for in-memory computing, ideal for AI and HPC.
- PCIe Gen5: Delivers up to 768 GB/s of bandwidth for seamless connectivity.
- InfiniBand RDMA: Provides ultra-low-latency data transfer for multi-node AI systems.
- 400G Networking: TORswitch, built on Marvell’s industry-leading silicon, ensures blazing-fast, low-latency interconnects.
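The headline PCIe Gen5 figure can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming PCIe 5.0's 32 GT/s signaling rate and 128b/130b line encoding (standard PCIe 5.0 parameters, not figures from this release), applied to the 160- and 192-lane appliance configurations listed below:

```python
# Back-of-envelope check of the PCIe Gen5 bandwidth figure quoted above.
# Assumptions (standard PCIe 5.0 parameters, not from the release):
# 32 GT/s per lane, 128b/130b encoding, per-direction throughput;
# real systems deliver less after protocol overhead.

GT_PER_LANE = 32          # GT/s, PCIe 5.0 raw signaling rate
ENCODING = 128 / 130      # 128b/130b line-encoding efficiency

def lane_gbps(gt=GT_PER_LANE, enc=ENCODING):
    """Usable GB/s per lane, per direction (1 GT carries 1 Gb of raw symbols)."""
    return gt * enc / 8   # bits -> bytes

for lanes in (160, 192):  # lane counts of the two appliance configurations
    print(f"{lanes} lanes: ~{lanes * lane_gbps():.0f} GB/s aggregate")
```

At 192 lanes this lands near 756 GB/s; the quoted "up to 768 GB/s" corresponds to the same lane count with per-lane throughput rounded to 4 GB/s.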
Configurations
- AMD EPYC 9645 Appliance: Dual 96-core CPUs, DDR5-6400, 160 PCIe 5.0/CXL 2.0 lanes, 400G Ethernet + RDMA.
- Intel Xeon 6767P Appliance: Dual high-performance CPUs, DDR5-6400, 192 PCIe 5.0/CXL 2.0 lanes, 400G Ethernet + RDMA.

Manufactured by ASUS, these appliances support 3TB–8TB of disaggregated memory, unlocking 2–3× memory scaling per CPU/GPU node for AI inference, in-memory databases, and data-intensive analytics.
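The 2–3× scaling claim is simple arithmetic once a node's local DRAM capacity is fixed. A minimal illustration, where the local-DRAM capacity and the size of the borrowed pool slice are assumptions for the example, not TORmem-published figures:

```python
# Illustrative arithmetic behind the "2-3x memory scaling per node" claim.
# The local-DRAM and pool-slice sizes below are assumed for illustration;
# they are not TORmem-published configurations.

def scaling_factor(local_tb: float, pooled_tb: float) -> float:
    """Effective memory multiple when a node borrows pooled_tb of
    disaggregated memory on top of its local_tb of DRAM."""
    return (local_tb + pooled_tb) / local_tb

# e.g. a node with 3 TB of local DRAM attached to a 3 TB or 6 TB
# slice of a 3TB-8TB appliance pool:
print(scaling_factor(3, 3))  # 2.0 -> 2x effective memory
print(scaling_factor(3, 6))  # 3.0 -> 3x effective memory
```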
Strategic Partnership with ASUS
“TORmem’s vision of memory disaggregation is a game-changer for AI infrastructure,” said Thao Nguyen, founder and CEO, TORmem. “By decoupling memory from compute, we empower data centers to scale efficiently while maintaining ultra-low latency. Our partnership with ASUS ensures enterprise-grade quality and global scalability.”
“ASUS is proud to collaborate with TORmem to bring this revolutionary platform to market,” said Timothy Lin, VP of product management, ASUS. “Together, we’re addressing the critical memory and networking challenges faced by cloud providers, enterprises, and research institutions.”
TORswitch: Redefining AI Networking
The TORswitch-400-32QX2S, TORmem’s 32-port 400G Ethernet switch, powered by Marvell’s advanced networking silicon, delivers industry-leading density and low latency for AI training and inference. Entering volume production in Q4 2025 through a trusted ODM partner, TORswitch ensures seamless integration with TORmem appliances for end-to-end performance.
Key Benefits
- Up to 50% Lower TCO: Eliminate memory overprovisioning and optimize resources.
- Flexible Architecture: Seamlessly supports CXL 2.0, PCIe Gen5, and RDMA for diverse workloads.
- Unmatched Scalability: Scale memory independently, supporting AI, HPC, and enterprise needs.
- Future-Proof Design: Built for evolving AI workloads with 400G Ethernet and DDR5-6400.
- Production-Ready: Appliances manufactured by ASUS, switches in production Q4 2025.
Why Now
With AI driving unprecedented demand for memory and bandwidth, TORmem’s platform positions enterprises, cloud providers, and research institutions to stay ahead. “Our technology democratizes HPC-grade performance, making it accessible to a new class of server operators,” said Nguyen.
Highlights
- 3TB–8TB RDMA and CXL 2.0-based memory appliances built with ASUS.
- 100G/400G TORswitch powered by Marvell, production-ready Q4 2025.
- PoC program open now for early adopters.