
Memory Manufacturers Focus on Compute Express Link Memory Expander Products

To integrate performance between various xPUs

According to TrendForce Corp.'s server-related report, the original goal of CXL (Compute Express Link) was to integrate performance between various xPUs, thereby optimizing the hardware costs required for AI and HPC and breaking through original hardware limitations. CXL support is still determined by the CPU and, since the server CPUs that support CXL, Intel Sapphire Rapids and AMD Genoa, currently support only the CXL 1.1 specification, the first product this spec can realize is the CXL memory expander. Therefore, among CXL-related products, the CXL memory expander will be the precursor product, and it is also the product most closely related to DRAM.

Assisted by CXL memory pooling functionality, AI and HPC are expected to surmount hardware limitations and drive whole-server DRAM consumption
However, in terms of DRAM, CXL 1.1 is only practical for memory expansion; memory pooling will not be implemented until CXL 2.0. At this stage, a dual-socket server uses approximately 10-12 RDIMMs on average (out of a maximum of 16), meaning RDIMM slots and channels are not being fully utilized. General-purpose servers therefore still have room to upgrade their original RDIMM configuration in the short term, and only AI and HPC applications will have CXL requirements. Consumption by CXL memory expanders thus has a limited impact on the overall DRAM market; in the short term, CXL memory expanders are products created to optimize HPC performance.
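
To make the slot headroom concrete, here is a back-of-the-envelope sketch in Python; the 64 GB module capacity is an assumed figure for illustration, not from the report:

# Rough capacity-headroom calculation for a dual-socket server.
# Slot counts follow the report (10-12 RDIMMs used of a 16-slot maximum);
# the per-module capacity is a hypothetical illustration.
RDIMM_CAPACITY_GB = 64   # assumed module size, not a TrendForce figure
slots_max = 16           # maximum RDIMM slots cited for a dual-socket server
slots_used_avg = 12      # upper end of the cited 10-12 average usage

installed_gb = slots_used_avg * RDIMM_CAPACITY_GB
ceiling_gb = slots_max * RDIMM_CAPACITY_GB
print(f"installed: {installed_gb} GB, ceiling: {ceiling_gb} GB, "
      f"headroom before CXL expansion is needed: {ceiling_gb - installed_gb} GB")
# installed: 768 GB, ceiling: 1024 GB, headroom before CXL expansion is needed: 256 GB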

However, under existing applications, a portion of the DRAM in a whole server still idles during operation, leaving unused capacity, and the data center industry ends up paying for that excess DRAM. In the long run, the average DRAM capacity of whole servers will increase year by year as applications become more diverse and complex. However, if CXL memory pooling becomes practical in the future, memory resources across xPUs will be utilized effectively. Memory pooling functionality will reduce the number of RDIMM modules buyers need to purchase, which will slow the growth rate of installed server DRAM capacity in individual servers in the coming years.
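
A minimal sketch of why pooling reduces purchased DRAM, assuming hypothetical per-server demand figures and peaks that do not coincide; none of the numbers come from the report:

# Without pooling, every server is provisioned for its own peak working set;
# with a CXL 2.0 pool, servers carry DRAM near their typical load and a shared
# pool absorbs the overflow. All demand figures below are hypothetical.
peak_gb = [512, 768, 256, 384]     # per-server peak demand (assumed)
typical_gb = [256, 512, 128, 256]  # per-server typical demand (assumed)

provisioned_local = sum(peak_gb)   # each server sized for its own peak

# Pool sized for the largest single overflow, assuming peaks do not coincide.
overflow = [p - t for p, t in zip(peak_gb, typical_gb)]
provisioned_pooled = sum(typical_gb) + max(overflow)

print(f"local-only: {provisioned_local} GB, pooled: {provisioned_pooled} GB, "
      f"DRAM avoided: {provisioned_local - provisioned_pooled} GB")
# local-only: 1920 GB, pooled: 1408 GB, DRAM avoided: 512 GB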

The CXL Consortium ultimately hopes to use this interface to utilize the resources of every device effectively, thereby breaking through AI and HPC hardware bottlenecks. With the assistance of CXL, AI and HPC development will accelerate as model complexity grows, contributing to the shipment volume of related server models. From this perspective, CXL will drive the average DRAM capacity at the whole-server level (calculated as combined RDIMM and CXL memory expander capacity). However, the annual growth rate of server DRAM consumption will slow, since CXL uses the DRAM installed in whole devices more efficiently.

Buyers who will want substantial CXL functionality are mainly focused on high-end computing, so cloud service providers will be the major adopters. OEMs that ship models to HPC customers requiring large-capacity DRAM expansion are also potential adopters of this product.

Montage, Marvell, Microchip revenue expected to rise again with the adoption of CXL
At present, the CXL memory expanders developed by manufacturers employ DDR5 but remain limited by the speed of the PCIe 5.0 interface, so output speed is only roughly equivalent to DDR4. DDR5 will only realize its full speed once CPUs support PCIe 6.0 or higher. Structurally, a CXL memory expander requires a CXL controller in addition to DRAM; CXL controller manufacturers include Montage, Marvell, and Microchip. The rise of CXL therefore directly drives controller supplier revenue, and it cannot be ruled out that a self-development model will appear in the future, with module houses or cloud service providers producing CXL memory expanders themselves after procuring controllers and DRAM.
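
The rough peak-bandwidth arithmetic behind the "DDR5 over PCIe 5.0 performs like DDR4" observation can be sketched as follows; the x8 link width and the DRAM speed bins are assumptions for illustration:

# Peak-bandwidth comparison: CXL expander link vs. native DRAM channels.
# PCIe 5.0 runs 32 GT/s per lane with 128b/130b encoding.
def pcie_gb_per_s(gt_per_s, lanes, encoding=128 / 130):
    return gt_per_s * lanes * encoding / 8   # GB/s, one direction

def ddr_channel_gb_per_s(mt_per_s):
    return mt_per_s * 8 / 1000               # 64-bit channel, GB/s

print(f"PCIe 5.0 x8 link : {pcie_gb_per_s(32, 8):.1f} GB/s per direction")
print(f"DDR4-3200 channel: {ddr_channel_gb_per_s(3200):.1f} GB/s")
print(f"DDR5-4800 channel: {ddr_channel_gb_per_s(4800):.1f} GB/s")
# ~31.5 GB/s vs 25.6 GB/s vs 38.4 GB/s: an expander on a PCIe 5.0 x8 link
# lands between DDR4 and DDR5 channel bandwidth, closer to DDR4.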

To sum up, the current stagnation in server performance is expected to improve with the development of CXL, which will effectively increase DRAM utilization in servers while avoiding a spike in idling costs. In the future, the CXL 2.0 specification will remove the existing hardware bottleneck and, with the assistance of memory pooling, CXL will be able to exhibit even greater advantages. As applications become more diverse and complex, high-intensity workloads such as HPC and AI will rely on xPUs more than ever. With shared memory pooling, model design can break free of hardware bottlenecks and continue to build more complex architectures. In addition, CXL will be popularized on the strength of future functions, especially its large-scale introduction by the cloud service industry. Because CXL establishes high-speed interconnectivity, it can better optimize communication between servers; these interactions help expand the application of computing power across parallel servers and optimize TCO.
