CES 2026: SK hynix Showcases Next-Gen AI Memory Innovation
Unveils 16-layer HBM4 with 48GB for the first time and showcases conventional and AI-focused products such as SOCAMM2 and LPDDR6
This is a Press Release edited by StorageNewsletter.com on January 7, 2026 at 2:01 pm
Summary:
- SK hynix to operate customer exhibition booth to enhance customer connection
- Unveils 16-layer HBM4 with 48GB for the first time and showcases conventional and AI-focused products such as SOCAMM2 and LPDDR6
- Visualizes custom HBM structure in ‘AI System Demo Zone’ to present future technology
- Company to create new value based on differentiated memory solutions and close collaboration with customers
SK hynix Inc. announced that it will open a customer exhibition booth at the Venetian Expo and showcase its next-gen AI memory solutions at CES 2026 in Las Vegas, NV, from January 6 to 9.
The company said, “Under the theme ‘Innovative AI, Sustainable tomorrow’, we plan to showcase a wide range of next-generation memory solutions optimized for AI and will work closely with customers to create new value in the AI era.”
The firm has previously operated both an SK Group joint exhibition and a customer exhibition booth at CES. This year, the company will focus on the customer exhibition booth to expand touchpoints with key customers and discuss potential collaboration.
The company will showcase its 16-layer, 48GB HBM4, a next-gen HBM product, for the first time during the exhibition. It is the successor to the 12-layer, 36GB HBM4, which demonstrated the industry’s fastest speed of 11.7Gb/s, and is under development in line with customers’ schedules.
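For context on what these figures imply: the stated capacities work out to 3GB (24Gb) per DRAM die in both stacks, and, assuming the 2048-bit per-stack interface defined in the JEDEC HBM4 standard (an assumption on our part; the release states only the per-pin speed), the 11.7Gb/s pin rate corresponds to roughly 3TB/s of peak bandwidth per stack. A minimal back-of-envelope sketch in Python:

# Back-of-envelope figures implied by the release.
# Assumption: a 2048-bit per-stack interface, per the JEDEC HBM4 standard;
# the release itself states only capacity, layer count, and per-pin speed.
pin_speed_gbps = 11.7    # per-pin data rate demonstrated on 12-layer HBM4
interface_bits = 2048    # assumed HBM4 interface width per stack

bandwidth_gbs = pin_speed_gbps * interface_bits / 8
print(f"Peak bandwidth per stack: ~{bandwidth_gbs:,.0f} GB/s")  # ~2,995 GB/s

for layers, capacity_gb in [(12, 36), (16, 48)]:
    die_gb = capacity_gb / layers
    print(f"{layers}-layer / {capacity_gb}GB -> {die_gb:.0f}GB ({die_gb*8:.0f}Gb) per die")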
The 12-layer, 36GB HBM3E, which will drive the market this year, will also be presented. In particular, the company will jointly exhibit, with a customer, GPU modules for AI servers that have adopted HBM3E, demonstrating the memory’s role within AI systems.
In addition to HBM, the company plans to showcase SOCAMM2, a low-power memory module specialized for AI servers, to demonstrate the competitiveness of its diverse product portfolio in response to the rapidly growing demand for AI servers.
Also, SK hynix will exhibit its lineup of conventional memory products optimized for AI, demonstrating its technological leadership across the market. The company will present its LPDDR6, optimized for on-device AI, which offers significantly improved data processing speed and power efficiency compared to the previous generation.
In NAND flash, the company will present its 321-layer 2Tb QLC product, optimized for ultra-high-capacity eSSDs, as demand surges with the rapid expansion of AI data centers. With best-in-industry integration, this product significantly improves power efficiency and performance compared to previous-generation QLC products, making it particularly advantageous in AI data center environments where lower power consumption is needed.
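As a rough illustration of why per-die density drives eSSD capacity: a 2Tb die equals 256GB, so raw drive capacity scales directly with the number of dies packaged. A minimal sketch (the per-drive die counts are illustrative assumptions, not figures from the release):

# Illustrative only: the release states a 2Tb (terabit) QLC die;
# the die counts per drive below are hypothetical examples.
die_gb = 2 * 1024 / 8    # 2Tb per die = 256GB

for dies in (32, 64, 128):    # hypothetical dies per eSSD
    print(f"{dies} dies -> {dies * die_gb / 1024:.0f}TB raw capacity")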
The company will set up an ‘AI System Demo Zone’ where visitors can experience how the AI memory solutions it is preparing for the future interconnect to form an AI ecosystem.
Customized cHBM
In this zone, the company will present customized cHBM [1] optimized for a specific AI chip or system, the PIM [2] based AiMX [3], CuD [4], which performs computing within memory, CMM-Ax [5], which integrates computing capabilities into CXL [6] memory, and the Data-aware CSD [7].
For cHBM (Custom HBM), given strong interest from customers, a large-scale mock-up has been prepared to let visitors see its innovative structure for themselves. As competition in the AI market shifts from raw performance to inference efficiency and cost optimization, the mock-up visualizes a new design approach that integrates into HBM part of the computation and control functions previously handled by the GPU or ASIC.
“As innovation triggered by AI accelerates further, customers’ technical requirements are evolving rapidly,” said Justin Kim, president and head of AI Infra at SK hynix. “We will meet customer needs with differentiated memory solutions and, through close cooperation with customers, create new value that contributes to the advancement of the AI ecosystem.”
[1] Custom HBM (cHBM): A product that integrates some functions located in GPUs and ASICs into the HBM base die, reflecting customer requirements. As the AI market evolves toward inference efficiency and optimization, HBM is also evolving from conventional products to customized solutions. This solution is expected to enhance the performance of GPUs and ASICs while reducing the power required to transfer data to and from HBM, improving overall system efficiency.
[2] Processing-In-Memory (PIM): A next-gen memory technology that integrates computational capabilities into memory, addressing data movement bottlenecks in AI and big data processing (see the sketch after these notes).
[3] Accelerator-in-Memory based Accelerator (AiMX): SK hynix’s accelerator card prototype featuring a GDDR6-AiM chip which is specialized for large language models (LLMs).
[4] Compute-using-DRAM (CuD): A next-gen product that contributes to accelerating data processing by performing simple computations within the cell.
[5] CXL Memory Module-Accelerator xPU (CMM-Ax): A solution that adds computational functionality to CXL’s high-capacity memory expansion, contributing to improved performance and energy efficiency in next-gen server platforms.
[6] Compute Express Link (CXL): A next-gen interface that efficiently connects CPU, GPU, memory, and other components in high-performance computing systems to support massive, ultra-fast computation. Based on the PCIe interface, CXL enables fast data transfer and offers pooling capability to utilize memory efficiently.
[7] Computational Storage Drive (CSD): A storage device that can process data on its own.
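To make the data-movement bottleneck behind PIM [2] concrete, the sketch below estimates the arithmetic intensity of the matrix-vector products that dominate LLM token generation. At roughly one operation per byte fetched, such workloads are limited by memory bandwidth rather than compute, which is the motivation for moving simple operations into the memory itself. The dimensions are illustrative assumptions, not figures from the release:

# Why memory-bound AI workloads motivate PIM: arithmetic intensity of a
# matrix-vector product, the core operation of LLM token generation.
# The matrix size below is an illustrative assumption.
rows, cols = 8192, 8192          # hypothetical weight matrix of one layer
bytes_per_weight = 2             # fp16

flops = 2 * rows * cols                       # one multiply + one add per weight
bytes_moved = rows * cols * bytes_per_weight  # each weight fetched once per token

print(f"Arithmetic intensity: {flops / bytes_moved:.1f} FLOP/byte")  # 1.0
# At ~3TB/s of HBM bandwidth this caps out near 3 TFLOP/s, far below an
# accelerator's compute peak, so the memory bus sets the pace; computing
# in or near the memory arrays removes much of this traffic.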