Samsung Electronics is expected to begin shipments of its next-generation high-bandwidth memory, HBM4, later this month. According to well-informed industry sources, the shipments are expected to follow the Lunar New Year holiday, which would make Samsung the first memory maker to commercialize what is widely seen as a game-changing chip for AI computing. The company plans to start shipping HBM4 to Nvidia as early as the third week of February, for use in Nvidia's next-generation AI accelerator platform, Vera Rubin.

Samsung set to begin HBM4 shipment this month

The move signals a turn in fortune for Samsung, which had faced questions and criticism over its competitiveness in earlier HBM generations. With HBM4, Samsung aims to close the gap with, and even move ahead of, crosstown rival SK hynix, which gained an early lead in the sector thanks to surging demand from AI data centers. According to one industry official, the move gives Samsung a much-needed recovery in the technology sector; by being first to mass-produce the highest-performance HBM4, the company gains a clear advantage in shaping the market on its own terms.

Nvidia is expected to unveil Vera Rubin accelerators incorporating HBM4 at its annual conference, GTC 2026, which is expected to be held later this month. Samsung said the shipment timing was settled in coordination with Nvidia's product roadmap and downstream system-level testing schedules.

Aside from speed, Samsung's technological approach to the product is also notable. From the outset, the company planned to exceed the standards set by JEDEC, adopting the industry's first combination of a sixth-generation 10-nanometer-class DRAM (1c) process with a 4-nanometer logic die produced at its own foundry.
As a result, Samsung's HBM4 delivers data transfer speeds of about 11.7 Gbps per pin, well above JEDEC's 8 Gbps standard (roughly a 46% improvement) and a 22% gain over the previous HBM3E generation. According to the sources, memory bandwidth per stack reaches up to 3 terabytes per second, about 2.4 times that of its predecessor. The memory also uses a 12-high stacking design that enables capacities of up to 36 gigabytes, and with a future 16-high configuration, capacity could grow to as much as 48GB, industry estimates show.

Further improvements are expected before mass production

Despite using cutting-edge processes, Samsung has achieved a stable yield ahead of mass production, with further improvements expected as output scales up, industry sources note. Samsung has also highlighted power efficiency, noting that HBM4 is designed to maximize computing performance while reducing energy consumption, helping data centers lower electricity and cooling costs.

The company expects its HBM sales volume this year to more than triple from last year and has decided to install additional production lines at its Pyeongtaek Campus Line 4 to expand capacity. The facility is expected to produce roughly 100,000 to 120,000 wafers per month dedicated to the 1c DRAM used in HBM4 products, industry sources added. Last year, Samsung had already built a monthly capacity of around 60,000 to 70,000 wafers on the 1c process. With the planned expansion, total 1c output for HBM4 could rise to about 200,000 wafers per month, roughly a quarter of Samsung's total DRAM production capacity of approximately 780,000 wafers.

The HBM4 market is expected to be dominated by Samsung and SK hynix, with US-based Micron Technology already seen as out of the race.
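The reported figures hang together arithmetically. A quick sketch checks them; the 2048-bit per-stack interface width is an assumption taken from the JEDEC HBM4 standard, not from the sources:

```python
# Back-of-the-envelope check of the reported HBM4 figures.
# Assumption (not stated in the article): each HBM4 stack has a
# 2048-bit I/O interface, per the JEDEC HBM4 standard.

PINS_PER_STACK = 2048      # assumed HBM4 interface width per stack
pin_speed_gbps = 11.7      # reported per-pin data rate

# Per-stack bandwidth: width x per-pin rate, converted Gbit/s -> TB/s.
bandwidth_tb_s = PINS_PER_STACK * pin_speed_gbps / 8 / 1000
print(f"bandwidth per stack: ~{bandwidth_tb_s:.1f} TB/s")  # ~3.0 TB/s

# Capacity: stacks of 24-gigabit (3 GB) DRAM dies, implied by 36 GB / 12-high.
die_gb = 24 / 8
print(f"12-high: {12 * die_gb:.0f} GB, 16-high: {16 * die_gb:.0f} GB")  # 36 GB, 48 GB

# Wafer capacity: existing 1c lines plus the planned Line 4 addition,
# taking the upper ends of the reported ranges.
total_1c = 70_000 + 120_000
print(f"1c wafers/month: ~{total_1c:,} ({total_1c / 780_000:.0%} of DRAM capacity)")
```

At the upper ends of the reported ranges, the combined 1c output comes to roughly 190,000 wafers a month, consistent with the "about 200,000" and quarter-of-capacity figures cited above.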
According to market tracker SemiAnalysis, SK hynix is expected to take about 70% of the HBM4 market, while Samsung will account for the remaining 30%.