Memory Semiconductors, the Hidden Heroes of AI
The Rebound of Memory Chips as a Core Condition for the AI Revolution
Running AI systems such as ChatGPT requires NVIDIA GPUs paired with HBM (High Bandwidth Memory). AI training and inference consume enormous computing power, and these GPU-plus-HBM packages are the essential components that supply it. For instance, the NVIDIA H100 GPU uses HBM3 memory, while the successor H200 upgrades to HBM3E, packing 141GB per GPU and delivering 4.8TB/s of bandwidth, up from the H100's 80GB and roughly 3TB/s. The next generation, HBM4, expected around 2026, is set to increase stacked capacity and speed further, improving efficiency substantially. Just as fast GPUs are indispensable, so too are high-capacity, high-performance memory chips.
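To see why bandwidth is the binding constraint, a rough back-of-envelope sketch helps: during token-by-token generation, every model weight is typically read from memory once per token, so decode speed is often capped by how fast HBM can stream those weights rather than by raw compute. The Python sketch below uses the H100 and H200 figures cited above; the 8-billion-parameter FP16 model is an assumption chosen purely for illustration.

```python
# Rough, illustrative estimate of why HBM bandwidth matters for LLM inference.
# Assumption: decode is memory-bound, i.e. every weight byte is streamed from
# HBM once per generated token. The 8B FP16 model is a hypothetical example;
# the bandwidth figures are the ones cited in the text.

def max_tokens_per_second(model_params_billion: float,
                          bytes_per_param: float,
                          hbm_bandwidth_tb_s: float) -> float:
    """Upper bound on decode tokens/s if weight reads saturate HBM bandwidth."""
    weight_bytes = model_params_billion * 1e9 * bytes_per_param
    return (hbm_bandwidth_tb_s * 1e12) / weight_bytes

for name, bw in [("H100 (HBM3, ~3 TB/s)", 3.0), ("H200 (HBM3E, 4.8 TB/s)", 4.8)]:
    tps = max_tokens_per_second(model_params_billion=8, bytes_per_param=2,
                                hbm_bandwidth_tb_s=bw)
    print(f"{name}: ~{tps:.0f} tokens/s ceiling for an 8B FP16 model")
```

Under these simplifying assumptions the H200's extra bandwidth alone lifts the decode ceiling by the same 60% margin that separates the two parts' bandwidth specs, which is why memory, not just the GPU die, sets the pace.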
This applies not only to servers and data centers but also to personal devices. With AI acceleration now appearing in laptops, memory configurations are becoming critical. Intel's Core Ultra (Meteor Lake) introduced the company's first integrated NPU (Neural Processing Unit) for PCs, enabling on-device generative AI with what Intel claims is up to eightfold better power efficiency for AI workloads. Apple's M4 chip also boosts AI performance, backed by LPDDR5X memory with 120GB/s of bandwidth and a baseline of 16GB. Smartphones follow the same trend: Samsung's Galaxy S25 pairs LPDDR5X with the Snapdragon 8 Elite to reduce bottlenecks, while Apple's iPhone 16 Pro relies on LPDDR5X to support its enhanced Neural Engine. As AI workloads spread, memory's role becomes pivotal, with low-power DRAM (LPDDR) essential for fast, efficient processing on mobile devices.
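A similar back-of-envelope check shows why both capacity and bandwidth matter on-device. The sketch below assumes a hypothetical 7-billion-parameter model quantized to 4 bits with a modest KV cache; only the 16GB capacity and 120GB/s bandwidth figures come from the description above.

```python
# Illustrative check of why on-device AI pushes memory specifications.
# The model size (7B parameters), 4-bit quantization, and KV-cache size are
# assumptions for the sketch; the 16GB budget and 120GB/s bandwidth are the
# figures cited in the text for M4-class systems.

params = 7e9                    # assumed model size
bytes_per_param = 0.5           # 4-bit quantized weights
kv_cache_gb = 1.0               # assumed KV cache for a few thousand tokens

weights_gb = params * bytes_per_param / 1e9
footprint_gb = weights_gb + kv_cache_gb
print(f"Model footprint: ~{footprint_gb:.1f} GB of a 16 GB unified-memory budget")

# Decode-speed ceiling if every weight byte is streamed once per token.
bandwidth_gb_s = 120
print(f"Bandwidth-bound ceiling: ~{bandwidth_gb_s / weights_gb:.0f} tokens/s")
```

Even a heavily quantized model claims several gigabytes alongside the operating system and apps, and the 120GB/s bus caps interactive generation speed, which is why both the capacity floor and the LPDDR bandwidth keep rising.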
Next-generation vehicles and robotics heighten the stakes. Autonomous cars, essentially “moving data centers,” require more than 16GB of DRAM and 200GB+ of storage per unit. Automotive memory must endure extreme conditions, prompting development of specialized DRAM and NAND. Robots and industrial automation systems also demand low-latency, high-bandwidth memory to support real-time AI decisions. MR (Mixed Reality) devices are equally memory-intensive, as AR/VR headsets must process high-resolution graphics in real time. With PC-class MR products emerging, memory specifications are scaling up rapidly.
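A rough illustration of the "moving data center" claim: the raw camera feeds alone arrive at a rate that stresses DRAM and storage long before any AI model runs. The camera count, resolution, frame rate, and bit depth in the sketch below are assumptions, not figures from any specific vehicle.

```python
# Illustrative sensor data-rate arithmetic for an autonomous vehicle.
# All parameters below are assumptions chosen for the sketch; the point is
# that raw perception streams quickly dwarf consumer memory budgets, which is
# why the text cites 16GB+ of DRAM and 200GB+ of storage per unit.

cameras = 8                     # assumed surround-view camera count
width, height = 1920, 1080      # assumed per-camera resolution
fps = 30
bytes_per_pixel = 1.5           # assumed 12-bit raw sensor data

bytes_per_second = cameras * width * height * fps * bytes_per_pixel
print(f"Raw camera input: ~{bytes_per_second / 1e9:.2f} GB/s "
      f"(~{bytes_per_second * 60 / 1e9:.0f} GB per minute before compression)")
```

Under these assumptions the cameras alone produce close to a gigabyte of data every second, and lidar, radar, and logging add to that, so automotive memory must sustain high bandwidth continuously while meeting temperature and reliability requirements that consumer parts never face.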
Looking forward, AI semiconductors will evolve toward higher bandwidth and lower power consumption. On the server side, HBM4 and CXL (Compute Express Link) memory pooling are expected to dominate, allowing CPUs and accelerators to share memory efficiently. On the consumer side, LPDDR6 and SOCAMM are likely to define the roadmap. SOCAMM, a detachable LPDDR-based memory module format proposed by NVIDIA, modularizes memory for compact devices such as laptops, enabling easier upgrades and better space efficiency than LPDDR soldered directly to the board.
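The appeal of CXL memory pooling can be made concrete with simple capacity-planning arithmetic: DRAM provisioned per server for peak demand sits idle most of the time, while a shared pool can be sized closer to the aggregate average. All workload figures in the sketch below are illustrative assumptions, not measurements.

```python
# Illustrative capacity-planning sketch of why CXL memory pooling appeals to
# data centers: per-server DRAM sized for worst-case demand is "stranded"
# whenever the workload is lighter, whereas a CXL-attached pool can be shared.
# Every number below is an assumption for illustration only.

servers = 32
peak_per_server_gb = 1024       # assumed worst-case working set per server
avg_per_server_gb = 384         # assumed typical working set per server

dedicated_total = servers * peak_per_server_gb
# Pool sized for the aggregate average plus headroom for two concurrent peaks.
pooled_total = servers * avg_per_server_gb + 2 * peak_per_server_gb

print(f"Per-server provisioning: {dedicated_total / 1024:.0f} TB of DRAM")
print(f"CXL-pooled provisioning: ~{pooled_total / 1024:.0f} TB of DRAM")
print(f"Potential savings: ~{100 * (1 - pooled_total / dedicated_total):.0f}%")
```

The exact savings depend entirely on how peaky the workloads are, but the direction is the point: pooling turns memory from a fixed per-box cost into a shared, right-sized resource, which is why CXL features so prominently in server roadmaps.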
Ultimately, faster, more efficient memory technologies are indispensable for AI progress. HBM, LPDDR, CXL, and SOCAMM will shape the performance of future computing infrastructure. The growth of AI and cloud is not a passing trend but a structural shift, underpinning sustained demand for high-performance memory. Memory semiconductors, once overlooked, are now central to maximizing system capabilities. From AI and edge computing to autonomous driving and MR, memory will remain a critical enabler, demanding continuous innovation in high-performance, energy-efficient designs.

