How will HBM4 impact the AI-centric memory landscape?

HBM4 raises data processing rates while delivering higher bandwidth and lower power consumption.


Just as Nvidia preps its Blackwell GPUs to use HBM3e memory modules, the JEDEC Solid State Technology Association has announced that the next version, HBM4, is near completion. HBM3e, an enhanced variant of the existing HBM3 memory, tops out at 9.8 Gbps, while HBM4 is expected to reach double-digit speeds of 10+ Gbps.

HBM4, an evolutionary step beyond the current HBM3 standard, further raises data processing rates while delivering higher bandwidth, lower power consumption, and increased capacity per die and/or stack. These capabilities are critical in applications that must handle large datasets and complex calculations efficiently, including generative artificial intelligence (AI), high-performance computing (HPC), high-end graphics cards, and servers.

To start, HBM4 has a larger physical footprint, as it doubles the channel count per stack compared to HBM3. It also defines multiple configurations that require different interposers to accommodate the differing footprints. The spec will define 24-Gb and 32-Gb layers, with support for 4-high, 8-high, 12-high, and 16-high TSV stacks.
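The per-stack capacities these options imply follow from simple arithmetic on the layer densities and stack heights above. A minimal sketch (the function name is illustrative, not from any HBM API):

```python
# Per-stack capacity implied by the 24-Gb/32-Gb layer densities and the
# 4- to 16-high stack options named in the draft spec. Pure arithmetic.

def stack_capacity_gb(layer_gbit: int, stack_height: int) -> float:
    """Capacity of one HBM stack in gigabytes (layer density in gigabits)."""
    return layer_gbit * stack_height / 8  # 8 bits per byte

for layer in (24, 32):
    for height in (4, 8, 12, 16):
        print(f"{layer}-Gb layers, {height}-high: "
              f"{stack_capacity_gb(layer, height):g} GB per stack")
```

At the extremes, a 4-high stack of 24-Gb layers gives 12 GB, while a 16-high stack of 32-Gb layers reaches 64 GB per stack.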

Media reports suggest that JEDEC has eased HBM4 memory configurations by settling on a package thickness of 775 µm for 12-layer and 16-layer stacks, given the complexity of achieving thinner packages. Meanwhile, although HBM manufacturers like Micron, SK hynix, and Samsung were poised to adopt hybrid bonding technology, the HBM4 design committee is reportedly of the view that hybrid bonding would raise prices, which in turn would make HBM4-powered AI processors more expensive.

Hybrid bonding lets memory chip designers stack more dies compactly without the bump-based through-silicon via (TSV) approach, which uses filler bumps to connect the stacked layers. However, at a package thickness of 775 µm, hybrid bonding may not be needed for the HBM4 form factor.

For compatibility, the new spec will ensure that a single controller can work with both HBM3 and HBM4 if needed. The designers of the HBM4 spec have also reached an initial agreement on speed bins up to 6.4 Gbps, with discussions of higher frequencies ongoing.
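Combined with the doubled channel count, even the agreed 6.4-Gbps bin implies substantial per-stack bandwidth. A rough sketch, assuming a 2048-bit interface (twice HBM3's 1024 bits — an inference from the doubled channel count, not a figure stated in the article):

```python
# Peak per-stack bandwidth = per-pin data rate x bus width / 8 bits per byte.
# Assumption: HBM4's doubled channel count yields a 2048-bit-wide interface
# (double HBM3's 1024 bits); treat this as an estimate, not a spec value.

def stack_bandwidth_tbps(pin_rate_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in terabytes per second."""
    return pin_rate_gbps * bus_width_bits / 8 / 1000  # Gbps -> GB/s -> TB/s

print(f"6.4 Gbps bin: {stack_bandwidth_tbps(6.4):.2f} TB/s per stack")
print(f"10 Gbps:      {stack_bandwidth_tbps(10.0):.2f} TB/s per stack")
```

Under these assumptions, the 6.4-Gbps bin would deliver roughly 1.6 TB/s per stack, and a 10-Gbps rate over 2.5 TB/s.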


The post How will HBM4 impact the AI-centric memory landscape? appeared first on EDN.
