
Samsung has announced that it's bringing new, advanced GDDR6 modules to market with higher capacities and clock rates. It's a step forward for the world of GDDR memory as a whole, with higher data transfer rates (up to 16Gbps) and larger capacities (16Gb, or 2GB per die). In other words, a system could now field 8GB of RAM in just four GDDR6 chips while maintaining a respectable 256GB/s of memory bandwidth. Obviously smaller dies will be available if companies want to deploy more memory channels with less VRAM per channel, but the figures for the new RAM standard are respectable.
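Those figures are easy to sanity-check. A minimal sketch, assuming each GDDR6 chip uses the standard 32-bit (x32) interface (an assumption; the article doesn't state the interface width):

```python
# Back-of-the-envelope check of the GDDR6 figures quoted above.
# Assumption: each chip exposes a standard 32-bit (x32) interface.

BITS_PER_BYTE = 8

chips = 4
capacity_per_die_gbit = 16   # 16Gb per die
pin_speed_gbps = 16          # 16Gbps per data pin
pins_per_chip = 32           # x32 interface (assumption)

total_capacity_gbyte = chips * capacity_per_die_gbit / BITS_PER_BYTE
total_bandwidth_gbyte_s = chips * pins_per_chip * pin_speed_gbps / BITS_PER_BYTE

print(total_capacity_gbyte)     # -> 8.0, i.e. 8GB of VRAM
print(total_bandwidth_gbyte_s)  # -> 256.0, i.e. 256GB/s
```

Four x32 chips give a 128-bit aggregate bus; at 16Gbps per pin that works out to exactly the 8GB / 256GB/s combination Samsung is quoting.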

The chart below is from Micron, but it illustrates the standard difference between GDDR5X and GDDR6. Power consumption may decrease a bit from being built on newer nodes, and there's a different channel configuration, but overall bandwidth should be an evolutionary gain.

GDDR5X-vs-GDDR6

The funny thing is, once upon a time it wasn't clear if GDDR6 would come to market at all, at least as a major GPU solution. HBM was supposed to scale quickly into HBM2, and HBM2 was supposed to deploy in major markets relatively quickly. It wasn't unusual for AMD to be the only company to deploy HBM; AMD and Nvidia had split similarly over the use of GDDR4 a decade ago, with AMD adopting the technology and Nvidia choosing to stick with GDDR3. Nvidia's decision to use a stopgap GDDR5X raised a few eyebrows, but HBM2 still seemed to be the better long-term technology, especially when Nvidia deployed it first for its high-end GPUs and AMD was following suit with Vega.

But there have been consistent rumors all year that HBM2's manufacturing difficulties are causing problems for everyone who adopts the tech, not just AMD. A report from Semiengineering suggests some reasons why. While the data bus in HBM2 is 1,024 bits wide, that's just the data transmission lines. Factor power, ground, and other signaling into the mix, and it's more like 1,700 wires to each HBM2 die. The interposer isn't necessarily difficult to manufacture, but the design of the interposer is still challenging: companies have to balance signal length, power consumption, and crosstalk. Managing heat flow in HBM stacks is also challenging. Because each die is stacked on top of another, you can wind up with the lower dies becoming extremely hot, since they radiate heat upward through the memory stack.

4GB HBM2 - Samsung

Samsung's diagram for its own HBM2 design.

HBM2 may well remain in the upper end of the product stack, but it seems no one has had much luck bringing it down-market yet, or even ensuring easy deployment. That's not because the technology is intrinsically bad (AMD's Vega gets some very real thermal headroom from HBM2), but a technology that can't scale easily down to lower markets is a technology that's fundamentally limited in terms of its own appeal. Our argument is simple: If Nvidia deploys GDDR6 in its next generation of high-end cards and AMD makes a similar move with whatever it uses to follow up the Polaris family (RX 560-RX 580), it'll be a sign both companies are still struggling to bring the technology to market.