On May 19th, AMD found a way to get some of the biggest names in tech publishing, including Anandtech, Tom’s Hardware, TechReport, HardOCP, and ArsTechnica, to simultaneously publish what was essentially a giant press release for AMD’s new high-bandwidth memory (HBM) technology. HBM will be showcased in its newest Radeon video cards, rumored to be appearing sometime next month. AMD’s press release on HBM is quite telling, really. It’s banking on HBM to sell video cards, and it’s hoping that by revealing details of HBM at least a month before anything is actually known about the performance of its new GPUs, it can stop the bleeding of market share to its competitor, Nvidia. Pretty bold, and no doubt a fairly impressive public relations coup.

Less impressive is AMD’s hedging on a little detail that nearly got lost in the press coverage (only a few outlets really homed in on it): AMD’s next-gen video card is going to be limited to 4GB of HBM, not 6GB or 8GB, and certainly not the 12GB offered by the $1,000 Nvidia GeForce GTX Titan X. AMD promises that drivers will work around the 4GB limit. Uh huh. We’re still waiting on AMD’s Crossfire/FreeSync driver, two months after it was promised. And that’s setting aside the fact that AMD’s most recent non-beta driver landed on Dec. 8th, over five months ago.

As interesting as HBM is from a technical standpoint, we at TBG like to focus on what really matters to consumers: results. And we’re a bit concerned that this new foray into HBM is a whole lot of smoke and mirrors. After all, AMD already had a huge bandwidth advantage over Nvidia with its 512-bit R9 290 series, yet it never gained the performance lead with those cards.
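To put that bandwidth advantage in perspective, here’s a quick back-of-the-envelope sketch: theoretical peak memory bandwidth is just bus width (in bytes) times per-pin data rate. The GDDR5 figures below are the published specs for shipping cards; the HBM line uses AMD’s stated first-generation HBM numbers (four 1024-bit stacks at 1 Gbps per pin), which we’re assuming apply to the upcoming Radeon, since nothing is confirmed yet.

```python
# Theoretical peak memory bandwidth: (bus width in bits / 8) * per-pin data rate (Gbps).
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Returns peak bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "R9 290X (GDDR5, 512-bit @ 5 Gbps)":        (512, 5.0),
    "GTX 780 (GDDR5, 384-bit @ 6 Gbps)":        (384, 6.0),
    "First-gen HBM (4 stacks, 4096-bit @ 1 Gbps)": (4096, 1.0),
}

for name, (width, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")
```

That works out to 320 GB/s for the R9 290X, 288 GB/s for the GTX 780, and 512 GB/s for a four-stack HBM setup. Impressive numbers, but as the 290 series showed, raw bandwidth alone doesn’t win benchmarks.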

And this is why it matters: VRAM limitations kill video cards. Not in the sense that the cards stop working, but in the sense that long before they run out of sufficient graphics processing power to handle modern games, they run into VRAM limitations that effectively make them obsolete. Here at TBG, we’ve tested video cards from just about every generation dating back nearly a decade, and we’ve come to learn that VRAM most definitely matters.

Here are a few examples of cards we’ve owned, tested, and found to be excellent products at release, only to see their moment in the spotlight pass a little too soon:

The Nvidia GeForce 8800GT 512MB, released at $300 in October 2007 to much fanfare. Called “the only card that matters”, it nearly equaled the much more expensive 8800 GTX in everything but VRAM, with the GTX having 768MB. Within a few years, its 512MB of VRAM would cripple it in every game.
The AMD Radeon HD 5870 1GB and HD 5850 1GB, released at $380/$260 in September 2009. They easily beat Nvidia’s best offering, the GTX 285, in everything…but VRAM. The 285 had 1GB as well, and Nvidia would counter the 5850/5870 release with the GTX 470 1.25GB and GTX 480 1.5GB about six months later. Too little, too late? Well, guess which cards can still be used in most modern games today?
The GeForce GTX 670 2GB and GTX 680 2GB, released in the spring of 2012 at $400 and $500, respectively. They originally embarrassed AMD’s earlier $550 Radeon HD 7970 3GB, especially in regard to power use. But you know which of these cards is still around and performing well today? The HD 7970, in the form of the R9 280X 3GB.
The GeForce GTX 780 3GB, released at $650 in May 2013, nearly matching the $1,000 GTX Titan in everything but, you guessed it, VRAM. The Titan had 6GB, twice what the 780 had on offer. Today, 3GB can be crippling at resolutions as low as 2560×1440, which the 780 should otherwise be able to handle quite well. In the meantime, the 780’s archnemesis, the Radeon R9 290 4GB, released in Nov. 2013 at $400, has enjoyed a long and healthy life atop the upper-midrange sector, never bumping into a VRAM wall when operating in its graphics sweet spot.

So, over the past eight years or so, both Nvidia and AMD have fallen into the trap of underspec’ing their cards in the VRAM department, although it’s fairly clear that Nvidia is the main culprit here. In fact, just a few short months ago, it was forced to admit that its GTX 970 4GB was, in fact, a 3.5GB card. Was it an honest mistake that let false specifications reach consumers, or did Nvidia know all along that 3.5GB was really, truly, not the equal of 4GB?
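Why does the 970’s split matter so much? Per coverage of Nvidia’s disclosure, the card exposes 3.5GB on a fast partition (roughly 196 GB/s) and the last 0.5GB on a slow one (roughly 28 GB/s). Here’s a deliberately naive toy model (our own simplification, not Nvidia’s) of effective bandwidth once allocations spill into the slow segment, assuming traffic is spread evenly across all data in use:

```python
# Toy model of the GTX 970's segmented memory. Partition bandwidths below are
# the approximate figures reported after Nvidia's disclosure; the even-traffic
# assumption is ours and is deliberately simplistic.
FAST_GBS, SLOW_GBS = 196.0, 28.0  # approximate partition bandwidths, GB/s
FAST_SIZE_GB = 3.5                 # size of the fast partition

def effective_bandwidth(vram_used_gb):
    """Effective GB/s if every byte in use is touched once, each at its
    partition's rate: total data divided by total time to sweep it."""
    fast = min(vram_used_gb, FAST_SIZE_GB)
    slow = max(vram_used_gb - FAST_SIZE_GB, 0.0)
    sweep_time = fast / FAST_GBS + slow / SLOW_GBS
    return (fast + slow) / sweep_time

print(f"3.0 GB in use: {effective_bandwidth(3.0):.0f} GB/s")
print(f"4.0 GB in use: {effective_bandwidth(4.0):.0f} GB/s")
```

In this model, a game using 3GB sees the full 196 GB/s, while one using all 4GB averages just 112 GB/s. Real workloads and drivers are far more nuanced, but the direction of the effect is exactly what 970 owners reported: stutter the moment usage crosses 3.5GB.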

Well, now it’s AMD’s turn to play with fire. TechReport has quoted one of AMD’s chief engineers as saying that the VRAM limitation will be dealt with exclusively in software, and that there’s a lot of excess capacity being wasted in current-gen cards.

Count us among the skeptical. 4GB is not enough VRAM for a future-gen card, and AMD’s driver team does not have the requisite track record to pin the next Radeon’s success on software. Folks, this could get ugly.