Timed to coincide with the January 5th opening of CES, AMD made another big announcement: Vega. We were very grateful that AMD's Scott Wasson, of TechReport fame and now head of Technical Marketing at AMD, was willing to take 30 minutes to walk us through the architectural preview. Given our brush with a PC all-star, we just had to take a selfie, so pardon the bit of photographic indulgence. Thank you, Scott!
But here's the thing: the tone AMD took in presenting Vega couldn't have been more different from how it presented Ryzen, and if we were to read between the lines, we'd say this is indicative of AMD's confidence in the product. First, it is now confirmed as a 1H'17 product (not 1Q'17 as originally anticipated). Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11. For example, Vega's next-gen compute engine, called the "NCU" (replacing the "CU"), offers flexible 8-bit, 16-bit, and 32-bit operations, but these must be coded for directly. Some of the other new features in Vega include a texture culling technique that has the potential to reduce VRAM usage by half (again, requiring a software assist), a new programmable geometry pipeline with 2x throughput per clock, and a more flexible "primitive shader." Vega will also have higher clocks and IPC than Polaris, which itself had 15% higher IPC than Fiji.
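To make the flexible-precision idea concrete, here's a minimal sketch of "packed math": two 16-bit half-precision values riding in one 32-bit word, so a single operation can do two FP16 adds at once instead of one FP32 add. This is our own illustration of the concept, not AMD's API; the function names are ours, and real packed math happens in hardware registers, not via byte-packing like this.

```python
import struct

def pack2(a, b):
    """Pack two floats into one 32-bit word as two FP16 lanes
    (struct format 'e' is IEEE half precision)."""
    return int.from_bytes(struct.pack('<ee', a, b), 'little')

def unpack2(word):
    """Recover the two FP16 lanes from a 32-bit word."""
    return struct.unpack('<ee', word.to_bytes(4, 'little'))

def packed_add(x, y):
    """Add corresponding FP16 lanes of x and y: one 'instruction'
    performing two half-precision adds, which is how packed math
    can double throughput over plain FP32."""
    xa, xb = unpack2(x)
    ya, yb = unpack2(y)
    return pack2(xa + ya, xb + yb)

result = packed_add(pack2(1.0, 2.0), pack2(0.5, 0.25))
print(unpack2(result))  # both lanes added in one call: (1.5, 2.25)
```

The catch Scott alluded to is visible even here: nothing happens automatically. The programmer (or the shader compiler, with explicit direction) has to pack the data and issue the packed operations, which is why DX11 and vanilla DX12 code won't see the benefit.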
Unfortunately, we just don't think all of this will add up to competitive performance. How do we know? Well, AMD had a demo of working Vega silicon running Doom at 4K, using a Ryzen CPU. It was hitting around 73 fps. Compare that to the framerates shown in the table below, which we put together running on a Core i7-6900K and various Nvidia GPUs for our recent 4K shootout:
Uh oh, it looks like Vega is going to come in just below GTX 1080-level performance. And take note: we were running OpenGL in the table above, whereas AMD was running the more efficient Vulkan engine. That would be fine if the price were right, but given all the new design work that had to be done to create Vega, we doubt AMD's going to be interested in pricing it at the $500 price point where it would likely sell like hotcakes.
But wait, there's more. We captured a screenshot with Doom's built-in performance overlay set to maximum detail. We think you'll find at least one interesting bit of info if you look closely at the specs for the GPU in question:
Yes, it turns out that we may have uncovered what the HBM2 VRAM allocation of the new Vega GPU is going to be. Nothing ground-breaking, to be sure. And based on previous Vega die shots, it appears AMD is using HBM2's doubling of per-stack bandwidth versus HBM1 to cut the number of memory stacks in half (from four on Fiji to two on Vega), which works out to the exact same total memory throughput as Fiji: 512GB/s. Interestingly, AMD is going to be referring to VRAM by a new name: "high-bandwidth cache". From AMD's point of view, the new architecture handles data so differently from previous GPUs that the label VRAM just doesn't suffice.
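The stack math above is easy to verify on the back of an envelope. Assuming Fiji's published ~128GB/s per HBM1 stack and HBM2's doubled ~256GB/s per stack (the two-stack count for Vega is our reading of the die shots, not an AMD spec):

```python
def total_bandwidth(stacks, gb_per_stack):
    """Total memory bandwidth in GB/s for a given stack configuration."""
    return stacks * gb_per_stack

fiji_hbm1 = total_bandwidth(4, 128)  # Fiji: four HBM1 stacks at ~128 GB/s each
vega_hbm2 = total_bandwidth(2, 256)  # Vega: two HBM2 stacks at ~256 GB/s each

print(fiji_hbm1, vega_hbm2)  # 512 512 -- identical totals
```

Half the stacks, double the speed per stack: a wash on total throughput, but a smaller, cheaper interposer.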
In the end, there's no doubt that Vega will be absolutely cutting-edge, but that doesn't necessarily mean it's going to be enough to win over the hearts, minds, and hard-earned dollars of gamers, especially if it's priced as a premium product. We think that in Vega, AMD's trying to reset the goalposts for game design with its radical new approach to GPU architecture. While that may be successful (as the original GCN arguably was 5 years ago, given how it withstood the test of time far better than its GeForce contemporaries), it means things will be rough for a while longer for AMD fans.
All right, that's all for today. Questions or comments? Just follow the link below to post in our forum!