[Photo: The cards]

Introduction

Last year, we published a comprehensive look at 4K gaming performance, using the then-top-dog GTX 980 Ti SLI duo. We found that two $650 video cards could indeed provide an excellent 4K gaming experience (thank goodness!). We've also published a lot of articles over the years looking at CPU performance in games, such as a showdown between the Core i7-4790K and Core i7-5820K. In that comparison, we found the two CPUs to be quite evenly matched, with the six-core 5820K surprisingly not pulling ahead in most games. And most recently, we published an analysis of how Intel's Core i5-6600K, Core i7-6700K, and Core i7-6900K do when paired up with a hot-clocked GTX 1080 video card.

But this is not that article. Rather, here we are going to look specifically at whether the X99 platform and its 40 PCIe lanes is a superior pick for enthusiasts running dual video cards, as compared to the 16 CPU-attached PCIe lanes of the Z170 platform. Specifically, the X99 platform allows high-end video cards access to 16 PCIe lanes each, while Z170 allows a card access to 16 PCIe lanes only in a single-card configuration. Once you add a second card, those PCIe lanes are divided evenly between the two cards, giving each card half as much available bandwidth. Based on this, we can hypothesize that for dual-card systems, paying for those extra PCIe lanes may well be worth it. For a single-card system, however, Z170 may be preferable, even for 4K gaming, because you get access to newer-tech CPUs, specifically Skylake and soon enough Kaby Lake. As has been the case for quite some time, Intel's enthusiast-level High-End Desktop (HEDT) platform is typically one to two generations behind the current consumer-level platform. Right now, that means HEDT uses the Broadwell-E design, which is 5-8% slower than Skylake in instructions per clock cycle, and also runs at lower default core clocks due to the heat generated by its larger number of cores.
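
To put rough numbers on the bandwidth at stake, here's a quick back-of-the-envelope sketch. The per-lane figure is PCIe 3.0's standard ~0.985GB/s of usable bandwidth per direction (8 GT/s with 128b/130b encoding); the lane splits are the ones described above:

    # Approximate per-card PCIe 3.0 bandwidth under each configuration.
    PCIE3_GBPS_PER_LANE = 0.985  # 8 GT/s with 128b/130b encoding, per direction

    def per_card_lanes(total_gpu_lanes, num_cards, max_per_card=16):
        # Each card gets an equal share of the GPU-allocated lanes, capped at x16.
        return min(total_gpu_lanes // num_cards, max_per_card)

    for platform, lanes in (("Z170 (CPU lanes)", 16), ("X99", 40)):
        for cards in (1, 2):
            x = per_card_lanes(lanes, cards)
            print(f"{platform}, {cards} card(s): x{x} each, "
                  f"~{x * PCIE3_GBPS_PER_LANE:.1f}GB/s per card")

As the output shows, the only configuration that drops below a full x16 link per card is two cards on Z170, which fall to x8/x8 with roughly 7.9GB/s each. That's exactly the scenario this article puts to the test.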

In a sense, we're building on our findings from several previous articles to provide an even more nuanced look at exactly what determines overall gaming performance in high-end systems. If you're in the market for a high-end gaming system, and specifically one intended to run at 4K, you'll want to keep reading! [Update: we've since gone a step further and pitted 1070 SLI vs. 1080 SLI vs. the Titan X Pascal at 4K - click the link to jump to our latest results!]

Test Setup

Here are the two GTX 1070 models we used, running GeForce driver version 368.69:

  1. Asus GeForce GTX 1070 8GB Founders Edition (running at reference clocks)
  2. EVGA GeForce GTX 1070 8GB Superclocked (detuned to run at reference clocks)

You may ask why we are using unmatched cards here. The answer is two-fold. First, because we buy all of our own video cards at retail to avoid any potential source of bias, it makes sense for us to buy different models; buying two identical cards would mean passing up the chance to gain exposure to various manufacturers' offerings, packaging, and warranty support. Second, we've found that when running two cards in SLI, the ideal cooling setup is to have an open-air card on top and a blower model on the bottom, so that the top card isn't smothered by the lower card's heat. Unfortunately, this does mean that the two cards will run at different speeds at default settings, so we've detuned our EVGA Superclocked model with a -88MHz offset to run in step with the Founders Edition card.
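
If you're curious where that -88MHz figure comes from, here's the arithmetic, assuming the commonly published factory clocks for these cards (a 1506MHz reference base for the GTX 1070 vs. 1594MHz for the EVGA Superclocked); a negative offset simply shifts the card's entire clock curve down:

    # Deriving the detuning offset (all clocks in MHz).
    REFERENCE_BASE = 1506  # Nvidia's reference GTX 1070 base clock
    SC_BASE = 1594         # EVGA Superclocked factory base clock (published spec)
    offset = REFERENCE_BASE - SC_BASE
    print(f"Required offset: {offset}MHz")               # -88
    print(f"Detuned base clock: {SC_BASE + offset}MHz")  # 1506, matching reference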

Z170

With that explanation out of the way, we can move on to the specs for our two test platforms. First our Z170-based system, using Intel's best quad-core processor:

  1. CPU: Intel Core i7-6700K, overclocked to 4.4GHz
  2. Motherboard: Gigabyte GA-Z170X-Gaming 6
  3. RAM: GeIL 2x8GB Super Luce DDR4-3000, 15-17-17-35
  4. SSD #1: Samsung 850 Evo M.2 500GB
  5. SSD #2: Crucial MX200 1TB
  6. Case: Phanteks Enthoo Evolv
  7. Power Supply: EVGA Supernova 850 GS
  8. CPU Cooler: Noctua NH-U14S
  9. Operating System: Windows 10

X99

And second, our X99-based system, using Intel's best eight-core processor:

  1. CPU: Intel Core i7-6900K, overclocked to 4.4GHz
  2. Motherboard: Asus X99-Pro/USB3.1 
  3. RAM: G.Skill 4x8GB Ripjaws4 DDR4-3000, overclocked to DDR4-3200, 16-16-16-36
  4. SSD #1: Samsung 950 Pro M.2 512GB 
  5. SSD #2: Samsung 850 Evo 1TB 
  6. Case: SilverStone Primera PM01 
  7. Power Supply: EVGA Supernova 1000 PS 
  8. CPU Cooler: Corsair Hydro H100i v2 
  9. Operating System: Windows 10

A few comments on overclocking here. First, the 6700K comes from the factory at a much higher clock speed than the 6900K (4.0GHz base vs. 3.2GHz base). In practice, the difference isn't quite that large: under a full load, the 6700K stays at 4.0GHz (boost only occurs on single-threaded workloads, and even then it only jumps to 4.2GHz), while the 6900K boosts to 3.5GHz with a full load and 3.7GHz with a single-threaded load. To even out the mismatch as much as possible, we overclocked both CPUs to 4.4GHz, which is pretty easy on the 6700K and near the limit for the 6900K. Even so, the Skylake architecture used by the 6700K is a bit more efficient per clock cycle than the 6900K's Broadwell-E design, meaning the 6700K retains a slight advantage, at least for single-threaded workloads (like some physics routines in games, which cannot be split between cores). Some may argue that an "equal" overclock would have pushed both CPUs by the same percentage or taken each to its maximum overclock, but we had to draw the line somewhere, so we decided to just set them at the same clocks. Another minor point: because the X99 platform can't run DDR4-3000 memory without an oddball 125MHz motherboard strap, we overclocked our RAM to DDR4-3200 on the X99 system to allow it to run at an even 100MHz motherboard strap.
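
The strap math itself is simple: effective memory speed is the strap (base clock) multiplied by the memory ratio, and the 100MHz strap just doesn't offer a ratio that lands on 3000. A quick illustration (the ratios shown are examples for these two speeds, not an exhaustive list of what any given BIOS exposes):

    # Effective DDR4 speed = motherboard strap (MHz) x memory ratio.
    def ddr4_speed(strap, ratio):
        return strap * ratio

    print(ddr4_speed(125, 24))  # 3000 -- only reachable via the 125MHz strap
    print(ddr4_speed(100, 32))  # 3200 -- our setting, on the standard 100MHz strap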

The good news is that all of this really doesn't matter, for two reasons: (1) we're going to be pushing our video cards so hard that CPU limitations will become essentially non-existent, and (2) our core findings will be based on a PCIe scaling analysis, showing how much of a performance boost each of our platforms gets jumping from one card to two. That scaling depends squarely on each platform's PCIe lanes, as long as CPU limitations are kept at bay. We'll be doing a CPU shootout soon enough to uncover the value of extra cores and Hyper-Threading, but this is not that article!
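
For clarity, scaling here simply means the percentage frame-rate gain going from one card to two, computed with the standard percentage-gain formula; the frame rates below are hypothetical placeholders for illustration, not our results:

    # SLI scaling: the percentage gain of two cards over one.
    def sli_scaling(fps_single, fps_dual):
        return (fps_dual / fps_single - 1.0) * 100.0

    print(f"{sli_scaling(40.0, 72.0):.0f}% scaling")  # hypothetical: 40fps -> 72fps = 80%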

For our testing, we're using two benchmark tests and four games: 3DMark Fire Strike Ultra, 3DMark Time Spy, Crysis 3, Battlefield 4, Far Cry 4, and The Witcher 3. Fire Strike Ultra is a 4K-specific benchmark, so we didn't need to tweak any settings for it to do the job we needed. Time Spy is actually a 2560x1440 benchmark, but as you'll see, it pushes systems pretty hard by harnessing DirectX 12, so we didn't see a need to change its default 1440p resolution to 4K. As for our four game tests, we pushed every button and toggled every switch, selecting what you might call "ludicrous" quality for each benchmark run. That meant full multi-sampling anti-aliasing (which is overkill at 4K), the highest texture resolutions, and in Far Cry 4 and The Witcher 3, every Nvidia GameWorks feature available. You want max settings, we're giving you max settings! Just one exception: Crysis 3 is so demanding at its maximum 8x MSAA that we couldn't get a clean run through our real-world benchmark on a single GTX 1070, so we ran 4x MSAA instead. And to be clear, you most definitely don't need 8x MSAA at 4K anyway.

OK, now that we've explained the method to our madness, it's time to show you the results! 
