Ever since Nvidia launched its SLI technology in 2004, enthusiasts have debated whether the extra performance it promises makes up for its potential flaws, and indeed whether it actually improves the gaming experience at all. We've examined SLI performance a number of times, including our 4K challenge using the GeForce GTX 980 Ti 6GB in SLI, a similar challenge using the GeForce GTX 1070 8GB in SLI, and most recently, an analysis of SLI scaling using dual GeForce GTX 1080 cards and a high-bandwidth SLI bridge. In each of these cases, we found that SLI most definitely boosted performance, providing "next-gen" performance with current-gen hardware.
One factor that has often weighed in favor of investing in SLI is that it's typically taken up to a year for the fully-realized version of Nvidia's current architecture to materialize. For example, the original Titan debuted in February 2013, but was based on the Kepler architecture first seen in May 2012. Likewise, the Maxwell-based Titan X debuted in March of 2015, six months after Maxwell made its first appearance in the GTX 980. But things changed a bit this year, as Nvidia launched an ultra-high-end card in the form of the all-conquering Titan X Pascal just two months after its GTX 1070 and 1080 arrived. Nvidia probably could have stuck to a schedule similar to the ones it had used before and still kept enthusiasts happy, but instead decided to take advantage of early success with the Pascal design by pushing out its fully-spec'd Titan X Pascal right away. Of course, Nvidia was going to make folks pay for the privilege of the extra performance on tap, releasing the Titan X Pascal at a record-breaking $1,200, the most ever for a single-GPU card. That made the new Titan more expensive than any other gaming configuration on the market, save for dual GTX 1080 cards in SLI, and even then, the cost was similar. Our curiosity was piqued: could the new Titan actually render SLI'd cards of the same generation obsolete, or was it just an overpriced cash grab? Well, you're about to find out!
As we've done with every previous processor and video card benchmarking article published on this site, we purchased all the GPU hardware for this article at retail in order to eliminate any potential conflict of interest. Furthermore, because we know full well what it feels like to pay for each product, we won't be tempted to dismiss cost as a "theoretical" impediment to consumers. It's a real issue, and gamers looking to maximize performance on a fixed budget must always consider price. In fact, several of the video cards we tested were purchased specifically for this article. If you'd like to support this approach to component testing, please use the product links in any of our articles to make your next tech purchase!
Here are the specs (and a photo) of the system we used for benchmarking:
- CPU: Intel Core i7-6900K, overclocked to 4.3GHz
- Motherboard: Asus X99-Pro/USB3.1
- RAM: Corsair 4x8GB Vengeance LPX DDR4-3200
- SSD #1: Samsung 950 Pro M.2 512GB
- SSD #2: Samsung 850 Evo 1TB
- Case: SilverStone Primera PM01
- Power Supply: EVGA Supernova 1000 PS
- CPU Cooler: Corsair Hydro H100i v2
- Operating System: Windows 10
- Monitor: LG 27UD68-P 27-Inch 4K
And here are the video cards we tested:
- EVGA GeForce GTX 1070 SC 8GB
- Asus GeForce GTX 1070 8GB FE and EVGA GeForce GTX 1070 SC 8GB with EVGA PRO SLI Bridge HB
- EVGA GeForce GTX 1080 SC 8GB
- Dual EVGA GeForce GTX 1080 SC 8GB with EVGA PRO SLI Bridge HB
- Nvidia Titan X Pascal 12GB
Note that while three of our cards were factory-overclocked, all cards were set to run at reference speeds, making this a true apples-to-apples comparison. Any card can be overclocked, and as we've found in our testing, Pascal-based GPUs all tend to have around the same amount of OC headroom: 10-15%.
To eliminate system bottlenecks as much as possible, we used the EVGA PRO SLI Bridge HB, which, as we found in our most recent in-depth look at SLI scaling, provides significantly better performance than the older single-link SLI bridge. Furthermore, we used our X99-based benchmarking system, which, as we determined in our analysis of SLI performance on the X99 and Z170 platforms, can increase SLI scaling by as much as 10% thanks to its greater number of PCIe lanes. Finally, our eight-core Intel Core i7-6900K processor was overclocked to 4.3GHz, the stability limit for most Broadwell-E chips, and our quad-channel RAM ran at DDR4-3200, providing tremendous memory bandwidth.
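To make the scaling figures we cite concrete: SLI scaling is simply the percentage framerate uplift the second card delivers over a single card. Here's a minimal sketch of the arithmetic (the framerates below are hypothetical examples, not our measured results):

```python
def sli_scaling(single_fps: float, dual_fps: float) -> float:
    """Percentage FPS uplift of a dual-card SLI setup over a single card."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Hypothetical numbers: one card averages 40fps, the SLI pair averages 70fps.
print(f"{sli_scaling(40.0, 70.0):.0f}% scaling")  # prints "75% scaling"
```

By this measure, perfect scaling would be 100%, i.e., two cards doubling the framerate of one.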
For our testing, we're using one synthetic benchmark and eight games, all running at a native 4K resolution: 3DMark Fire Strike Ultra, Crysis 3, Far Cry 4, The Witcher 3, Fallout 4, Rise of the Tomb Raider, DOOM, Battlefield 1, and Watch_Dogs 2. Each game was run with its highest preset, typically referred to as "Very High" or "Ultra." Note that in every game other than Battlefield 1, the top preset still doesn't max out every setting, as individual parameters may offer quality levels beyond what the preset selects: DOOM, for example, has a few "Nightmare" settings, and a number of games have extra ambient occlusion options that aren't included in any preset. For the sake of making comparisons easy, we decided it wasn't worth it to max out each individual quality setting. Furthermore, in many cases, doing so would make the games unplayable at 4K.
All game data was collected in actual in-game runs, which often provide totally different (and obviously more relevant) results than canned benchmarks. We used FRAPS to collect data for three 30-second samples of each game on each video card setup, translating to a total of 120 benchmark runs for this article. Trust us when we say that a few were excruciating due to low framerates, particularly our live multiplayer Battlefield 1 runs on a single GTX 1070. Surviving for 30 seconds straight at under 50fps was no mean feat!
OK, now that we've explained the method to our madness, it's time to move on to the results!