Following up on our previous article examining the performance of dual-core and quad-core CPUs in games, with and without Hyperthreading, we thought our readers would appreciate a deeper dive into what happens when you take a quad-core and overclock it to the limit. We're going to do just that, while also looking further into the effects of Hyperthreading, in order to highlight where CPU bottlenecking comes into play in modern games. Crysis 3 and Battlefield 4, we're looking at you!
These games bring modern systems to their knees, but are they just putting the hurt on your video card, or have these games finally caught up to the CPU power available to the average consumer? Not so long ago, we compared an Intel i5-760 to an i5-4670K and concluded that if you have $200-250 to spend to upgrade an older system, your money is better spent on a video card than a CPU. This time, however, we're going straight to the top to see what it takes to drive an ultra-high-end video card. Read on to find out what we learned!
The effect of Hyperthreading (HT) on gaming performance has been much debated over the years, especially since the advent of HT in quad-cores back in Intel's Nehalem generation of CPUs, introduced in 2008. Our previous article on this topic found that HT helps dual-cores so much that you shouldn't even bother with a non-HT dual-core if you intend to play modern games. The results on a quad-core were less clear, however, so we decided to take a second look, this time adding overclocking into the mix. And as you'll see, overclocking is desperately needed with the latest system-busting games, particularly Crysis 3 and Battlefield 4. We also test a few games we've looked at previously but which are still thoroughly modern - Hitman: Absolution, Tomb Raider, and Far Cry 3.
We've upgraded our test rig for this article to make sure we have enough GPU firepower to adequately stress the CPU - we're using an EVGA GeForce GTX 780 3GB Superclocked, with a factory overclock that makes it roughly the performance equivalent of a stock GTX Titan (or Radeon R9 290X, for those of you more familiar with the AMD side of the aisle). We're using an Intel i7-3770K as our CPU, and test it at six different settings - 3.3GHz with and without HT (the latter of which closely approximates older quad-cores like the i5-2500K), 3.7GHz (i.e., a stock 3770K's Turbo Boost) with and without HT, and finally 4.5GHz, again with and without HT. As you'll see, in several of our games, we are CPU-limited even with our maximum overclock, which is frankly pretty amazing. CPUs simply haven't increased in speed all that much over the past few years. Between January 2011 and November 2013 (i.e., since the release of the Sandy Bridge-based 2500K/2600K), CPUs have increased in speed about 15-20 percent, while video cards now offer over twice the performance of the high-end products of that time, the Radeon HD 6970 and GeForce GTX 580. If you'd like to read more about video card performance over time, check out our Video Card Rankings. For the rest of this article, though, all eyes are on our 3770K and how high we can drive its performance using Hyperthreading, overclocking, and yes, even a little core unparking. Prepare to be surprised!
A quick note on our methodology. The test system for this article consisted of an Intel Core i7-3770K, an Asus Maximus V Gene motherboard, an EVGA GeForce GTX 780 3GB Superclocked video card, 16GB of Samsung DDR3@1866, Windows 7 x64, and GeForce Driver Version 331.65. All of our results come from in-game runs at a resolution of 1920x1080, using the maximum graphics settings and the default level of anti-aliasing. We don't use built-in canned benchmarks, which typically put very little load on a CPU, making them ineffective at illustrating the role of CPU power in actual gaming scenarios. We used FRAPS to collect data for 60-second intervals in each of our games, and we did our best to repeat the same sequence as closely as we could for each of the runs we did. Naturally, there's some variability, so we won't make a mountain out of a molehill - we're looking for big, game-changing results. And in at least one game, we're sure you'll agree that we found them!
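For readers curious how numbers like "average FPS" and "minimum FPS" fall out of a FRAPS capture: FRAPS can log a frametimes file listing each frame's timestamp in milliseconds from the start of the run. The sketch below (not our actual processing script, just an illustration of the arithmetic) derives an average FPS over the whole interval and a minimum FPS from the slowest whole second of the run.

```python
# Illustrative sketch only: derives average and minimum FPS from a list of
# frame timestamps (in milliseconds), the kind of data a FRAPS frametimes
# log contains. The bucketing approach here is an assumption, not FRAPS's
# exact internal method.

def fps_stats(timestamps_ms):
    """Return (average_fps, minimum_fps) from frame timestamps in ms."""
    start, end = timestamps_ms[0], timestamps_ms[-1]
    duration_s = (end - start) / 1000.0
    # Frames completed over the capture window.
    average_fps = (len(timestamps_ms) - 1) / duration_s

    # Minimum FPS: count frames landing in each whole second of the run,
    # ignoring any incomplete trailing second.
    whole_seconds = max(int(duration_s), 1)
    counts = [0] * whole_seconds
    for t in timestamps_ms[1:]:
        bucket = int((t - start) / 1000.0)
        if bucket < whole_seconds:
            counts[bucket] += 1
    return average_fps, min(counts)

# Example: a synthetic, perfectly steady 60 fps run lasting two seconds
# (one frame every 1000/60 ms).
stamps = [i * (1000.0 / 60.0) for i in range(121)]
avg, low = fps_stats(stamps)
```

In practice, real runs are far less steady than this synthetic example, which is exactly why we report minimums alongside averages - a game that averages 60 fps but dips into the 20s during firefights feels very different from one that holds 50 throughout.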
Hitman: Absolution
We start with the oldest and least-demanding game in our suite, Hitman: Absolution. Released in November 2012, it still impresses with its incredible graphics, along with its use of large-scale crowds on a scale never before seen in a video game. The artificial intelligence required to keep a human population moving requires quite a bit of CPU power, and as you can see, Hitman responds somewhat to overclocking, but more so to Hyperthreading. All the roaming pedestrians on the Chinatown level require lots of threads to keep in motion, and we pick up about 4 percent in performance at the lower core clocks using HT. At the maximum overclock, we finally become GPU-limited. This game is very interesting in that while it clearly likes more CPU power, it doesn't demand higher clocks - even with a 36 percent difference in core clock between our lowest- and highest-spec test configurations, we only pick up 6 percent more frames.
Far Cry 3
Far Cry 3 was released just weeks after Hitman: Absolution, and yet its game engine was far more demanding (but in our opinion, not necessarily better-looking). In our previous testing, we found that HT made a dual-core quite playable, but that this game basically won't run on two cores without the benefit of HT. That being said, our previous findings regarding HT on quad-cores are confirmed here - it simply doesn't help. In fact, it takes a big chunk out of the minimums at the lowest clock, and it isn't until we reach our maximum overclock that the HT system catches up to the non-HT system. We can conclusively say, then, that while overclocking does help in this game, HT doesn't. Furthermore, this game's graphics engine is demanding enough that the CPU isn't the critical component here - your video card is really going to be the determining factor.
Tomb Raider
Tomb Raider was released in March 2013 and introduced a new graphics technology, TressFX, which simulates individual hair strands. That graphics tech puts a lot more strain on the GPU than the CPU, so we're fairly GPU-bottlenecked here. Overclocking the CPU does almost nothing, but interestingly, we see that HT has a pretty significant positive effect on minimum frames per second, at least at the lower clock speeds.