Introduction

Following up on our VRAM Overclocking article, we take the next obvious step and look at the effects of GPU core overclocking. We'll follow a similar procedure, testing core overclocks at specific increments and tracking the performance gain over the reference baseline. As in our last article, we'll be using two different cards - an Nvidia GeForce GTX670 and an AMD Radeon HD7870 - in part to determine whether Nvidia's and AMD's architectures respond differently to increasing core frequency. Both cards will run at the highest stable VRAM overclock we achieved in our prior testing, to eliminate memory bandwidth as a limiting factor as much as possible. We'll also be testing both cards in our high-end gaming system, built around an Intel i7-3770K CPU overclocked to 4.4GHz.

For the uninitiated, "overclocking" simply means setting a component to operate at a frequency above the standard frequency set at the factory. Overclocking is most often applied to CPUs and GPUs, although both system memory and video card memory can be overclocked as well (the latter demonstrated in our prior article in this series). There's even been talk of "overclocking" monitors recently, but we won't get into that here! For the most part, overclocking can be done relatively safely: most components have at least a small amount of tolerance for operating above stock frequencies, since no manufacturer would risk selling components at their absolute limit, especially given the variability from one sample to the next. As it turns out, both of the video cards in this test have roughly the same amount of overclocking "headroom" - about 15 percent - but that is entirely by chance. Even within a specific brand and model of video card, you could find up to 10 percent variability in overclocking headroom. Ultimately, that is not what this article is about; what we're looking at is what you get for each incremental increase in frequency. Our results can be roughly scaled up or down based on how lucky you are with your particular card.
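If your card's headroom differs from ours, a rough first-order estimate is to scale our measured gains by the ratio of the headrooms. Here's a minimal Python sketch of that back-of-the-envelope math - the numbers in it are hypothetical placeholders, not our benchmark results:

    # Back-of-the-envelope scaling of our results to a card with different
    # headroom. All numbers here are hypothetical placeholders.

    def estimate_gain(measured_gain_pct, measured_headroom_pct, your_headroom_pct):
        """Linearly rescale a measured gain to a different overclock headroom."""
        return measured_gain_pct * (your_headroom_pct / measured_headroom_pct)

    # Example: a 12% gain measured at 15% headroom, rescaled to a card
    # with only 10% headroom:
    print(round(estimate_gain(12.0, 15.0, 10.0), 1))  # -> 8.0

Keep in mind this assumes roughly linear scaling, which, as our results below show, is optimistic for some games.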

You may wonder why we didn't start with this article. The answer is fairly simple: we found that video card performance exhibited unusual behavior under VRAM overclocking, behavior that had gone undocumented until we identified it during testing. We wanted to get that article out for readers to see, but we always intended for this article on core overclocking to be one of the central entries in our Gamer's Bench series. And now, on to our findings!

Benchmark Results

Test Bench: Intel i7-3770K@4.4GHz, Asus Maximus V Gene Motherboard, 16GB DDR3@1866MHz, Nvidia GeForce Driver 314.22, AMD Catalyst Driver 13.4

For each of our video cards, we tested four core frequencies at exact 5 percent intervals, from the reference baseline up to a 15 percent overclock. For our Nvidia card, the frequencies refer to the "Boost" clock, which is higher than the published core clock; we carefully monitored the boost clock during testing to make sure it remained constant (Nvidia's Boost feature can vary with temperature and load, which makes it somewhat tricky to benchmark). The table below lists the frequencies we tested. Note that for both cards, a VRAM overclock of approximately 15 percent was applied to minimize the extent to which memory bandwidth limits scaling of the core overclock. While both cards were factory-overclocked, we ignored those factory overclocks for our testing and used the frequencies at which reference GTX670 and HD7870 cards operate as our baseline.

[Table: core frequencies tested for each card]
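For readers who want to check the math, here's a minimal Python sketch of how those 5 percent steps work out; the baselines are the published reference clocks for these cards (980MHz boost for the GTX670, 1000MHz core for the HD7870), not our factory-overclocked samples:

    # The four core frequencies tested per card, in 5% steps above the
    # reference baseline (published reference clocks, not factory-OC clocks).

    REFERENCE_MHZ = {"GTX670 (boost)": 980, "HD7870": 1000}

    for card, base in REFERENCE_MHZ.items():
        steps = [round(base * (1 + pct / 100)) for pct in (0, 5, 10, 15)]
        print(card, steps)
    # GTX670 (boost) [980, 1029, 1078, 1127]
    # HD7870 [1000, 1050, 1100, 1150]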

One additional note - every benchmark below was tested three times to minimize the effects of test-to-test variability. We report the averaged results.
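To make that procedure concrete, here's a minimal sketch of the averaging and percent-gain math behind every figure in this article - the scores below are made-up placeholders, not our actual results:

    # Each result is the average of three runs, reported as a percent gain
    # over the reference-clock baseline. Scores are made-up placeholders.

    def average(runs):
        return sum(runs) / len(runs)

    def percent_gain(overclocked, baseline):
        return (overclocked / baseline - 1) * 100

    baseline_runs = [6510, 6480, 6495]  # hypothetical scores at reference clocks
    oc_runs = [7290, 7310, 7305]        # hypothetical scores at +15% core

    print(f"+{percent_gain(average(oc_runs), average(baseline_runs)):.1f}%")  # -> +12.4%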

3DMark Fire Strike Performance Preset

[Chart: GTX670 - 3DMark Fire Strike Graphics Score by core frequency]

Our first test is the recently released 3DMark Fire Strike benchmark. We analyze only the Graphics Score, as the overall score is significantly affected by other components in the system, and here we're trying to isolate the video card as much as possible. This benchmark utilizes all of the latest DX11 features, renders internally at 1920x1080, and is then scaled to the resolution in use on the test system.

[Chart: HD7870 - 3DMark Fire Strike Graphics Score by core frequency]

Perhaps not surprisingly, the synthetic 3DMark Fire Strike benchmark shows some of the best overclocking results of all the benchmarks we ran, with a 12.1 percent boost on the GTX670 and an 8.7 percent boost on the HD7870. Also of note, the scaling curve is relatively linear, indicating that the core is likely the limiting factor in this test, especially given the pre-overclocked VRAM on our test cards.

Metro2033 Frontline Benchmark (1920x1080, 4x Anti-Aliasing, Maximum Settings, No Nvidia-Specific PhysX)

[Chart: GTX670 - Metro2033 results by core frequency]

For our game tests, we'll proceed in order of release date, oldest first, in case we see a pattern relating the vintage of a game engine to core overclock scaling. Metro2033, released in March 2010, happens to be our most taxing benchmark, likely due in part to the inefficiency of its game engine. It uses very complex lighting, blur, and fog effects, and it's possible that its early DX11-based engine was simply a bit ahead of its time. What's certain is that it is not the best-looking game in this test, despite the strain it puts on our cards.

[Chart: HD7870 - Metro2033 results by core frequency]

Metro2033's built-in benchmark demonstrated scaling starkly different from 3DMark's. It was by far the worst of the six benchmarks we ran, with performance just 6.1 percent faster on the GTX670 and 6.0 percent faster on the HD7870 when both are overclocked 15 percent. We previously found that this game responded very positively to VRAM overclocks, so our assumption is that its graphics engine is simply more bandwidth-constrained than the typical game engine.
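One way to quantify the difference between these two benchmarks is to divide each measured gain by the applied overclock, yielding a rough "scaling efficiency." A quick sketch using the figures reported above:

    # "Scaling efficiency": measured gain divided by the applied core overclock
    # (1.0 would mean perfectly linear scaling). Gains are the figures reported
    # in this article; 15% is our maximum overclock step.

    CORE_OC_PCT = 15.0
    gains_pct = {
        ("Fire Strike", "GTX670"): 12.1,
        ("Fire Strike", "HD7870"): 8.7,
        ("Metro2033", "GTX670"): 6.1,
        ("Metro2033", "HD7870"): 6.0,
    }

    for (bench, card), gain in gains_pct.items():
        print(f"{bench} / {card}: {gain / CORE_OC_PCT:.2f}")
    # Fire Strike: GTX670 0.81, HD7870 0.58
    # Metro2033:   GTX670 0.41, HD7870 0.40

By this measure, Metro2033 extracts roughly half the benefit per clock increase that Fire Strike does on both cards, which fits our theory of a bandwidth-constrained engine.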