Who needs sleep when you have caffeine? We take a
second GeForce GTX 680 and run it in SLI against two Radeon HD 7970s.
Then we add 5760x1080 benchmark results. Then we overclock our
single-GPU flagships for a third comparison. Does our story change?
In the last week, I ate terribly, drank horrible, sugary things, and slept just enough to get myself into the lab to run benchmarks. Yesterday, as I polished off the last bit of text for GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation, I truly thought I was in for a relaxing early weekend. But then the FedEx guy knocked. Oh, FedEx guy. The rumble of your truck is a bittersweet symphony.
Sure enough, I had a second GeForce GTX 680 on my hands. And I had already purchased a pair of Radeon HD 7970s. Really, there would be no excuse for not comparing the four boards.
I wanted to test 5760x1080 performance in my original piece, but simply ran out of time after the first three resolutions. Nvidia posted a new driver immediately after the launch, so I could also take the opportunity to verify that its NVEnc/MPEG-2 bug was fixed (it's not; Update: Nvidia tells us that NVEnc is working, but that its CUDA cores are yielding better performance, masking the impact of its fixed-function hardware). Many folks asked to see a comparison between overclocked Radeon HD 7970 and GeForce GTX 680 cards, and that became possible as well.
And oh, there’s the issue of availability. We’ve been quick to slam AMD in the past for paper-launching its products, sending samples out to press with a note to expect hardware weeks later. Nvidia made it a point that GeForce GTX 680 would be available on launch day, and an early leak from Newegg provided confirmation enough. Unfortunately, the boards disappeared within hours of going on sale, and there are no longer any GeForce GTX 680s to buy. Nvidia says that another wave will hit our shores in early April (Update: Nvidia also claims that a quantity of boards is arriving at AIBs every day, so keep checking online. The "early April" guesstimate referred to what the company calls "going virtual," where its partners will start shipping their own customized designs and volume ramps up substantially. We'll be keeping an eye on the availability story in the meantime; as of 3/25, there are none available). But until then, everyone who didn’t set their alarm for the 6:00 AM embargo is just as stuck as the folks who had to wait weeks for Radeon HD 7970s, and that’s no fun.
Lastly, I made a mistake in yesterday’s story. In calculating the performance per watt index of the GeForce GTX 680, the formula in a single Excel cell was inverted, and the 680’s performance ended up being divided by the 7970’s power. The result was an overstatement of Kepler’s efficiency compared to Fermi, which has already been fixed in the launch story. GK104’s performance per watt gain over the GeForce GTX 580 is still significantly better than Tahiti’s—but not to the degree previously reported.
At any rate, today’s update should give you an even more complete picture of the GeForce GTX 680’s behavior and how it compares against AMD’s fastest single-GPU card. Let’s have a look at the card from EVGA enabling our SLI-based benchmarks, and then get on with the numbers!