SLI. Crossfire. These terms evoke images of multi-GPU overclockers setting world record benchmarks and gamers pushing their performance far beyond the boundaries of conventionality. Frame rates that mere mortal single-GPU owners could only dream of, and if nothing else, the ability to go on eBay when the next generation came out and pick up a relatively cheap performance boost by doubling up on an existing card, courtesy of someone who had jumped to the newer generation.
Enthusiasts adored it, comment sections raged with benchmark glory hunters trying to squeeze another few MHz of overclock out of their chips, and everyone looked down on the peasantry of the console world. All was right and good in the land of PC Master Racedom.
The thing is, though, some of us missed the point of multi-GPU entirely. Yes, for outright performance chasing headline numbers, nothing could touch a system with two, three or even four graphics cards, but at what cost?
MGPU: Welcome to the Land of Plenty
Yep, but alongside all the positive connotations of the land of plenty, the things we associate with benchmark scores and high frame rates, there were plenty of negatives too, and they were many and well publicised:
- Games often launched without MGPU support.
- Day 1 drivers often didn’t handle MGPU.
- Scaling ratios were quite variable with 2 cards and significant diminishing returns kicked in above 2.
- MGPU game profiles were often hit or miss, being based on the underlying engine rather than tuned to the specific game.
- If you managed to get everything working, micro-stuttering, flickering and other technical glitches would regularly manifest.
It’s fair to say that there have historically been a lot of issues with multi-GPU implementations, whichever brand of poison you chose. If you could live with the shortcomings, it’s true that you could often get amazing performance, but we’re definitely seeing a shift away from the MGPU concept, even from the manufacturers.
NVIDIA, AMD Taking a Break from Multi-GPU
We’re seeing both manufacturers slowly shift their stance away from multi-GPU. There’s the gradual decline in dual-GPU single cards aimed at gamers: AMD still produces them in small numbers, but the focus is definitely shifting away from gaming and gamer-oriented budgets, and NVIDIA hasn’t made one since the disastrous Titan Z. Then there’s the slow withdrawal of supporting technologies, with NVIDIA cutting generic support from up to four cards down to two, and the new Titan V, which launched late last year, dropping SLI and NVLink altogether (although admittedly the Titan V is more of a pro card than previous Titans).
Arguments abound, chief among them that the real reason these edge-case-pushing scenarios are disappearing is DirectX 12. With the API natively supporting multi-GPU without the need for the traditional manufacturer-backed offerings, the traditional view of multi-GPU is no longer required. A GPU power utopia of logic which just magically matches the relative performance levels of any given combination of cards will sort out any issues that occur and lead us all to the promised land of consistently smooth, high frame rate gaming performance…
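To make that balancing act concrete, here is a toy Python sketch of my own (an illustration, not any engine's or API's real code) of the core problem DX12 explicit multi-GPU hands to each game engine: splitting a frame's workload in proportion to each card's measured throughput, because an even split would leave the slower card as the bottleneck.

```python
# Toy model only, not real D3D12 code. Under DX12 explicit multi-GPU,
# the engine itself must divide work between cards of unequal speed;
# splitting in proportion to measured throughput is the naive baseline.

def split_workload(total_items, throughputs):
    """Assign work items to each GPU in proportion to its throughput."""
    total = sum(throughputs)
    shares = [total_items * t // total for t in throughputs]
    # Hand any rounding remainder to the fastest GPU.
    shares[throughputs.index(max(throughputs))] += total_items - sum(shares)
    return shares

# A card paired with one half its speed should get a 2:1 split,
# otherwise the slower card dictates the frame time.
print(split_workload(3000, [100, 50]))  # -> [2000, 1000]
```

Even this trivial version glosses over the hard parts: transfer costs between cards, work that cannot be split, and throughput that varies per scene, which is exactly why "the API will just sort it out" is an optimistic reading.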
There May be Trouble Ahead
Engine developers are indeed (slowly) updating their engines to support DX12 explicit multi-GPU, but something odd strikes me at this point.
If relatively closed ecosystems with tightly controlled hardware specifications can’t deliver the kind of compromise-free performance we want from multiple graphics cards, why would we imagine that an open API, one which has to accommodate significant performance variations between cards, should do a better job?
In many ways, it’s the classic Apple vs. Android or Apple vs. Windows argument. As much as many of us hate one side or the other, each has its adherents. The point is that the SLI and Crossfire implementations strike me as the Apple approach to MGPU: control the hardware and software requirements to create the best possible user experience. All but the most ardent fanboys will admit there were problems even with that approach.
Externalising the problem to the API simply creates a less tightly controlled environment, one which will logically suffer more as the performance disparity between cards grows, particularly under alternate frame rendering (AFR), which is still the likeliest method of handling MGPU.
The Crux of the Issue
Gamers are a funny bunch. We chase frame rates as the holy grail of our experience, but in many ways this is fast becoming an outdated mode of thinking. A friend of mine attempts to go multi-GPU pretty much every generation, quickly regrets it and returns a card. Another still swears by it. Arguments rage between the two about how much better their given approach is, and I was struck by a recent exchange in which the MGPU fan effectively said “screw you guys, I’m going to game on my awesome dual-GPU setup” and the other replied “what game are you gonna play, 3DMark?”
What we really want is the best gaming experience we can get. As the owner of a Titan X (GP102-400 version), I get pretty good performance in many games. I sometimes need to compromise on settings at 4K, but it’s not that bad. Even so, I was avidly awaiting the Titan Volta until…
I got a G-Sync monitor. And I have to say, I’m amazed by the results. Having almost constantly watched the frame rate counter in the bottom right-hand corner of my monitor whenever I was in game, for the first time ever I’ve actually turned it off (OK, that’s a lie, I haven’t really, but I could if I wanted to, honest). The thing is, although I’m a performance geek and chase headline numbers as much as anyone, they no longer matter as much. Sure, I’ll still run a game’s benchmark if it has one and play around with settings, but in all honesty, the difference between Assassin’s Creed Origins at 35FPS and 60FPS isn’t particularly noticeable to me anymore.
The visual anomalies associated with low frame rates simply disappear, so the question really is: are we witnessing the zenith of the constant chase for more power, an edge case kept alive for a handful of fans who want to argue over benchmark scores? I believe we are, and when gamers realise this for real, the market for MGPU will continue its inexorable decline and go gentle into that good night.