It's time for me to admit that AI-accelerated frame generation might actually be the way of the future and that's a good thing
This week: I've been trying to make heads or tails of the million-and-one new and exciting products announced at CES, all while soldiering bravely on through a cold. A true martyr.
CES had a lot to offer this year, but the main announcement for us PC gamers has without a doubt been the Nvidia RTX 50-series. It feels like it's been forever and a day since RTX 40-series cards became the best graphics cards, but the RTX 50-series is finally officially here—well, just as soon as the cards actually launch at the end of January and through February, that is.
Apart from the RTX 5070 seeming to have a shockingly reasonable price tag and the RTX 5090 having a downright painful one, the main thing that's struck the heart of many a PC gamer has been Nvidia's claim that the RTX 5070 will deliver "twice the performance of the 4090". And while some have been delighted by that prospect, others have responded with cynicism, pointing out that Nvidia's claim will only be true if DLSS 4 is enabled.
Apart from the urge to express an obvious response to such cynics—"duh, of course that's only with DLSS 4 enabled, Nvidia's been pretty up-front about that"—I think this is the first time I've realised that I don't actually care whether my frames are made by traditional rendering or by some AI-accelerated frame generation magic.
And trust me, that actually kind of pains me to say. For years now, I'd considered myself a staunch enemy of fake frames. Only those sweet real ones for me, thank you—ones borne of the blood and sweat of traditional shader cores.
Why was I so anti-frame gen? Well, after waving away the smokescreen reasons I only ever actually half cared about—latency, artifacts, and so on—the real reason, I must admit, was that something just rubbed me the wrong way about not owning my own GPU power. I thought: "Hey, if I'm paying hundreds for a piece of hardware, I don't want that performance to be reliant on Nvidia's machine learning and the beneficent game devs who decide to implement it. I want raw horsepower."
But now, I'm starting to realise that this argument's not quite right. After all, what performance would I actually own if a GPU was just packed with CUDA cores? Those cores wouldn't mean a damn thing without (at minimum) good drivers and game devs making proper use of them. The GPU cores are nothing in themselves. I'd been reliant on software all along, I just didn't realise it.
What I've come to realise is that AI-accelerated frame generation is just another way of utilising GPU hardware to generate frames. It's no less "local" than CUDA Cores or Stream Processors unless I arbitrarily pick "does not rely on machine learning" as the criterion for "local". But what reason do I have for picking that criterion, given CUDA Cores/SPs rely on a whole lot of software, too?
The only real reason for me to pick that criterion is that traditional rendering is what I'm used to. But the future is now, old man. That's what I find myself telling myself when I see Nvidia's RTX 50-series and DLSS 4 performance claims. If AI-accelerated rendering works, maybe it's time I get with the program, especially if the results are as dramatic as Nvidia's claiming.
Maybe those who sneer "the RTX 5070 will only offer double the RTX 4090's performance if it uses DLSS 4" are akin to the luddite saying "the car will only go faster than the horse if it uses wheels." Maybe we need to accept that wheels are the future, and that that's okay.
Of course, all of this depends on whether new frame gen tech can deliver on the quality front. I was sceptical of DLSS 3's frame gen for a very long time, but most of the wrinkles have been smoothed out now. And if initial hands-on reports of FSR 4 are anything to go by, AMD's upcoming frame gen tech seems very impressive.
Ah, but then there's latency. That circle, unfortunately, is harder to square. As our resident cynic Jeremy Laird reminded me earlier today, only a "real" frame can help with latency. AI-generated frames can never improve it, which means at best you're stuck with whatever latency you would have been getting before the extra frames were generated.
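To put rough numbers on that point (the figures below are mine and purely illustrative, not anything Nvidia has published): if a game renders 60 real frames per second and frame generation doubles that to 120 frames on screen, your input is still only picked up at the 60 fps cadence, so the latency floor stays where it was. A minimal sketch:

```python
# Back-of-the-envelope illustration (my own numbers, not benchmarks):
# generated frames raise the frame rate you see, but the game only samples
# your input once per *real* rendered frame.

def input_latency_ms(real_fps: float) -> float:
    """Roughly how long, in ms, before your input can show up in a real frame."""
    return 1000.0 / real_fps

real_fps = 60        # traditionally rendered frames per second
gen_multiplier = 2   # one AI-generated frame slotted between each pair of real frames

print(f"Displayed: {real_fps * gen_multiplier} fps")                 # 120 fps on screen
print(f"Input latency floor: ~{input_latency_ms(real_fps):.1f} ms")  # still ~16.7 ms
```

Smoother motion, same responsiveness, in other words.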
One initial response to this is to say that the games where latency matters the most—esports titles—tend to be easier to traditionally render, meaning we might not need to worry too much about them, anyway. But that's a bit of a cop-out, I suppose, because we do also want low latency in non-esports titles.
So, I'll hold my hands up and say we definitely need to keep some of the old as we hurry in with the new. We're always going to need traditional rendering—even the AI king himself, Nvidia CEO Jen-Hsun Huang, says so—not least because these are the frames that can actually adjust to your input. The frames in between are essentially just padding. (Though I do wonder whether there could be a way to change that in the future. For instance, perhaps there'll someday be a way to inject input into the frame generation pipeline, i.e. take a control input to guide the next frame's generation.)
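For what it's worth, here's a purely hypothetical sketch of what that could look like. None of the names or numbers below come from any real frame generation tech; they're invented just to show where fresh controller input could slot in.

```python
from dataclasses import dataclass

# Purely hypothetical sketch -- no shipping frame generator works this way, and
# every name here is mine, invented for illustration only.

@dataclass
class ControlInput:
    look_dx: float  # horizontal look movement since the last real frame
    look_dy: float  # vertical look movement since the last real frame

def generated_camera_nudge(control: ControlInput, frame_fraction: float) -> tuple[float, float]:
    """Bias a generated frame's camera by input received after the last real frame.

    frame_fraction is how far between two real frames the generated frame sits (0 to 1).
    """
    return (control.look_dx * frame_fraction, control.look_dy * frame_fraction)

# A generated frame halfway between two real frames, nudged by fresh mouse input:
print(generated_camera_nudge(ControlInput(look_dx=4.0, look_dy=-1.5), frame_fraction=0.5))
# -> (2.0, -0.75)
```

Whether a neural frame generator could actually be steered like that is well above my pay grade, of course.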
Thankfully, it does seem like AMD and Nvidia are keeping the old with the new. We do still see improvements in traditional rendering performance, after all. The problem is that these improvements might be starting to plateau, perhaps as a simple result of Moore's law slowing down. (Jeremy the cynic chimes in again here to point out that Nvidia and AMD could be exaggerating the extent to which Moore's law is limiting core density.)
In which case, would we rather GPU companies didn't try to give us heaps of extra performance in other ways? Yeah, no. I think I'm finally ready to admit that I like frame gen. Frame gen improvements are perfectly reasonable replacements for traditional rendering improvements, especially given that the latter seems like an increasingly low-return proposition compared to the might of AI acceleration.