AI may move at the speed of light, but Nvidia's machine learning model for frame generation took six years to develop
Earlier today we reported how Nvidia's DLSS upscaling tech went from an idea popping out of CEO Jensen Huang's head to a SIGGRAPH keynote in just two weeks. The AI model behind Nvidia's frame generation technology, it turns out, took rather longer to develop: six years, in fact.
Again, the revelation comes from a new book on Nvidia, The Nvidia Way: Jensen Huang and the Making of a Tech Giant by Tae Kim. Development of Nvidia's Frame Generation technology, which inserts AI-rendered game frames in between frames rendered in the traditional GPU 3D pipeline, was headed up by Bryan Catanzaro at Nvidia Research.
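To illustrate the basic idea of generating an in-between frame, here is a deliberately naive sketch. To be clear, Nvidia's Frame Generation uses a trained neural network fed with motion vectors and optical flow data, not a simple pixel blend; this toy example (with a hypothetical `interpolate_frame` helper) only shows the concept of synthesizing a frame between two rendered ones.

```python
# Conceptual sketch only: naive linear blend between two frames.
# Nvidia's actual Frame Generation is an AI model using motion vectors
# and optical flow; this toy code just averages pixel values.

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two frames (lists of rows of RGB tuples) at position t."""
    return [
        [
            tuple(round(a * (1 - t) + b * t) for a, b in zip(pa, pb))
            for pa, pb in zip(row_a, row_b)
        ]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny 1x2 "frames" of RGB pixels
frame1 = [[(0, 0, 0), (100, 100, 100)]]
frame2 = [[(200, 200, 200), (100, 100, 100)]]

midpoint = interpolate_frame(frame1, frame2)
print(midpoint)  # [[(100, 100, 100), (100, 100, 100)]]
```

The hard part, and presumably what took six years, is making the generated frame look plausible when objects move, occlude each other, or change appearance between frames, which is where the learned model comes in.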
Author Kim says Catanzaro spent six years developing a sufficiently accurate AI model for the frame generation feature. "While we were working on it, we saw continuous improvement in the quality of results, so we kept working. Most academics don't have the freedom to work on one project for six years because they need to graduate," Catanzaro explains in the book.
Kim says frame generation, and DLSS more broadly, are poster examples of Nvidia's new approach to gaming graphics. While Nvidia remains committed to rolling out new GPUs on a regular basis, Nvidia Research and other groups within the company pursue additional "moonshots" in parallel.
As we revealed in the other story, Jensen Huang instantly saw how DLSS could be used to increase the prices of Nvidia GPUs. The clever aspect of DLSS and frame generation in that regard is that their costs don't scale with production.
Nvidia can decide, for instance, to make a more complex GPU. That will not only cost more to develop; each and every chip will also cost more to manufacture. With DLSS and its adjacent technologies, by contrast, Nvidia largely only has to pay to develop the algorithms, which then work across a whole suite of GPUs without a huge investment in hardware features and without dramatically increasing manufacturing costs.
It's a very clever approach, both for increasing performance and for scaling up profit margins. In that sense, it's the perfect technology. The only downside for us gamers is that we can see all the money Nvidia is making while it delivers only incremental gains in traditional GPU raster performance. In an ideal world, Nvidia would put just as much effort into scaling the traditional rendering hardware as it does into these new AI-accelerated features.
Case in point: the new RTX 50 series isn't a huge step forward in raw performance, relying instead on Nvidia's new Multi Frame Generation technology to boost frame rates with AI. That's good for Nvidia's bottom line, no doubt, but we'd much prefer to see the underlying capability of Nvidia's GPUs scale a bit more, too.