Nvidia unveils DLSS 3.5: Better ray tracing not only for RTX 4000

AI Frame Reconstruction: How does it work?

Nvidia has now announced a new iteration of its DLSS AI upscaling technology, following on from last year's third generation, DLSS 3. The new DLSS 3.5 is somewhat confusingly named, however, as it is in some ways more of a continuation of DLSS 2.x – this improvement does not depend on DLSS 3 (also referred to as Frame Generation). That means it works on older GeForce RTX 2000 and RTX 3000 generation graphics cards.

The new DLSS 3.5 has one central new feature, which Nvidia has named Ray Reconstruction (it remains to be seen whether this name will come to be used more often than the DLSS 3.5 designation). The goal is to improve the image quality of ray tracing, and what it boils down to is replacing the denoisers used during ray tracing.

Denoiser usage during ray tracing

As you probably know, to render a scene using ray tracing, you need to analyze a large number of light rays hitting and getting reflected from objects. The problem in games (and to some extent even in offline rendering of static scenes and movies, where each frame can take far longer to compute than a game frame) is that there is simply not enough performance to compute as many rays as would be needed.

Therefore, only a relatively small number of rays are analysed. You can think of it this way: instead of a nice final picture, you get not a continuous image, but just individual points forming a kind of noisy image with gaps between them.
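This noise problem can be illustrated with a toy Monte Carlo sketch (purely illustrative, not how a real renderer actually shades pixels): each ray contributes a random brightness sample, and the fewer rays you average, the further the pixel tends to drift from its true value.

```python
import random

def shade_pixel(num_rays, true_brightness=0.5, seed=0):
    """Toy Monte Carlo shading: each ray returns a noisy brightness sample;
    the pixel's value is the average over all rays cast for that pixel."""
    rng = random.Random(seed)
    samples = [rng.uniform(0.0, 2.0 * true_brightness) for _ in range(num_rays)]
    return sum(samples) / len(samples)

# The error relative to the true value shrinks as the ray count grows
# (roughly with 1/sqrt(N), as for Monte Carlo estimators in general).
few_rays_error = abs(shade_pixel(4) - 0.5)
many_rays_error = abs(shade_pixel(4096) - 0.5)
```

With only a handful of rays per pixel, as in real-time ray tracing, every pixel carries a large random error – which is exactly the noise the denoiser has to clean up.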

A schematic of traditional raytracing rendering

The real-time implementation of raytracing in games via DXR (DirectX Raytracing) has used denoiser filters from the beginning to smooth and fill out this image, suppressing those discontinuities and allowing it to be used in the game. There are different types of denoisers, and they can use both temporal (multi-frame processing) and spatial (i.e. smoothing based only on data within a single frame) techniques.
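As a purely illustrative sketch of the spatial approach (real game denoisers are far more sophisticated – edge-aware and guided by scene data such as normals and depth), even a plain box filter shows the basic principle of filling gaps by borrowing from neighboring pixels:

```python
def spatial_denoise(image):
    """Minimal spatial denoiser sketch: replace each pixel with the average
    of its 3x3 neighborhood (a box filter). Gaps where no ray landed get
    filled in from the surrounding samples."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A sparse "noisy" frame: only some pixels received a ray, the rest are gaps (0).
noisy = [[1.0, 0.0, 1.0],
         [0.0, 1.0, 0.0],
         [1.0, 0.0, 1.0]]
smooth = spatial_denoise(noisy)
```

The smoothing is also what causes the downside discussed later: fine detail gets averaged away along with the noise.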

How denoisers work in ray tracing

To be honest, I was actually under the impression from previous presentations that Nvidia was already implementing these raytracing denoisers with neural networks or AI. However, the company now says that these denoisers are still implemented in games as traditional “hand-designed” algorithms, or sometimes combinations of several such algorithms.

Ray Reconstruction using AI

From that starting point, the DLSS 3.5 or Ray Reconstruction technology does a simple thing. Since this task is one of those for which the “black box” nature of artificial intelligence is well suited, Nvidia has done just that: DLSS 3.5 provides a special neural network to use at this point in game rendering, replacing the work of the traditional denoisers. The neural network is trained on a corpus of paired clean and noisy images, similar to how upscaling networks are trained on pairs of original and downscaled images. Once trained, it should perform better than traditional denoisers, according to Nvidia.
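That training setup can be illustrated with a heavily simplified sketch: a single learned blending weight stands in for the millions of parameters a real network would have, and the `train_denoiser_weight` helper and its synthetic (noisy, reference, clean) data are made up for this illustration only.

```python
def train_denoiser_weight(pairs, lr=0.5, epochs=200):
    """Toy training loop on (noisy, reference, clean) triples: learn one
    blending weight w that mixes each noisy pixel with a smoothed reference,
    minimizing squared error against the clean ground truth by gradient
    descent. A stand-in for training a real denoising network."""
    w = 0.0
    for _ in range(epochs):
        grad = 0.0
        for noisy, reference, clean in pairs:
            pred = (1 - w) * noisy + w * reference
            grad += 2 * (pred - clean) * (reference - noisy)
        w -= lr * grad / len(pairs)
    return w

# Synthetic corpus: the reference happens to equal the clean value,
# so the optimal learned weight approaches 1 (trust the reference fully).
clean = 0.5
pairs = [(clean + n, clean, clean) for n in (-0.2, 0.1, 0.3, -0.15)]
w = train_denoiser_weight(pairs)
```

The real network of course learns a far richer mapping than one scalar weight, but the principle – fit parameters so that denoised output matches clean references – is the same.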

This AI de-noising filter is similar in operation to DLSS 2.x – it performs both de-noising and upscaling, but it works on the raytracing lighting image data instead of the final scene’s frames. It uses various data from the game engine to enhance the input rendered frames. According to Nvidia’s presentation, the filter is temporal (it uses and combines data from multiple frames, like 3D denoisers) and uses motion vectors – it puts together several consecutive past frames for temporal filtering, and by doing this it can also restore some detail that would otherwise be lost in the low-resolution processing games use for raytracing effects.
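The temporal-accumulation idea behind such filters can be sketched as follows (a minimal exponential moving average over 1D “frames” with per-pixel motion vectors – an assumption-laden toy, not DLSS’s actual network-based filtering):

```python
def temporal_accumulate(history, current, motion, alpha=0.1):
    """Minimal temporal filter sketch: reproject last frame's accumulated
    value along the per-pixel motion vector, then blend in a small fraction
    (alpha) of the current noisy sample. 1D 'frames' are assumed for brevity."""
    out = []
    for x, cur in enumerate(current):
        src = x - motion[x]  # where this pixel's content was last frame
        if 0 <= src < len(history):
            out.append((1 - alpha) * history[src] + alpha * cur)
        else:
            out.append(cur)  # disocclusion: no valid history, fall back to current
    return out

# Static scene (motion = 0): noisy samples converge toward the true value
# as information accumulates across frames.
history = [0.0] * 4
for _ in range(60):
    history = temporal_accumulate(history, [1.0] * 4, [0] * 4)
```

This also hints at where the artifacts mentioned below come from: if the history is blended in too aggressively, stale data trails behind moving objects (ghosting).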

Schematic of the raytracing pipeline with Nvidia DLSS 3.5 (the diagram also includes frame generation, aka DLSS 3, which is not part of the DLSS 3.5 process)

Integration with DLSS 2.x

An important detail is that this de-noising AI appears to form one common unit with the upscaling AI used for DLSS 2.x – a single model seems to perform both functions. The benefit is that the AI has more information available to do its work. If the two steps operated separately, the DLSS 2.x upscaling step could end up doing a worse job of upscaling lighting effects and detail, because the denoiser running before it would have deleted (smoothed out) some detail and information from the input. An AI integrated in this way can, however, hold on to such information from earlier steps and still apply it as input to its decision-making in later processing steps.

Nvidia claims that using this AI within DLSS 3.5 will improve image quality, as the denoiser and its temporal function will be able to preserve some extra detail while preventing some of the artifacts (temporal ghosting, or detail blurring) that current denoisers cause or are unable to prevent.

However, as with other such AI techniques, it should be remembered that we are talking about a rendering technique that works on the principle of approximation. All these AI techniques are largely about “guessing and making up” visual data from limited and missing image information, so they cannot magically deliver a perfect result. The goal of DLSS 3.5, as with other DLSS iterations, is to achieve a better visual result within certain performance constraints. Still, various artifacts and imperfections can (or rather will) occur in the output. After all, all versions of DLSS have undergone, and continue to undergo, an evolution that is precisely about incremental improvement and mitigation of various flaws and artifacts.

DLSS 3.5 does not imply DLSS 3

According to Nvidia, DLSS 3.5 as we have just described it should work on all GeForce graphics cards with tensor cores, i.e. on the GeForce RTX 2000, 3000 and 4000 series. Unlike DLSS 3, it doesn’t require the new dedicated special-purpose units of the Ada Lovelace generation GPUs. Nevertheless, beware that “DLSS 3.5” in this sense is not a full replacement for (or superset of) DLSS 3, even though the naming convention implies it.

The Frame Generation technology (which inserts extra frames, interpolated from the game’s rendered frames rather than rendered by the game itself), until now referred to as DLSS 3, will still require the dedicated hardware of GeForce RTX 4000 graphics cards. The announcement of DLSS 3.5 does not mean that Frame Generation is coming to GeForce RTX 2000 and 3000 generation graphics cards. In that regard, the designation Nvidia chose isn’t very fortunate.

More: Nvidia DLSS 3 arrives with RTX 4000. The new generation of AI upscaling generates frames and bypasses the CPU limit

In fact, DLSS 3.5 in terms of Ray Reconstruction does not even require DLSS 3 frame generation to be used at the same time, even though Nvidia’s diagram of how it works lists both of these techniques in the overall DLSS 3.5 “flowchart”. However, Ray Reconstruction does need DLSS 2.x to be active at the same time, in order to perform the joint AI denoising and upscaling.

In games in the autumn

According to Nvidia, the technology should appear in Cyberpunk 2077, of which the company showed a demo. It should also be seen in the Portal remake with raytracing effects and in Alan Wake 2.

That’s it for games for now; the other software slated to get the feature relates to graphics rendering outside of games – Chaos Vantage, D5 Render and the Nvidia Omniverse framework. These titles should get it in the autumn, so we won’t have to wait too long for a chance to test it.

Source: Nvidia

English translation and edit by Jozef Dudáš
