Nvidia introduces DLDSR: Dynamic Super Resolution with AI scaling

Deep Learning Dynamic Super Resolution

Last week, Nvidia quietly released the GeForce RTX 3080 with 12GB of VRAM, strangely buried in the announcement of a driver release for God of War. But there was yet another new feature hidden in that release: Deep Learning Dynamic Super Resolution (DLDSR). It’s a new technology for GeForce RTX cards based on DLSS, but this time it’s not about upscaling but, ironically, about downscaling: it enhances the Dynamic Super Resolution feature.

DLDSR is an evolution of the Dynamic Super Resolution feature that Nvidia added to its graphics drivers in 2014. DSR is basically akin to FSAA: when it is enabled, the game renders at (up to) twice the resolution of the monitor in each dimension – so on a 1920×1080 monitor, the game actually renders a 4K image (3840×2160 pixels). This higher-resolution rendering is then scaled back down to the monitor resolution. Doing this increases the quality of detail and provides the equivalent of very good anti-aliasing, because rendering at a higher sample rate is actual supersampling.
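
To make the principle concrete, here is a minimal conceptual sketch of the supersampling idea behind DSR, not Nvidia’s actual implementation: the frame is rendered at a higher internal resolution and then filtered back down to the monitor resolution. The render_scene placeholder and the plain box filter below are our own simplifications for illustration.

import numpy as np

def render_scene(width, height):
    # Placeholder for the game's renderer: returns an RGB frame of the given size.
    return np.random.rand(height, width, 3)

def downscale_box(image, factor):
    # Average each factor-by-factor block of pixels into one output pixel.
    h, w, c = image.shape
    return image.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

monitor_w, monitor_h = 1920, 1080
per_axis = 2                                                        # "DSR 4X": 2x the pixels in each dimension
hi_res = render_scene(monitor_w * per_axis, monitor_h * per_axis)   # 3840x2160 rendered internally
frame = downscale_box(hi_res, per_axis)                             # scaled back to 1920x1080
print(frame.shape)                                                  # (1080, 1920, 3)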

The downside to DSR is, of course, that although you’re playing at a final resolution of 1920×1080, the performance requirements are as high as playing at native 4K. So this image enhancement tool isn’t quite suitable for all games. And this is the very reason Nvidia is now putting a neural network to work – similar to DLSS, it should reduce the performance cost of DSR.

DLDSR will use a lower virtual resolution for rendering than the original DSR, but should achieve comparable quality. Where a 3840×2160 resolution was previously needed (“DSR 4X”, a 4X scaling factor because there are twice as many pixels in each dimension), Nvidia says that “DLDSR 2.25X”, a scaling factor of 2.25X (which is 1.5X in each dimension), should now suffice. The idea is that the AI processing compensates for the lower number of input pixels with better quality.
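
As a worked example of that factor arithmetic (the factor is the ratio of total pixel counts, so the per-axis scale is its square root), assuming a 1920×1080 output resolution:

import math

def internal_resolution(out_w, out_h, factor):
    # The DSR factor is the ratio of total pixel counts, so the per-axis scale is sqrt(factor).
    per_axis = math.sqrt(factor)
    return round(out_w * per_axis), round(out_h * per_axis)

print(internal_resolution(1920, 1080, 4.0))   # (3840, 2160) - "DSR 4X"
print(internal_resolution(1920, 1080, 2.25))  # (2880, 1620) - "DLDSR 2.25X"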

It’s not entirely clear from Nvidia’s description how the neural network is applied. It could perform the actual downscaling to the lower resolution itself, replacing the original algorithm in DSR (which used a 13-tap Gaussian filter, according to Nvidia). Alternatively, the old downscaling algorithm could still be applied, with a DLSS upscaling pass inserted between the natively rendered image and the final downscaling pass: the image rendered at a lower resolution would be raised to a higher resolution via DLSS, and that DLSS output would then be downscaled back to the final resolution.
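
To illustrate the difference between the two hypothesized pipelines, here is a rough sketch. It is purely speculative, and the helpers are hypothetical stand-ins (a plain nearest-neighbour resize replaces both the neural network and the Gaussian filter), not real driver APIs.

import numpy as np

def resize(image, w, h):
    # Nearest-neighbour resize used as a stand-in for any real scaling filter.
    src_h, src_w, _ = image.shape
    ys = np.arange(h) * src_h // h
    xs = np.arange(w) * src_w // w
    return image[ys][:, xs]

def dldsr_option_a(rendered, out_w, out_h):
    # Option 1: the neural network itself downscales from the internal render resolution.
    return resize(rendered, out_w, out_h)

def dldsr_option_b(rendered, mid_w, mid_h, out_w, out_h):
    # Option 2: a DLSS-style upscale to a higher resolution first, followed by
    # a conventional downscale to the output resolution.
    upscaled = resize(rendered, mid_w, mid_h)
    return resize(upscaled, out_w, out_h)

rendered = np.random.rand(1620, 2880, 3)                      # internal 2880x1620 frame
print(dldsr_option_a(rendered, 1920, 1080).shape)             # (1080, 1920, 3)
print(dldsr_option_b(rendered, 3840, 2160, 1920, 1080).shape) # (1080, 1920, 3)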

Comparison of native resolution, conventional DSR 4X and DLDSR 2.25X (Source: Nvidia)

How exactly Nvidia handles this, we will hopefully find out in the future. If it were the second option, they would probably have to use DLSS 1.0, since the second generation with its temporal filtering requires motion vectors, and thus every game must have explicitly coded support for it.

DLDSR should not have this requirement; instead, it can be freely enabled in the drivers. The simpler first option would seem more logical, i.e. that Nvidia uses a neural network directly for the downscaling. Except that in a blog post on DLDSR, Nvidia shows a comparison in which DLDSR 2.25X delivers almost the performance of native rendering at 1920×1080, even though it is said to be internally rendering at 2880×1620 pixels. That clearly doesn’t make sense. Based on this clue, we’d rather expect the feature to work according to the second theory – unless the game is CPU-limited in this comparison (in which case that 145 FPS figure for a native 1080p image would be a misleading cue). We’ll see what further information reveals.

New capability for graphics cards with tensor cores

The DLDSR feature can be enabled in the Nvidia Control Panel under the DSR scaling factor setting. New factors labelled “DL”, indicating the use of AI/DLDSR, are now added there alongside the previously available options. In addition to the aforementioned 2.25X DL factor (which uses a pre-downscaling internal resolution of 2880×1620 for output to 1920×1080), a 1.78X DL setting is offered, which means the game will render at 2560×1440 pixels before downscaling (again with a 1920×1080 final output resolution).
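
As a quick sanity check of those factors (again assuming a 1920×1080 output), the DL factor is simply the ratio of internal to output pixel counts:

def dsr_factor(internal, output):
    # The factor is the ratio of internal to output pixel counts.
    return (internal[0] * internal[1]) / (output[0] * output[1])

print(round(dsr_factor((2880, 1620), (1920, 1080)), 2))  # 2.25 -> "DLDSR 2.25X"
print(round(dsr_factor((2560, 1440), (1920, 1080)), 2))  # 1.78 -> "DLDSR 1.78X"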

You will most likely need to upgrade to the current version of the drivers (511.17) for this new option to be available. This feature also requires a GPU with tensor cores on which the neural network runs, so you must have a GeForce RTX card of some sort, otherwise DLDSR will not work. However, if these requirements are met, DLDSR should work with virtually all games according to Nvidia, so developers don’t need to add explicit support for each specific title like they have to do for DLSS.

How to enable DLDSR in the Nvidia Control Panel (Source: Nvidia)

So DLDSR will again be something you can potentially use to improve the image quality of virtually any game. Whether there will be any downsides to using it, we don’t know yet – for example, some artifacts could appear because the scaling is performed by a neural network.

Source: Nvidia

English translation and edit by Jozef Dudáš, original text by Jan Olšan, editor for Cnews.cz


