Nvidia RTX Video Super Resolution is out: what you need to know

Web video upscaling à la DLSS (1.0)

In January, Nvidia announced a new feature for its graphics cards: RTX Video Super Resolution, which upscales and post-processes videos played in Chromium-based web browsers. Nvidia already uses AI upscaling with temporal stabilization in games under the DLSS name, so applying AI to web video is a logical extension. The feature has now been enabled in the current GeForce graphics drivers, and the first comparisons and feedback are coming in.

Nvidia released driver version 531.18 on February 28th, and one of the new features in this update is RTX Video Super Resolution. According to the original announcement it was supposed to be available in February, so this was a last-minute release. The feature also requires explicit support from the web browser; for now it’s available in Google Chrome and Microsoft Edge, provided you’re on the current version of those. After updating your drivers and browser, you’ll be able to use upscaling via RTX Video Super Resolution on sites like YouTube, Twitch, Hulu, and Netflix.

According to Nvidia’s documents, a new neural network is used, different from the one in DLSS, and it is trained on different data sets. RTX Video Super Resolution works purely with the pixels of the image (video frame); unlike DLSS, it doesn’t use any “meta” inputs like motion vectors and depth information from the game engine. On the one hand, the filter detects and enhances edges; on the other, it also tries to remove compression artifacts.

Nvidia does not indicate that RTX Video Super Resolution has a temporal dimension like DLSS 2.x does – that is, the ability to combine details from multiple consecutive frames. It seems to be purely spatial, like DLSS 1.0. According to the slide describing how it works, the input image is first scaled to the target resolution by a standard bicubic algorithm, and then the AI generates an enhancement on top of that to improve contrast and edge sharpness. Temporal reconstruction does not appear anywhere in the diagram, as you can see here:
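The two-stage pipeline from the slide can be sketched roughly as follows. This is a minimal illustration, not Nvidia’s actual algorithm: nearest-neighbour repetition stands in for real bicubic resampling, and an identity function stands in for the neural network that would predict the edge/contrast enhancement.

```python
def upscale_nearest(frame, scale):
    """Stand-in for the first stage (a real implementation would use
    bicubic interpolation); the point here is only the pipeline shape."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(scale)]
        out.extend([wide[:] for _ in range(scale)])
    return out

def enhance(upscaled):
    """Stand-in for the second stage: the real network predicts an
    edge/contrast correction on top of the already-upscaled frame.
    Here it just clamps values to the valid 0-255 range."""
    return [[min(max(px, 0), 255) for px in row] for row in upscaled]

def vsr_like(frame, scale=2):
    # Per Nvidia's slide: scale to the target resolution first,
    # then apply AI enhancement on top of the scaled image.
    return enhance(upscale_nearest(frame, scale))

frame = [[10, 200], [60, 120]]   # tiny 2x2 grayscale "frame"
out = vsr_like(frame, scale=2)
print(len(out), len(out[0]))     # 4 4
```

The key design point the slide conveys is that the network operates at the *target* resolution and only adds detail; it does not replace the scaler itself.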

Slide showing the principle of RTX Video Super Resolution (source: Nvidia)

How to enable RTX VSR

This feature must be enabled in the Nvidia Control Panel. Go to the video playback settings tab (Adjust Video Image Settings), where the new feature appears under the label “RTX Video Enhancement”. Check “Super Resolution” to activate upscaling. You can also adjust the quality; according to Nvidia, the default setting of 1 should work on all RTX 3000 and RTX 4000 generation graphics cards.

Higher levels need more computing power. Nvidia states that at quality level 4, it should be possible to play “most content” on GeForce RTX 3070/4070-class cards. By “most”, they probably mean that the GPUs can handle upscaling at some combinations of input resolution and frame rate – upscaling 30fps video is obviously less demanding than 120fps video.
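A rough way to see why frame rate matters as much as resolution is to count the pixels the filter has to process per second (this is our own back-of-the-envelope proxy, ignoring the per-pixel cost of the quality level):

```python
def workload_mpix_per_s(width, height, fps):
    # Input pixels the filter must process per second, in megapixels.
    return width * height * fps / 1e6

# Same 1080p input, different frame rates:
light = workload_mpix_per_s(1920, 1080, 30)    # ~62 Mpix/s
heavy = workload_mpix_per_s(1920, 1080, 120)   # ~249 Mpix/s, 4x the load
print(light, heavy)
```

By this measure, a 120fps stream costs four times as much as a 30fps stream at the same resolution, which matches the article’s point that frame rate can push a given quality level out of reach for a mid-range card.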

Turning on RTX Video Super Resolution in the Nvidia Control Panel and setting the quality (source: Nvidia)

RTX Video Super Resolution uses the tensor cores in Nvidia GPUs and is currently only available on GeForce RTX 3000 and RTX 4000 cards. Eventually, support for the GeForce RTX 2000 series is supposed to come as well, but those cards reportedly need algorithm adjustments and therefore won’t get the feature until later; a specific date has not been communicated. The company also says RTX Video is not yet supported on professional graphics cards, only on GeForce models.

Watch out for the battery with laptops

On laptops, you also need to set the web browser to use high performance in the Windows graphics settings – this is so that Optimus technology turns on the dedicated graphics card when you open the browser.

Unfortunately, this means that opening Chrome/Edge (even without video playback) will increase power draw and reduce battery life. Nvidia’s suggested workaround is to have both Chrome and Edge on the laptop: use one for video playback (and set the high-performance mode with the dedicated GPU only for that one) and the other for everything else, so that power consumption doesn’t jump up during regular work.

Settings that force the web browser to use a dedicated GPU on a laptop (source: Nvidia)

HDR video is not supported

The filter supports input videos with resolutions from 360p to 1440p. However, HDR content, YouTube Shorts, and some videos with copyright protection (DRM) are not supported. HDR support could perhaps be added eventually; it would probably require further tweaks, but it doesn’t seem impossible.

Upscaling works within a web page, in windowed mode as well as full-screen, and is activated when playing at any resolution higher than the input resolution. RTX Video Super Resolution is not active when playing at native resolution (i.e., it cannot be used just to remove compression artifacts without scaling), nor does it support downscaling. Upscaling is also disabled when the browser window is in the background or minimized, and when the video is paused. This saves power, of which RTX Video Super Resolution can consume quite a bit.
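The activation conditions described above can be summarized in a small decision function. This is our own sketch of the rules as reported (the function name and parameters are hypothetical, not any real Nvidia API):

```python
def vsr_active(input_height, target_height, hdr=False,
               window_visible=True, playing=True):
    """Rough sketch of when the filter engages, per the article:
    input between 360p and 1440p, SDR only, upscaling only
    (target strictly above input), and only while the video is
    visible and actually playing."""
    if not (360 <= input_height <= 1440):
        return False                      # outside supported input range
    if hdr:
        return False                      # HDR content not supported
    if target_height <= input_height:
        return False                      # no native playback, no downscaling
    return window_visible and playing     # off when minimized or paused

print(vsr_active(1080, 2160))                   # True: 1080p -> 4K
print(vsr_active(1080, 1080))                   # False: native resolution
print(vsr_active(1080, 2160, playing=False))    # False: paused
```

Treating "paused" and "minimized" as hard off-switches is what keeps the feature from burning tensor-core power when no one is watching.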

According to early tests, the utilization of tensor cores can significantly increase power draw compared to traditional video playback. ComputerBase measured an increase of tens of watts on an RTX 3050 card, while Tom’s Hardware, on the other hand, reports very low increases (ComputerBase, however, used 60fps video with a higher input resolution).

First visual tests

Of course, the most important thing for video playback tech is what the result looks like. Several sites have looked at the quality achieved by this feature, including Tom’s Hardware, TechPowerUp, and ComputerBase. This will probably be a matter of taste, but from these comparisons the benefit seems rather minor or even questionable (and for higher input resolutions, it is debatable whether such filters are worth using at all). Without the advantage brought by the game engine’s cooperation, the AI probably can’t achieve as big an improvement as was hoped. In particular, the missing temporal reconstruction, which is probably DLSS 2.x’s main trump card, hurts a lot. DLSS 2.x can get pretty good quality out of a low internal resolution thanks to it, but RTX Video is unable to replicate those results to that extent.

On the other hand, like any post-processing, RTX Video can be a double-edged sword. For example, it has been pointed out that the filter can suffer from the typical “AI” issue seen in smartphone photos – weird, unnatural rendering of bushes and trees. It also seems you can get the so-called oil painting effect, created by a combination of blurring (when removing compression artifacts) and sharpening. This effect used to be seen in DLSS 1.0 (it was pronounced in the Anthem implementation, for example).

It’s interesting that Nvidia’s presentation of this feature leans heavily on playback of captured gameplay videos, rather than more usual content (movies/TV shows, “live” content shot on camera, or animation). It’s exactly that kind of content on which Super Resolution doesn’t seem to bring much improvement, whereas it seems to do better on game footage – that kind of image is probably more suited to, and less hurt by, the kind of anti-aliasing this AI applies. Hence game footage is what Nvidia chooses to present.

However, it is true that with further development and training, the rendering of natural video content could be improved a bit. But upscaling is always a struggle with the impossible task of trying to reconstruct information that is no longer available, so perfect results cannot be expected.

Sources: Nvidia (1, 2)

English translation and edit by Jozef Dudáš
