MSI RTX 4060 Ti Gaming X Trio 8G: Extra efficient and quiet

Methodology: performance tests

The Ada Lovelace generation has finally gained a mid-range graphics card with an Nvidia GPU. The generational speed increase of the RTX 4060 Ti is relatively modest, but power draw has dropped significantly and the overall operating characteristics are pleasing. That is especially true of designs like the MSI Gaming X Trio with a generously oversized cooler: three fans, power draw under 170 W, and gaming performance well above the Radeon RX 6650 XT.

Gaming tests

The largest share of the tests comes from games. This is only natural, given that we will mostly be testing GeForce and Radeon cards, which are primarily intended for gaming.

We chose the test games primarily to keep a balance between titles better optimized for the GPUs of one manufacturer (AMD) or the other (Nvidia). We also took the popularity of the titles into account, so that you can find your own results in the charts, and placed emphasis on genre diversity. RTS, FPS and TPS titles, car racing, a flight simulator and a traditional RPG are all represented, and sports games are represented by the most played football game. You can find the list of test games in the chapter library (chapters 9–32). Each game has its own chapter, sometimes even two for the best possible clarity; this has a good reason, which we will explain in the following text.

Before the gaming tests start, each graphics card passes the tests in 3DMark to warm up to operating temperature. That is a good piece of synthetics to start with.

We test performance in games across three 16:9 resolutions – FHD (1920 × 1080 px), QHD (2560 × 1440 px) and UHD (3840 × 2160 px) – always with the highest graphics settings that can be set identically on all current GeForce and Radeon graphics cards. For the objectivity of the conclusions we turn proprietary settings off, and settings with ray-traced graphics are tested separately, as lower-class GPUs do not support them. You will find those results in the complementary chapters, where ray tracing is measured not only natively, but also with Nvidia DLSS (2.0) and AMD FidelityFX CAS deployed.

If a game has a built-in benchmark, we use it (the only exception is Forza Horizon 4, where due to its instability – it used to crash here and there – we drive on our own track); in other cases the measurements take place in the games’ own scenes. From those we capture the times of consecutive frames in CSV tables via OCAT, which FLAT then interprets into intelligible fps figures. Both of these applications come from the workshop of our colleagues at the gpureport.cz magazine. In addition to the average frame rate, the graphs also show the minimum, which contributes significantly to the overall gaming experience. For the highest possible accuracy, all measurements are repeated three times and the final result is the average of those runs.
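The path from frame-time logs to fps numbers is simple enough to sketch. The snippet below is not FLAT itself, just a minimal illustration (with made-up frame times) of how average and minimum fps fall out of a frame-time capture, and how three runs are averaged into a final result:

```python
def fps_stats(frametimes_ms):
    """Average and minimum fps from one run's frame times (in ms)."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    min_fps = 1000.0 / max(frametimes_ms)  # the slowest frame sets the minimum
    return avg_fps, min_fps

def final_result(runs):
    """Average the per-run results, as with the three repeated measurements."""
    stats = [fps_stats(run) for run in runs]
    return (sum(s[0] for s in stats) / len(stats),
            sum(s[1] for s in stats) / len(stats))

# Three hypothetical runs of the same scene (frame times in ms)
runs = [[16.7, 16.7, 33.3], [16.7, 16.7, 16.7], [16.7, 25.0, 16.7]]
avg_fps, min_fps = final_result(runs)
```

Real tooling would of course parse the OCAT CSV and typically use a percentile rather than the single worst frame, but the averaging principle is the same.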

Computational tests

Testing a graphics card comprehensively in terms of computing power is more difficult than drawing conclusions from gaming, mainly because such tests are usually associated with expensive software that an editorial office does not simply buy. Still, we have found ways to bring the available computing performance to you: first through well-built benchmarks, second through freely available yet relevant applications, and third by investing in some of the paid ones.

The tests begin with CompuBench, which computes various simulations (including game graphics). Then we move on to the popular SPECviewperf 2020 benchmark, which integrates partial operations from popular 2D and 3D applications, including 3ds Max and SolidWorks. Details on this test package can be found at spec.org. From the same team also comes SPECworkstation 3, where GPU acceleration features in the Caffe and Folding@Home tests. You will also find the results of the LuxMark 3.1 3D renderer in the graphs, and AIDA64 adds a remarkable theoretical GPGPU test with FLOPS, IOPS and memory speed measurements.
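For context when reading AIDA64’s FLOPS numbers: theoretical FP32 throughput follows directly from the shader count and clock, since each shader can execute one FMA (two floating-point operations) per cycle. A quick sketch, using the RTX 4060 Ti’s public specifications (4352 shaders, ~2.54 GHz boost) as example figures:

```python
def theoretical_fp32_tflops(shaders, clock_ghz):
    # shaders * clock * 2 ops (one fused multiply-add) per cycle, in TFLOPS
    return shaders * clock_ghz * 2.0 / 1000.0

# RTX 4060 Ti: 4352 shaders at ~2.535 GHz boost -> roughly 22 TFLOPS
rtx_4060_ti = theoretical_fp32_tflops(4352, 2.535)
```

Measured FLOPS should land close to this ceiling; a large gap would point to throttling or a measurement problem.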

For obvious reasons, 3D rendering makes up the largest portion of the tests. This is the case, for example, in the practical Blender tests (2.91). In addition to Cycles, we also test the cards in the Eevee and Radeon ProRender renderers (let AMD have a related test, as most renderers are optimized for Nvidia cards with the proprietary CUDA and OptiX frameworks). An add-on for V-Ray would of course also be interesting, but at the moment the editorial office cannot afford it; we may manage to get a “press” license in time, we will see. We want to expand the application tests in the future, definitely with some advanced AI testing (we have not come up with a reasonable approach yet), including noise reduction (there are already some ideas, but we have not incorporated them due to time constraints).

Graphics cards can also be tested well in photo editing. To get an idea of performance in the popular Photoshop, we use a script in PugetBench which simulates real work with various filters, among them ones that use GPU acceleration. A comprehensive benchmark indicating raster and vector graphics performance is also run in the alternative Affinity Photo. In Lightroom, there is the remarkable Enhance Details function, which we apply in batches to a 1 GB archive of uncompressed raw photos. All of these tasks can be accelerated by both GeForce and Radeon cards.

From another perspective, there are the password-cracking tests in Hashcat with a selection of AES, MD5, NTLMv2, SHA1, SHA2-256/512 and WPA-EAPOL-PBKDF2 algorithms. Finally, in the OBS and XSplit broadcasting applications, we measure how much game performance drops while recording. Recording is no longer handled by the shaders but by dedicated encoders (AMD VCE and Nvidia NVENC). These tests show how much spare performance each card has for typical online streaming.

There are, of course, more hardware acceleration options, typically for video editing and conversion. However, these are purely in the hands of the encoders, which are always the same within one generation of cards from one manufacturer, so there is no point in testing them on every graphics card. They do differ across generations, and tests of this type will appear sooner or later. What remains is fine-tuning the methodology so that outputs always have the same bitrate and can be matched pixel by pixel. This matters for objective comparisons, because one company’s encoder may be faster in a particular profile with the same settings, but at the expense of lower quality than another encoder delivers (or may not be – it is just an example).
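For the pixel-match part, a standard objective metric is PSNR computed over the decoded frames. The helper below is only a sketch of the idea (pure Python over flat lists of 8-bit pixel values), not the tooling we will end up using:

```python
import math

def psnr(reference, encoded, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, encoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)
```

At the same bitrate, the encoder whose output scores higher PSNR (or SSIM) against the source delivers the better quality, which is exactly the comparison the same-bitrate requirement enables.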

New: As of November 18, 2022, we are testing all graphics cards only with Resizable BAR active. There are three reasons why we will not continue with measurements without ReBAR.

The main reason is that new motherboards, starting with Intel Z790 and AMD X670(E) chipset models, already ship with it enabled, which was not the case before, when ReBAR had to be enabled manually in the PCIe settings. So anyone who does not turn it off will be running with ReBAR active, which is a good thing from a gaming perspective, where it adds performance. The second reason is that Intel graphics cards do not seem to behave correctly without ReBAR, and there will probably be more and more graphics cards that count on it in the future.

Finally, it is also true that running all tests twice (with and without ReBAR), each in triplicate, is extremely time consuming. Still, what we have argued many times holds – a platform with ReBAR is less stable when it comes to measurement results. Over time, some things may change in the debugging process (from driver to driver) and may no longer “make sense” when compared with each other. So when you see elsewhere that a slower card outperforms a more powerful one in some particular case, remember these words.

The disadvantage of measurements with ReBAR active is, in short, that comparison tests may not always be perfectly consistent. And there will probably continue to be cases where ReBAR reduces performance rather than adding to it. These are things to reckon with when studying the results – not only ours, but those of everyone who does not retest all older models alongside every newly tested graphics card.



Nvidia DLSS (3.5) in Alan Wake II. When does the game run best?

Alan Wake II is the first game to support Nvidia DLSS 3.5 from the start. In addition to the technological aspect, it also enjoys high popularity among gamers. That gives us good reason to take a close look at performance under different settings. Diving into the gaming performance, we will be interested not only in the visual side, but also in power consumption – not just of the graphics card, but of the CPU as well.


Minitest: Cheap PSUs vs. graphics card coil whine

You have already read the extensive analysis of how graphics card coil whine changes depending on the PSU used. One last thing is missing for it to be complete: adding the behavior of cheap PSUs with lower-quality components. Otherwise, one could still speculate that the situation might differ significantly across classes. Could it? This is what we will focus on in measurements with “80 Plus” PSUs, one of which is also already quite old.


The Ventus 3X (RTX 4070 TiS) case: Final vs. original VBIOS

The GeForce RTX 4070 Ti Super Ventus 3X graphics card came out with a BIOS that MSI (and even Nvidia) was not happy with. After the second revision there is now the third and final revision of the BIOS. It increases the power limit to allow higher GPU clock speeds, though at the cost of slightly lower power efficiency. To update or not to update? That is for everyone to decide for themselves, if they get the chance.


Comments (5)

  1. What’s the minimum non-zero RPM? I’ve heard that it’s 850-860 for Gaming X Trio and Supreme models. Have you encountered any other models that can spin below 1000 RPM?

    I find it impractical for the GPU to operate below 60 degrees Celsius, especially since NVIDIA GPUs don’t have hotspot temps much higher. Some other models enforce a 1000-1300 RPM minimum and since Navi 3 you can’t edit this value with 3rd party software for any of the manufacturers, which makes the limit a disaster for whoever wants to play a less demanding title in the evening.

    1. The minimum/starting speed of the 4060 Ti Gaming X Trio, as far as the starting limit of the default PWM control curve is concerned, is about 780 rpm. But even at lower loads and low GPU temps, when even hotspots are well below 50 °C (typically the load corresponding to 3D rendering@CUDA in Blender), the speed quickly stabilizes at around 1050 rpm.

      Looking at the logs of the quietest graphics cards we have tested, a similar pattern holds. The speed already exceeds 1000 rpm at lower load (although sometimes only by a few tens of rpm), while the starting speeds begin even below 600 rpm. The fans stay there for only a second or two before stabilizing at speeds easily exceeding 1000 rpm. The final speed of course also depends on the intensity of the system cooling. In PC setups with extremely high airflow, or in environments with low room temperature (significantly below 21 °C), the speed will probably stay in triple digits. 🙂

      The starting speeds below 1000 rpm show that there is room for manual control (with user adjustment of the PWM curve) on non-reference cards.

      1. So it’s a 760 RPM sustained speed that can be set as a flat line with Afterburner if I understood correctly. Not bad, not perfect either. Especially if I undervolt, possibly underclock too for less coil whine, and the fan keeps turning on and off due to low temp.

        From GPUs tested so far, do you think that the fan design and quality in MSI Suprim and Gaming Trio cards has any real competition at the low RPM? Based on some reviews, ASUS TUF comes to my mind, but they tend to have a bad coil whine and are usually poorly priced.

        1. Yes, a fixed speed of at least those 760–780 rpm should be settable. Maybe even less, as long as the start-up speed at the boundary between passive and active mode is not the lower limit of what the fans can run at under PWM control.

          I really would not dare claim who has the more efficient cooler. They all tweak key details that increase efficiency at the level of the fans and the heatsink. Unless someone uses really short, stiff blades without significant vibration at the tips (like Gigabyte does, for example), the blades are joined at least in pairs, which is probably enough to significantly reduce vibration and tonal peaks.

          For example, this RTX 4060 Ti has an effective cooler, but the overall result is spoiled by the average noise of the coils. Surely there is a design with quieter coils that will be acoustically more pleasant even with a somewhat less effective cooler. A good example is the Gigabyte RTX 4090 Gaming OC 24G, which has “quiet” coils; its cooler does not have to be the most powerful at a given noise level and the result is still excellent, especially after manual control with reduced speeds. Gigabyte just slightly overdoes the PWM regulation (in order to reduce the number of complaints), although in this generation it already seems a bit softer – the RTX 4090 Gaming OC 24G is a significantly quieter graphics card than the Aorus RTX 3080 Xtreme 10G despite higher power draw from the fans.

  2. Your blog is a much-needed guide for PC enthusiasts seeking clarity on the functionality of GPU fans. The detailed breakdown of why GPU fans are designed to spin and the potential issues users may encounter offers practical insights. This resource is a valuable addition to the toolkit of anyone looking to maximize their graphics card performance.
