MSI Titan GT77: Extreme performance, smaller than ever

Blender – CPU, CUDA and OptiX tests

MSI’s Titans have always represented the absolute most you can get in a laptop. Over time, however, they have been shrinking in size (and, hand in hand with that, in weight) to the point that the new GT77 Titan is actually a fairly compact laptop. Inside it, though, is the most powerful mobile graphics card, the GeForce RTX 3080 Ti, paired with a Core i9-12900HX processor. Loud noise and high temperatures?

Blender – CPU and GPU comparison

We are introducing a new type of test in which we want to show the differences between CPU and GPU rendering and, at the same time, take a closer look at thermal management, clock speeds and power draw in practice, not just the maximum or average values as on the previous pages.

So we compare the course of the BMW test in the latest version of Blender, where in addition to the classic CPU render and the GPU render via CUDA, we can also use Nvidia’s newer OptiX backend, which taps additional hardware resources on RTX graphics cards. While CUDA works only with the shader units, OptiX also uses the RT cores and Tensor cores for acceleration. This broader involvement of the compute units brings higher performance and better efficiency. Application support is already quite decent and comprehensive; for an overview of the editors that support the OptiX API, see Nvidia’s website. Nvidia is serious about this interface and has for some time been developing “Studio” drivers alongside the Game Ready drivers, which are optimized faster and more thoroughly for changes in supported applications.
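Switching between these backends can also be scripted, which is how batch comparisons like this one are usually automated. Below is a minimal sketch using Blender’s Python API (bpy); the scene file name and output paths are only placeholders, and the exact preference fields may differ slightly between Blender versions:

```python
# Minimal sketch: rendering the same scene with CPU, CUDA and OptiX in Blender (Cycles).
# Run with:  blender -b bmw27.blend --python render_modes.py   (file name is illustrative)
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # the BMW benchmark uses the Cycles engine
prefs = bpy.context.preferences.addons['cycles'].preferences

for mode in ('NONE', 'CUDA', 'OPTIX'):  # 'NONE' falls back to a pure CPU render
    prefs.compute_device_type = mode
    prefs.get_devices()                 # refresh the device list for the chosen backend
    for d in prefs.devices:
        d.use = True                    # enable all detected devices for this backend
    scene.cycles.device = 'CPU' if mode == 'NONE' else 'GPU'
    scene.render.filepath = f"//render_{mode.lower()}.png"
    bpy.ops.render.render(write_still=True)
```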

The first graph shows how CPU clock speeds develop over the course of the render. In the classic CPU mode they gradually decrease from 4 to 3.6 GHz. For the combined workload in CUDA mode we see quite a big drop, which will continue to be of interest to us. In OptiX the clock speeds jump all over the place, but in this mode the CPU is barely used, so this result need not worry us.

CPU package power draw shows that in CPU mode, under full load, the processor can draw up to 180 W, but this value drops to 140 W during the test. CUDA reaches a similar threshold but then falls very quickly to 80 W, from which it rises again slightly, but not significantly. OptiX hovers around the 30 W level.

The CPU temperature graph is very important. The maximum load on the processor is reflected in temperatures around 95 °C, which is not low, but the drop in clock speeds was only slight. The combined load again shows a large drop to 70 °C and a subsequent rise back to 95 °C. With OptiX the temperature bounces around 60 °C, which is just a side effect of the cooling being shared with the graphics card.
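For readers who want to reproduce traces like these, here is a rough sketch of a sampling loop in Python using the third-party psutil package. Our own graphs come from dedicated monitoring tools; the output file name and duration below are just placeholders:

```python
# Rough sketch: sampling CPU clock speed and load once per second during a render run.
import csv
import time

import psutil  # third-party: pip install psutil

DURATION_S = 120  # roughly the length of one BMW render pass, adjust as needed

with open("cpu_clock_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_freq_mhz", "cpu_load_percent"])
    start = time.time()
    while time.time() - start < DURATION_S:
        freq = psutil.cpu_freq()             # current / min / max frequency, may be None
        load = psutil.cpu_percent(interval=None)
        writer.writerow([round(time.time() - start, 1),
                         round(freq.current) if freq else "",
                         load])
        time.sleep(1.0)
```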

GPU load in CPU mode is practically minimal; in CUDA and OptiX, by contrast, you can see full load. Note the single big jump in the CUDA trace, which will be the cause of the dips we saw in the CPU graphs.

GPU clock speeds in CPU mode are of course at a minimum, as the dedicated graphics card is not being used. With OptiX we again see a straight line at 2000 MHz, while CUDA fluctuates between 1400 and 2000 MHz. The OptiX graph shows excellent stability at high GPU clock speeds.

We see interesting differences in graphics power draw, where OptiX hovers around 145 W and CUDA peaks at 168 W. In CPU mode, power draw is practically zero. The measured values are the highest we have seen so far, and by a wide margin.

Lastly, a look at the GPU temperatures. In CPU mode there is a visible line rising to 38 °C. In OptiX a large part of the chip is working, yet the temperature only approaches 60 °C, which is also the maximum reached in the combined CUDA mode, even though it is CUDA that usually has the worse results. The laptop therefore handles the load in this test with ease.
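The GPU-side curves (load, clock speed, power draw and temperature) can be logged in a similar way through Nvidia’s NVML library. The following is a rough sketch using the pynvml bindings; the sampling interval, duration and output file are chosen arbitrarily:

```python
# Rough sketch: logging GPU clock, load, power and temperature during a render.
# Requires the nvidia-ml-py package (import name: pynvml).
import csv
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU, i.e. the RTX 3080 Ti here

DURATION_S = 120  # roughly the length of one BMW render pass, adjust as needed

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "gpu_clock_mhz", "gpu_load_percent", "power_w", "temp_c"])
    start = time.time()
    while time.time() - start < DURATION_S:
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # NVML reports milliwatts
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        writer.writerow([round(time.time() - start, 1), clock, util, round(power, 1), temp])
        time.sleep(1.0)

pynvml.nvmlShutdown()
```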

As always, we can see the differences between the individual rendering modes in Blender. We’ll start with the slowest one, CPU mode. It should be noted that this is the fastest result we have seen so far in this mode. The Legion 7 16 AMD loses up to 46 seconds to the Titan, making it almost twice as slow. In CUDA mode, however, the Titan is only two seconds faster, and in OptiX the lead grows to three seconds.


The Ventus 3X (RTX 4070 TiS) case: Final vs. original VBIOS

The GeForce RTX 4070 Ti Super Ventus 3X graphics card came out with a BIOS that MSI (and even Nvidia) wasn’t happy with. After the second revision comes the third and final revision of the BIOS. This one raises the power limit to allow higher GPU clock speeds to be achieved. However, this comes at the cost of slightly lower power efficiency. To update or not to update? That’s for everyone to decide for themselves, if they get the chance. Read more “The Ventus 3X (RTX 4070 TiS) case: Final vs. original VBIOS” »


MSI officially about RTX 4070 Ti Super 16G Ventus 3X faults

MSI has released a statement saying that the RTX 4070 Ti Super Ventus 3X graphics cards did indeed come out with an untweaked BIOS that prevents the card from achieving its maximum performance. However, there seems to be a fix already that could solve everything. Still… let’s revisit this topic and try to sort through the possible technical reasons behind the significant fluctuations in the performance of the cheapest three-fan MSI RTX 4070 Ti Super. Read more “MSI officially about RTX 4070 Ti Super 16G Ventus 3X faults” »


MSI RTX 4070 Ti Super 16G Ventus 3X: Big cooler w/o a markup

The biggest hardware changes compared to the non-Super cards concern the GeForce RTX 4070 Ti Super. What’s different is the GPU, the amount of GDDR6X memory and the width of the memory bus. We have the RTX 4070 Ti Super in one of the cheapest non-reference designs, the Ventus 3X, for analysis, and it will also be about “reputation repair”. MSI has tarnished it a bit in this line of graphics cards in the past, but now it’s a very attractive solution. Read more “MSI RTX 4070 Ti Super 16G Ventus 3X: Big cooler w/o a markup” »


Comments (2)

  1. Wondering how this testing is done when everyone leaves 64 GB DDR5-4800 in the spec, when this laptop is not able to run 4800 MHz with 2 or more sticks inserted. That’s only possible with 1 stick. Even MSI realised this and removed it from the spec. The memory is capable of 4800 in JEDEC profiles 5 and 7 but doesn’t have an XMP profile, and the laptop, even in the advanced menu, doesn’t have an XMP profile listed, just default and custom. Any attempt to change to 4800 results in the PC failing memory retraining. It’s proven that XMP works if you buy different memory modules, like Crucial, but you shouldn’t have to for a price tag of 5k euros. To be fair, the 4000 MHz is OK as it’s CL32, while 4800 MHz would be CL40. Also, during testing you should notice the GPU is running PCIe 4.0 x8 by design. That’s not causing any performance issue, as you are not able to saturate PCIe 4.0 x8 anyway, but it should be mentioned in my opinion.

    1. Hello Martin, you are absolutely right. When checking the screenshots from the tests, I can confirm the GPU is running PCIe 4.0 x8 and also that the memory configuration is 4000 MHz at CL32. We’ll check with the manufacturer why 4800 MHz was advertised, as it’s still shown in a lot of e-shops even at this moment.
