Methodology: performance tests
Not only the highest speed, but also the highest efficiency among AMD's single-chiplet CPUs: that is the essential characteristic of the Ryzen 7 9700X. While the speed difference from the last generation (and the Ryzen 7 7700X) is negligible, zero, or in places even negative, it comes with significantly lower power consumption. And for those who don't value the efficiency, BIOSes with a higher TDP became available less than a month after release.
Gaming tests
We test performance in games at four resolutions with different graphics settings. To warm up, there is the more or less theoretical resolution of 1280 × 720 px. We debated the graphics settings for this resolution for a long time and finally decided on the lowest possible settings (Low, Lowest, Ultra Low, …) that a game allows.
One could argue that at such settings the processor has fewer objects to dispatch for drawing (so-called draw calls). However, with high detail at this very low resolution, there was not much performance difference compared to Full HD (which we also test); on the contrary, the GPU load was clearly higher. This impractical setting is meant to demonstrate the performance of a processor with the lowest possible involvement of the graphics card.
At higher resolutions, high settings are used (for FHD and QHD) and the highest for UHD. In Full HD, anti-aliasing is usually turned off, but overall these are relatively practical settings that are commonly used.
The selection of games was made with regard to the diversity of genres, player popularity and demands on processor performance. For the complete list, see chapters 7–16. A built-in benchmark is used when a game has one; otherwise we have created our own scenes, which we always repeat in the same way with each processor. We use OCAT to record fps, or rather the times of individual frames, from which fps are then calculated, and FLAT to analyze the CSV output. Both were developed by the author of the articles (and videos) on GPUreport.cz. For the highest possible accuracy, all runs are repeated three times and the averages of the average and minimum fps are drawn in the graphs. These repetitions also apply to non-gaming tests.
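For illustration, the way average and low fps can be derived from recorded frame times, and how three runs are averaged, might be sketched like this (a hypothetical sketch, not the actual FLAT tool; the `fps_stats` helper, the percentile choice and the data are all made up for the example):

```python
# Illustrative sketch: deriving average and "minimum" fps from
# per-frame times (in milliseconds), such as those OCAT records.

def fps_stats(frame_times_ms, low_percentile=1.0):
    """Return (average fps, percentile-based low fps) for one run."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # "Minimum" fps here is taken at the slowest n-th percentile frame,
    # which is more robust than the single worst frame.
    slowest = sorted(frame_times_ms, reverse=True)
    idx = max(0, int(len(slowest) * low_percentile / 100.0) - 1)
    low_fps = 1000.0 / slowest[idx]
    return avg_fps, low_fps

# Three runs of the same scene are averaged, as in the methodology:
runs = [[16.7] * 100, [16.0] * 100, [17.4] * 100]   # hypothetical data
stats = [fps_stats(r) for r in runs]
avg_of_avgs = sum(a for a, _ in stats) / len(stats)
```

The exact definition of "minimum" fps (worst frame, a percentile, or a moving window) varies between tools, so the percentile approach above is only one plausible choice.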
Computing tests
Let’s start lightly with PCMark 10, which runs more than sixty sub-tasks in various applications as part of a complete set of “benchmarks for the modern office”. It then sorts them into a few thematic categories, and for the best possible overview we include the points gained in them in the graphs. Lighter test tasks are also represented by tests in a web browser, Speedometer and Octane. The other tests usually represent a higher load or are aimed at advanced users.
We test 3D rendering performance in Cinebench: in R20, for which results are more widely available for comparison, but mainly in R23, where rendering with each processor runs in cycles of at least ten minutes. We also test 3D rendering in Blender, with the Cycles renderer in the BMW and Classroom projects. You can also compare the latter with the test results of graphics cards (it uses the same number of tiles).
We test how processors perform in video editing in Adobe Premiere Pro and DaVinci Resolve Studio 17. We use the PugetBench plugin, which covers all the tasks you may encounter when editing videos. We also use PugetBench in Adobe After Effects, where the performance of creating graphic effects is tested. Some subtasks use GPU acceleration, but we never turn it off, as no one would in practice. Some things don’t even work without GPU acceleration; on the contrary, it is interesting to see that performance in the tasks accelerated by the graphics card also varies, as some operations are still handled by the CPU.
We test video encoding in HandBrake and in benchmarks (x264 HD and HWBot x265). The x264 HD benchmark runs in 32-bit mode (we did not manage to run the 64-bit version consistently on Windows 10, and in general it may be unstable on newer OSes and produce errors in the video). In HandBrake we use the x264 CPU encoder for AVC and x265 for HEVC. The detailed settings of the individual profiles can be found in the corresponding chapter 25. In addition to video, we also encode audio, where all the details are likewise stated in the chapter with those tests. Gamers who record their gameplay may also depend on the performance of CPU encoders, so we also test the performance of “CPU broadcasting” in two popular applications, OBS Studio and XSplit.
We also have two chapters dedicated to photo editing performance. Adobe has a separate one, where we test Photoshop via PugetBench. However, we do not use PugetBench in Lightroom, because it requires various OS modifications for stable operation; to avoid the higher risk of complications, we created our own test scenes instead. Both are CPU-intensive, whether it is exporting RAW files to 16-bit TIFF with the ProPhoto RGB color space or generating 1:1 previews of 42 lossless CR2 photos.
However, we also test CPU performance in several alternative photo editing applications. These include Affinity Photo, where we use the built-in benchmark, XnViewMP for batch photo editing, and ZPS X. Among the truly modern ones are three Topaz Labs applications that use AI algorithms: DeNoise AI, Gigapixel AI and Sharpen AI. Topaz Labs often and gladly compares its results with Adobe applications (Photoshop and Lightroom) and boasts of better output, so we’ll see, maybe we’ll get into that from the image-quality point of view sometime. In processor tests, however, we focus primarily on performance.
We test compression and decompression performance in WinRAR, 7-Zip and the Aida64 (Zlib) benchmark, and encryption and decryption in TrueCrypt and Aida64, where in addition to AES there are also SHA3 tests. In Aida64, we also test the FPU in the chapter on mathematical calculations. From this category, you may also be interested in the results of Stockfish 13 and the number of chess combinations calculated per unit of time. Many tests that fall into the mathematics category are performed in SPECworkstation 3.1. It is a set of professional applications extending to various simulations, such as LAMMPS or NAMD, which are molecular simulators. A detailed description of the tests from SPECworkstation 3.1 can be found at spec.org. From its list we skip 7-Zip, Blender and HandBrake as redundant, because we test performance in those applications separately. The detailed SPECworkstation output usually reports times or fps, but we graph the “SPEC ratio”, which represents points gained; higher is better.
Processor settings…
We test processors in the default settings, without active PBO2 (AMD) or ABT (Intel) technologies, but naturally with active XMP 2.0.
… and app updates
The tests should also be taken with the understanding that, over time, individual updates may affect performance comparisons. Some applications are used in portable versions, which are not updated or can be kept at a stable version, but this is not the case for others; typically, games update over time. On the other hand, intentionally freezing versions (and testing something outdated that already behaves differently) would not be the way to go either.
In short, just take into account that the accuracy of the results you are comparing decreases a bit over time. To make this analysis easier for you, we indicate when each processor was tested. You can find this information in the tooltip of the interactive graphs: just hover the mouse cursor over any bar and the test date of that processor will appear.
- Contents
- AMD Ryzen 7 9700X in detail
- Methodology: performance tests
- Methodology: how we measure power draw
- Methodology: temperature and clock speed tests
- Test setup
- 3DMark
- Assassin’s Creed: Valhalla
- Borderlands 3
- Counter-Strike: GO
- Cyberpunk 2077
- DOOM Eternal
- F1 2020
- Metro Exodus
- Microsoft Flight Simulator
- Shadow of the Tomb Raider
- Total War Saga: Troy
- Overall gaming performance
- Gaming performance per euro
- PCMark and Geekbench
- Web performance
- 3D rendering: Cinebench, Blender, ...
- Video 1/2: Adobe Premiere Pro
- Video 2/2: DaVinci Resolve Studio
- Graphics effects: Adobe After Effects
- Video encoding
- Audio encoding
- Broadcasting (OBS and Xsplit)
- Photos 1/2: Adobe Photoshop and Lightroom
- Photos 2/2: Affinity Photo, Topaz Labs AI Apps, ZPS X, ...
- (De)compression
- (De)encryption
- Numerical computing
- Simulations
- Memory and cache tests
- Processor power draw curve
- Average processor power draw
- Performance per watt
- Achieved CPU clock speed
- CPU temperature
- Conclusion

Article hint: AMD Ryzen CPUs have also drawn power from the ATX 24-pin connector for some non-core rails, IIRC the memory controller. This is basically ignored by all reviewers, by HWiNFO, by AMD Ryzen Master, and by the CPU’s own power tracking. I do not know if this is still the case with the AM5 socket. This has made AMD look much more efficient than it has been. Could you do an investigating test?
We had measurements at the ATX connector in the in-depth motherboard tests, but we eventually removed them. From my point of view, they didn’t provide information useful enough for the evaluation. What sources say that the processors are partially powered from the 24-pin connector? Personally, it doesn’t make much sense to me (for anything in modern CPUs to be powered through such weak wires), even from the EMC point of view. Rather, I would worry that it could lead to unnecessary instability. But that’s just a feeling, a layman’s view.
Then there is the other thing, namely that there are other devices on each rail of the ATX connector. For example, PCIe slots (and typically a graphics card) on 12 V, DDR5 memory on 5 V, and 3.3 V should power M.2 SSDs? Well, it probably isn’t the same on all motherboards (some may use a VRM to convert a higher voltage from another rail?), but even if we had information about the current drawn through the 24-pin ATX connector, it would be quite difficult to separate which part of it belongs to which device/component on the motherboard. Or could it be done? How would you design a methodology for such a test?
Trace out where the VDDIO_MEM3 pins get their power from. https://cdn.hackaday.io/files/1733807417889920/AM4%20Pinout%20Diagram.pdf
Thanks for the very nice diagram. When there is time, we’ll try to study it. In any case, I’m worried about how this would be handled, since the power from the 24-pin wires is shared by multiple devices on each rail… I can’t think of a way to separate the devices. Alternatively, with the right tooling you can measure the current directly on the pins of the socket, but that can probably distort the characteristics of the processor itself to a certain extent.
Many reviewers publish “power at the wall” figures instead of CPU power. In some sense, it is a more relevant measure. As far as I can remember, Ryzens tend to be more efficient than Intels when measured at the outlet, too.
Maybe take a closer look one day and compare the power efficiency according to the different measures? Do those mostly agree or not?
Intel is more efficient at idle and under low load **if** the system (firmware/BIOS) has power savings configured correctly. AMD is more efficient at full load – and by a lot.
— „**if** the system (firmware/BIOS) has power savings configured correctly. AMD is more efficient at full load“
You can apply the same logic here too: if you power-limit an Intel down to AMD’s level, it will also be more efficient. For example, the apparently most efficient 7800X3D got 17,492 points in CB R23 at 83.78 W at the EPS connector. My 14700K, when limited to 40 W, got 19,266 points, so even if the VRMs wasted 50%, it would still be more efficient at that level.
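The arithmetic behind that comment can be sanity-checked with a few divisions (a minimal sketch using the commenter’s own quoted figures, not our own measurements; the `points_per_watt` helper is made up for the example):

```python
# Points-per-watt check for the figures quoted in the comment above.

def points_per_watt(points, watts):
    return points / watts

r7_7800x3d = points_per_watt(17492, 83.78)  # ~208.8 pts/W at EPS
i7_14700k  = points_per_watt(19266, 40.0)   # ~481.7 pts/W at the 40 W limit
# Even if the VRMs wasted half the power (40 W -> 80 W at EPS):
i7_worst   = points_per_watt(19266, 80.0)   # ~240.8 pts/W, still higher
```

Note that the two figures are not directly comparable: 83.78 W is measured at the EPS cables (including VRM losses), while the 40 W limit is the CPU’s own package power, which is exactly why the comment allows for a 50% VRM loss in the worst case.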
Yes, we’ll definitely look at the relationship between isolated measurements (only the CPU and the motherboard VRMs on the EPS cables) and system power consumption at some point. It is a very complex issue. Most reviewers probably measure system power consumption mainly because it is technically easier, but it does not take into account, for example, that differently equipped boards have different power consumption of components not related to the CPU per se. When judging values based on system power consumption, it is also important to note that with different CPUs the power consumption of the same graphics card may differ, which is another factor that distorts the results. Personally, I find it useful to eliminate these factors. However, it may be interesting to investigate the dependency between system power consumption and isolated (purely CPU) power consumption, if in both cases a larger number of model situations with different motherboards and different graphics cards are created.
— „As far as I can remember, Ryzens tend to be more efficient than Intels when measured at the outlet too.“
They are arguably more efficient in our tests as well, aren’t they? 🙂
Whether it’s relevant depends on what we want to compare and evaluate. If this was really measured with a 125 W power limit (https://www.hwcooling.net/wp-content/uploads/2024/08/gigabyte-b650e-aorus-pro-x-usb4-g262.html), it suggests the differences in VRM efficiency can be quite large. So it would be relevant with regard to motherboards, but deceptive when comparing CPUs.