AMD Radeon RX 6800 test: More affordable “Big Navi”

Methodology: performance tests

The Radeon RX 6800 is the cheapest graphics card equipped with the Navi 21 graphics chip. The chip is naturally slower in this card than in the RX 6800 XT and RX 6900 XT, but the RX 6800 clearly offers the most attractive price/performance ratio of the three. Power draw is not bad either; despite the large, partially deactivated core, efficiency is remarkably decent. However, the AMD reference card does have some flaws.

Gaming tests

The largest share of the tests comes from games. Given that we will mostly be testing GeForce and Radeon cards, i.e. cards primarily intended for gaming, this is only natural.

We chose the test games primarily with a view to balance between titles better optimized for the GPUs of one manufacturer (AMD) and the other (Nvidia). We also took the popularity of the titles into account, so that you can find the results of your favorites in the charts, and placed emphasis on genre diversity: RTS, FPS and TPS games, car racing, a flight simulator, a traditional RPG and the most played football game. You can find the list of test games in the chapter library (chapters 9–32); each game has its own chapter, sometimes two, for the best possible clarity – and for a good reason, which we will explain in the following text.

Before the game tests start, each graphics card is run through 3DMark to warm it up to operating temperature. It also serves as a good synthetic benchmark to begin with.

We test gaming performance at three 16:9 resolutions – FHD (1920 × 1080 px), QHD (2560 × 1440 px) and UHD (3840 × 2160 px) – always with the highest graphics settings that can be set identically on all current GeForce and Radeon graphics cards. For the sake of objective conclusions, we turned proprietary settings off, and ray-tracing settings are tested separately, as lower-class GPUs do not support them; you will find those results in the complementary chapters. Besides native ray tracing, we also test with Nvidia DLSS (2.0) and AMD FidelityFX CAS enabled.

If a game has a built-in benchmark, we use it (the only exception is Forza Horizon 4, where, due to its instability – it used to crash here and there – we drive our own route); in other cases we measure on the games’ own scenes. From those we capture the times of consecutive frames into tables (CSV) via OCAT, which FLAT then translates into intelligible fps figures. Both of these applications come from the workshop of our colleagues at the gpureport.cz magazine. In addition to the average frame rate, we also plot the minimum in the graphs, as it contributes significantly to the overall gaming experience. For the highest possible accuracy, all measurements are repeated three times and the final result is their average.
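The frame-time-to-fps step described above can be sketched in a few lines of Python. This is not the actual FLAT tool, just a minimal illustration of the arithmetic; the `MsBetweenPresents` column name follows OCAT’s CSV convention, and using the 99th-percentile frame time as the “minimum fps” is our assumption here (real tools may define the minimum differently).

```python
import csv
import statistics

def fps_stats(csv_path, frametime_col="MsBetweenPresents"):
    """Average and 'minimum' fps from an OCAT-style frame-time CSV.

    The column name is OCAT's convention; adjust it for other capture tools.
    """
    with open(csv_path, newline="") as f:
        frame_ms = [float(row[frametime_col]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
    # Assumption: take the 99th-percentile frame time as the "worst" frame,
    # a common robust stand-in for the minimum frame rate.
    worst_ms = statistics.quantiles(frame_ms, n=100)[-1]
    return avg_fps, 1000.0 / worst_ms

def averaged_over_runs(paths):
    """Final result = mean of the repeated measurements, as in the methodology."""
    runs = [fps_stats(p) for p in paths]
    return (statistics.mean(r[0] for r in runs),
            statistics.mean(r[1] for r in runs))
```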

Tests with AMD Smart Access Memory active will not be part of the standard methodology yet. We will of course cover SAM, but for better orientation we will publish those tests in a separate article. This is only temporary, until GeForce cards support the feature as well; then we will switch to the opposite model and test all cards with SAM turned on. Until then, SAM will be disabled in the standard tests and we will publish the performance gains it brings separately. No one loses out this way (neither those with all-AMD builds nor the owners of Intel platforms) and the clarity of the results is nicely preserved. Putting multiple modes of one card into the same chart (or having 500 charts per article instead of 300) would do no one any good.

We plan one more thing – measuring the impact of various updates (drivers, OS, games, BIOS) on performance once a quarter. This will yield percentage increases or decreases that you can work with when studying older tests. It is a bit of a compromise, but definitely a better option than releasing new tests with out-of-date software. Ideally we would retest all previous cards before every new test, but that is unrealistic. We believe you will also appreciate the continuous measurements with one GeForce and one Radeon card and the inclusion of the resulting coefficient in the criteria of the interactive graphs.
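In principle, the coefficient idea amounts to scaling an older result by the speedup observed on the re-tested reference card. The sketch below illustrates that simple linear correction; the function name and the assumption that the speedup transfers linearly to other cards are ours, not the article’s.

```python
def adjust_older_result(old_fps, reference_old_fps, reference_new_fps):
    """Scale an older measurement by the speedup seen on a reference card
    that was re-tested quarterly with current drivers/OS/game patches.

    Assumption (illustrative only): the reference card's relative gain
    applies linearly to other cards measured under the old software.
    """
    coefficient = reference_new_fps / reference_old_fps
    return old_fps * coefficient
```

For example, if the reference card went from 100 to 105 fps after updates, an older 60 fps result would be adjusted to roughly 63 fps.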

Computing tests

Testing a graphics card comprehensively in terms of computing power is more difficult than drawing conclusions in the gaming area, if only because such tests are usually associated with expensive software that an editorial office does not simply buy. Still, we have found ways to bring you the available computing performance: partly through well-built benchmarks, partly through freely available yet relevant applications, and thirdly, we have invested in some of the paid ones.

The tests begin with CompuBench, which computes various simulations (including game graphics). Then we move on to the popular SPECviewperf (2020) benchmark, which integrates partial operations from popular 2D and 3D applications including 3ds Max and SolidWorks; details on this test package can be found at spec.org. From the same team also comes SPECworkstation 3, where GPU acceleration is used in the Caffe and Folding@Home tests. You will also find the results of the LuxMark 3.1 3D renderer in the graphs, and AIDA64 contributes a noteworthy theoretical GPGPU test with FLOPS, IOPS and memory speed measurements.
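The FLOPS figure such a theoretical GPGPU test reports can be sanity-checked against the card’s paper specification. The usual rule of thumb is that each shader ALU performs one FMA (two floating-point operations) per clock; the sketch below applies it to the RX 6800’s public specs (3840 stream processors, 2105 MHz boost clock), which yields AMD’s quoted ~16.17 TFLOPS. The function is our illustration, not anything AIDA64 exposes.

```python
def peak_fp32_tflops(shader_count, boost_clock_ghz):
    """Theoretical single-precision peak: one FMA = 2 FLOPs per shader per clock.

    shaders * 2 ops * clock (GHz) gives GFLOPS; divide by 1000 for TFLOPS.
    """
    return shader_count * 2 * boost_clock_ghz / 1000.0

# Radeon RX 6800: 3840 stream processors, 2105 MHz boost clock
rx6800_tflops = peak_fp32_tflops(3840, 2.105)  # ~16.17 TFLOPS
```

Measured FLOPS will typically land somewhat below this ceiling, since real workloads cannot keep every ALU issuing an FMA on every cycle.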

For obvious reasons, 3D rendering makes up the largest portion of the tests. This is the case, for example, in the practical Blender (2.91) tests. In addition to Cycles, we will also test the cards in the Eevee and Radeon ProRender renderers (let AMD have a related test too, as most renderers are optimized for Nvidia cards with the proprietary CUDA and OptiX frameworks). A V-Ray add-on would of course also be interesting, but at the moment the editorial office cannot afford it; we may manage to get a press license in time, though, we will see. We want to expand the application tests in the future, definitely with some advanced AI testing (we have not come up with a reasonable way yet), including noise reduction (there are already some ideas, but we have not incorporated them due to time constraints).

Graphics cards can also be tested well with photo editing. There is already a large number of filters with GPU acceleration support, but what matters is the possibility of convenient repeated measurements. Blur in Photoshop and in the cheaper Affinity Photo allows this quite well. We run it on a large photo with a resolution of 62 Mpx, to which we apply, via Macro Recorder, a script that rapidly steps the blur radius up (250 px) and back down (0 px) while we record the average fps. In Lightroom, there is the notable Enhance Details function for uncompressed raw photos, which we apply in a batch to a 1 GB archive. All of these tasks can be accelerated by both GeForce and Radeon cards.

From another angle, there are the password-cracking tests in Hashcat with a selection of AES, MD5, NTLMv2, SHA1, SHA2-256/512 and WPA-EAPOL-PBKDF2 algorithms. Finally, in the OBS and XSplit broadcasting applications, we measure how much game performance drops while recording. This work is no longer handled by the shaders but by the dedicated encoders (AMD VCE and Nvidia NVENC). These tests show how much spare performance each card has for typical online streaming.
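The streaming overhead measured in OBS/XSplit reduces to a simple ratio of the two fps readings. A minimal sketch of that calculation (our naming, not from any of the tools mentioned):

```python
def streaming_overhead_pct(fps_idle, fps_recording):
    """Share of game performance lost while the encoder is recording.

    fps_idle:      average fps with capture off
    fps_recording: average fps with OBS/XSplit capture running
    Returns the drop as a percentage of the idle frame rate.
    """
    return 100.0 * (1.0 - fps_recording / fps_idle)
```

A card that falls from 100 fps to 95 fps while recording thus loses about 5 % of its performance; the smaller the number, the more headroom the encoder leaves for streaming.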

There are, of course, more hardware acceleration options, typically for video editing and conversion. However, this is purely in the hands of the encoders, which are always the same within one generation of cards from one manufacturer, so there is no point in testing them on every graphics card. They do differ across generations, though, and tests of this type will appear sooner or later. What remains is fine-tuning the metric so that outputs always match in bitrate and pixel count. This is important for objective comparisons, because one company’s/card’s encoder may be faster in a particular profile at the same settings, but at the expense of lower quality than another encoder (or maybe not – it is just an example).

