Nvidia surprised many by changing the boost limits on the new GeForce RTX 50 series, removing the option to set a temperature limit during overclocking. Now, only the power limit is adjustable, and specs list only the GPU’s maximum temperature. We’ll look at how this affects GPU Boost compared to the RTX 30 series, what happens to clock speeds and performance when limits are hit, and how to tell if your GeForce is overheating.
With the new GeForce RTX 50 series, Nvidia made a change to the GPU Boost automatic overclocking system that it didn’t elaborate on—the adjustable temperature limit has been removed.
Older GeForce cards had two adjustable target limits for automatic GPU Boost clock speed management—a temperature limit and a power limit. Based on these, the automatic control adjusted the GPU clock and cooler fan speeds. Both could be set within the ranges stored in the card’s BIOS using overclocking software.
The easiest way to check their default, minimum, and maximum values is with GPU-Z. On the Advanced tab under General, you’ll find, for example, that the GeForce RTX 3080 Founders Edition has a minimum temperature limit of 65 °C, a default of 83 °C, and a maximum of 90 °C. For power, under the Power Limit section, you’ll see an adjustable range from -69 % to +16 %.


You can find the absolute power limit values by switching to the information stored in the NVIDIA BIOS. For the RTX 3080 Founders Edition, the minimum is 100 W, the standard value is 320 W, and the maximum is 370 W.
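The percentage slider in GPU-Z and the absolute values in the BIOS describe the same range. A quick sketch of the conversion, using the RTX 3080 Founders Edition figures above (the function name is ours, not part of any tool):

```python
# Sketch: mapping GPU-Z's relative power-limit slider to absolute watts.
# The default board power (320 W on the RTX 3080 FE) is the 100 % reference;
# the -69 % to +16 % slider range then maps onto absolute values.

DEFAULT_POWER_W = 320   # RTX 3080 FE default power limit
SLIDER_MIN_PCT = -69    # minimum offset reported by GPU-Z
SLIDER_MAX_PCT = 16     # maximum offset reported by GPU-Z

def power_limit_watts(offset_pct: float, default_w: float = DEFAULT_POWER_W) -> float:
    """Absolute power limit for a given percentage offset."""
    return default_w * (1 + offset_pct / 100)

print(power_limit_watts(SLIDER_MIN_PCT))  # ~99 W, close to the 100 W BIOS minimum
print(power_limit_watts(SLIDER_MAX_PCT))  # ~371 W, close to the 370 W BIOS maximum
```

The small differences against the BIOS values (100 W and 370 W) come from rounding on GPU-Z's side.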
The real maximum is elsewhere
If you check the GeForce RTX 3080 and RTX 3080 Ti specifications on Nvidia’s website, you’ll see a different value listed for maximum temperature—93 °C.

Nvidia’s original intent was that the adjustable temperature limit would serve as a target value around which the GPU temperature could hover under sustained load. However, other components typically treat temperature limits differently: as a hard ceiling. If a component reaches that limit, it signals that the cooling is insufficient and performance must be reduced (thermal throttling).
This is why both manufacturers and users often treated the adjustable temperature limit as a value the card should never hit, because it meant throttling would start. Consequently, the system would often keep fan speeds unnecessarily high to stay far away from that limit—making the card louder than needed.
The real thermal limit, where the card begins to aggressively cut performance and clock speeds—the point at which thermal throttling truly kicks in—is actually the maximum temperature stated in the specs, such as 93 °C for the RTX 3080 and RTX 3080 Ti.
In the next sections, we’ll show how the card’s behavior changes when the GPU temperature hits the adjustable limit versus the maximum spec temperature.
Only one temperature limit with RTX 50
With the latest GeForce generation, Nvidia dropped the adjustable temperature limit. Whether this was because Nvidia considered the separate temperature target unnecessary and confusing, or whether it relates to the removal of hotspot temperature reporting on RTX 50, is unclear. Most manufacturers tuned their cooling so the GPU ran between 70–75 °C under load anyway, and it was rare to see a custom card reaching the previous 83 °C target.
On the GeForce RTX 50 series, you can no longer adjust the temperature limit—only the power limit remains. The maximum GPU temperature is still defined in the specs but set slightly lower: 90 °C for the RTX 5090 and 88 °C for the RTX 5080.
GPU Boost also considers other limits, which together determine the active clock speed and fan RPM. You can check which limit is currently in effect using monitoring tools.
The simplest way is in GPU-Z: switch to the Sensors tab and scroll down to PerfCap Reasons. This field shows which limit is active, displayed as a color in the graph. Hovering your mouse over the history graph reveals which limit was active at that moment. Blue and orange indicate VRel and VOp, green means Pwr, and purple is Thrm.

By checking the “log to file” option, you can save monitoring data to a file, where the active limits are recorded as bit values representing individual constraints.
- 1 = NV_GPU_PERF_POLICY_ID_SW_POWER — Pwr (Power): performance is limited by the total power limit.
- 2 = NV_GPU_PERF_POLICY_ID_SW_THERMAL — Thrm (Thermal): performance is limited by the temperature limit.
- 4 = NV_GPU_PERF_POLICY_ID_SW_RELIABILITY — VRel (Reliability Voltage): performance is limited by the voltage required for reliable (safe) operation.
- 8 = NV_GPU_PERF_POLICY_ID_SW_OPERATING — VOp (Operating Voltage): performance is limited by the maximum operating voltage.
- 16 = NV_GPU_PERF_POLICY_ID_SW_UTILIZATION — Utilization: performance is limited by current GPU usage (e.g., low GPU load, or the GPU being bottlenecked by other components).
If multiple limits are active at the same time, the log will show their sum. For example, a value of 5 means that both the Power limit and the Reliability Voltage limit were active (1 + 4).
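Decoding such summed values by hand gets tedious in longer logs; a minimal sketch of a decoder, using the flag names and bit values from the list above (the function and dictionary names are ours):

```python
# Sketch: decoding summed PerfCap values from a GPU-Z log file.
# Bit values and flag names follow the NV_GPU_PERF_POLICY list above.

PERF_CAP_FLAGS = {
    1: "Pwr (power limit)",
    2: "Thrm (thermal limit)",
    4: "VRel (reliability voltage)",
    8: "VOp (max operating voltage)",
    16: "Utilization (low GPU load)",
}

def decode_perfcap(value: int) -> list[str]:
    """Return the names of all limits encoded in a summed PerfCap value."""
    return [name for bit, name in PERF_CAP_FLAGS.items() if value & bit]

# The example from the text: 5 = 1 + 4
print(decode_perfcap(5))  # ['Pwr (power limit)', 'VRel (reliability voltage)']
```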
In MSI Afterburner logs, the individual limits are recorded separately: inactive limits have a value of 0, active ones a value of 1. On RTX 30, the following flags are available:
- Temp limit
- Power limit
- Voltage limit
- No load limit
In HWiNFO, you can find these flags both in the sensor readout and in the log. They’re under the GPU section, in the subsection for performance limits. In the log, they are recorded as Yes or No depending on whether the limit was active.
NVIDIA GeForce RTX 3080 Ti
- GPU Performance Limiters (avg) [Yes/No]
- Performance Limit – Power [Yes/No]
- Performance Limit – Thermal [Yes/No]
- Performance Limit – Reliability Voltage [Yes/No]
- Performance Limit – Max Operating Voltage [Yes/No]
- Performance Limit – Utilization [Yes/No]
- Performance Limit – SLI GPUBoost Sync [Yes/No]
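If you want a quick overview of which limit dominated a logging session, the Yes/No columns are easy to tally. A sketch, assuming a CSV-style HWiNFO log; the column names below are our assumption based on the sensor labels above and may need adjusting to match your actual log header:

```python
# Sketch: counting how often each performance-limit flag was "Yes"
# in a HWiNFO CSV log. Column names are assumptions; check your header.
import csv
from collections import Counter

LIMIT_COLUMNS = [
    "Performance Limit - Power",
    "Performance Limit - Thermal",
    "Performance Limit - Reliability Voltage",
    "Performance Limit - Max Operating Voltage",
    "Performance Limit - Utilization",
]

def tally_limits(path: str) -> Counter:
    """Count the samples in which each limit flag was logged as 'Yes'."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            for col in LIMIT_COLUMNS:
                if row.get(col, "").strip() == "Yes":
                    counts[col] += 1
    return counts
```

The result shows, for example, whether the card spent more samples power-limited or thermally limited over the run.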

In GPU testing, I usually use HWiNFO’s reporting, where the individual flags are displayed as a point graph. For example, in the graph below you can see that the card was initially not fully utilized (purple), then briefly hit the maximum voltage limit when the load started (brown dot), after which the power limit (green) restricted the clocks, and once the card heated up, the power limit alternated with the thermal limit (red).

Today, we’ll focus primarily on the thermal limit and its relationship with the power limit.
Card manufacturers have taken their own approach to these temperature and power limits. They treat the power limit as a real target value: the cards raise GPU clocks until they hit the power limit or the voltage limit. Nobody complains if boost performance is capped by the power limit; in fact, this happens quite often, even on factory-overclocked models running at reference power limits. Worse, on cheaper models you'll sometimes find that the manufacturer won't let you raise the power limit above the reference level at all, despite the card being marketed with an OC badge.
Temperature limits were a different story. Ever since components started using thermal limits, hitting that limit has been seen as a sign of insufficient cooling. So, GPU makers always left a comfortable gap from those thermal limits. Even if that wasn’t strictly necessary, most users dislike seeing temperatures around 80 °C on any component. This was true for GPUs as well, even though nothing catastrophic happened if a GeForce reached its adjustable thermal limit.
In the following sections, we’ll look at how three different cards behave inside our test setup under different conditions and configurations, focusing on three Nvidia-made graphics cards:
- GeForce RTX 3080 Founders Edition
- GeForce RTX 3080 Ti Founders Edition
- GeForce RTX 5080 Founders Edition
I skipped the RTX 4080 because there wouldn’t be much to see—its oversized cooler keeps it far from hitting thermal limits. I added results from the RTX 3080 Ti because its power limit is set 30 W higher than the RTX 3080, which means it actually hits the thermal limit under load, unlike the 3080. This is even more noticeable when the power limit is increased to 400 W. That makes it easier to observe how regulation behaves when a card normally runs into the power limit.