

Let's start with the first part: is 80C really too hot for a GPU? According to manufacturer specs from AMD and Nvidia, the answer is generally no. In the past, we've seen GPUs rated to run as hot as 92C. Anecdotally, I've had a few graphics cards that allowed the GPU to hit 90C or more. Those worked fine when they were new, but six months or a year later? Not so much. The problem is that graphics cards need to strike a balance between performance, temperature, and noise. Clock a GPU faster and performance will improve, but temperatures will also increase.

To keep those temperatures down, the fan speed can be cranked up, but some graphics cards can get very loud if the fans run at higher RPMs. Graphics card manufacturers do their best to deliver an 'ideal' experience, but there's no single solution that will please everyone: some people prefer silence, others efficiency, and others performance. Thankfully, it's possible to customize your card so that it runs the way you prefer, rather than the way the manufacturer thinks is best. Voltage and clockspeed both determine power use and how much heat needs to be dissipated. However, power use increases linearly with clockspeed but with the square of the voltage, so a 10 percent drop in voltage will be much more beneficial than a 10 percent drop in clockspeed. With Afterburner, you'll need to check the option to unlock voltage adjustments and restart the utility. After that, most GPUs can be safely tweaked up or down about 0.1V, maybe 0.2V, but if you're trying to reduce noise and temperatures, undervolting is the way to go. AMD's Vega cards are particularly good candidates for undervolting in my experience.
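To see why undervolting beats underclocking for efficiency, here's a quick sketch of the scaling rule described above. It uses the simplified dynamic-power model (power proportional to clock times voltage squared); real cards also have static power draw and other factors, so treat the numbers as illustrative, not as measurements from any particular GPU.

```python
def relative_power(clock_scale: float, voltage_scale: float) -> float:
    """Relative dynamic power for scaled clock and voltage.

    Simplified model: P is proportional to f * V^2, so scaling clock by
    clock_scale and voltage by voltage_scale scales power by
    clock_scale * voltage_scale**2.
    """
    return clock_scale * voltage_scale ** 2

# A 10 percent voltage drop cuts power to about 81 percent of stock,
# while a 10 percent clock drop only cuts it to 90 percent.
print(round(relative_power(1.0, 0.9), 2))  # voltage -10%
print(round(relative_power(0.9, 1.0), 2))  # clock -10%
```

In other words, under this model a 10 percent undervolt saves nearly twice the power of a 10 percent underclock, and it does so without giving up any performance.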
