xPU Thermals. What's burning?

After a small re-ignition of a spark (or more like a nuke going off..) for my interest in hardware and tinkering – especially the building part, basically expensive LEGO all over again – I remembered that when I once upon a time started, actual heat output wasn’t really an issue – things didn’t even need a heatsink… It was coooool.

Now, when I for the first time ever own _three_ too many AMD Ryzen systems, and have picked up overclocking again (it started with my i5 system from way back – actually, back with my Prescott P4, but never mind – and non-stock coolers before that, of course..), I realized I had never really given thermals that much thought: just bump some settings and let it roll.
I’ve even been using stock cooling solutions primarily (remember kids, in the server farm, in the way-back machine – stock coolers were cool, and for an Intel PII/Celeron in the fancy Slot 1 config, there were not that many options without a hack-saw). Then came the turn – aftermarket cooling solutions for our rigs, with our biiig 80mm chassis fans.. we pushed them hard – dust filters? Tsk!

The place started burning up around the same time as (to use a now overused term: VIRAL) a video clip – or an actual link to Tom’s Hardware, where they tested how the hardware behaved without a cooler – made the rounds. The AMD CPUs turned into ovens (someone has the clip up on YouTube). Fun times!

So, onwards to graphics.. NVIDIA and ATI were already in the ring boxing – well, that’s a separate story I can’t really remember. Graphics cards were expensive back then too (right now, at the time of writing, it’s just silly..). But quite a lot more oomph was being crammed in, and the hype-plane was flying.

So, we were starting to produce more and more heat, and to be honest – airflow was a concept handled more by putting a big desk fan next to your rig, or picking parts from household AC equipment.. Around this time (of the Slot 1 Intel setups), I built my first fan-less rig – as my second server – living under a bed 24/7.. It lived on long – but damn, it was a hard thing to pull off even back then.

Now, however, we have auto-adjusting CPUs and GPUs – throttling down clock speeds when we hit a thermal limit. That works, in a fashion. But we also have power saving modes that clock down from the get-go when you don’t demand that much – we are no longer locked to a specific speed, and we might get a boost on a few cores (yes, this I love), because even though each core might not dissipate that much heat, once you cram a few of them together – we have a toasty situation.
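
If you want to actually watch that dance, here’s a minimal sketch for peeking at per-core clocks and temperatures on a Linux box – assuming the usual cpufreq and thermal sysfs layout, which varies by kernel and platform, so treat the paths as illustrative:

```python
#!/usr/bin/env python3
"""Watch per-core clocks and thermal zones on Linux via sysfs.

Minimal sketch - assumes the standard cpufreq/thermal sysfs layout,
which varies by kernel and platform.
"""
import glob
import time

def core_clocks_mhz():
    """Current frequency per core (cpufreq reports kHz)."""
    clocks = {}
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
        core = path.split("/")[5]          # e.g. "cpu0"
        with open(path) as f:
            clocks[core] = int(f.read()) // 1000
    return clocks

def zone_temps_c():
    """Temperature per thermal zone (sysfs reports milli-degrees C)."""
    temps = {}
    for path in sorted(glob.glob("/sys/class/thermal/thermal_zone*/temp")):
        zone = path.split("/")[4]          # e.g. "thermal_zone0"
        with open(path) as f:
            temps[zone] = int(f.read()) / 1000
    return temps

if __name__ == "__main__":
    # Kick off a load in another terminal and watch cores boost,
    # heat up, and eventually get reined in.
    while True:
        print(core_clocks_mhz(), zone_temps_c())
        time.sleep(1)
```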

Our software handles this, either on its own or with some small help. The OS layer just kicks around and becomes the kid that keeps asking for money (or clock speed) – but also the responsible adult that says “that’s it, thanks for the loan”.
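
On Linux, that responsible adult has a name: the cpufreq governor. A quick way to see which one is handing out the allowance on each core (again assuming the standard sysfs layout):

```python
#!/usr/bin/env python3
"""Print the cpufreq governor in charge of each core - the OS-side
policy that decides when to hand out clock speed and when to take
it back."""
import glob

for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
    with open(path) as f:
        print(path.split("/")[5], f.read().strip())   # e.g. "cpu0 schedutil"
```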

The application layer, however.. not that much thinking. It’s like the stuck-up teenager that just does whatever it wants. Well, usually per the instructions of its code and functions. But some applications do concern themselves with whether you’re on battery or not – so they do think a tiny bit.

But it leaves a latency window. For those of us who remember an overall slower way of doing things – at least I get the feeling that I have to wait until everything fires off and performs what I asked, after the whole chain of clock ramp-ups has run its course.
(Yes, I have lost the rant-concept right now..).

So, what is really burning? Seems nothing is – we are clocking down, calling things turbo boost, and the advertised clock values are often just the baseline where the hardware keeps the thermal/performance balance best – even memory does this.

How much impact do these power saving plans have? Do they save us a few bucks on a workstation, or have the laptop segment’s features and trickery simply been ported over?
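
For a rough sense of scale – with numbers that are pure assumptions on my part, not measurements – the napkin math looks something like this:

```python
# Back-of-envelope: what aggressive power saving might be worth on a
# 24/7 workstation. All numbers below are assumptions, not measurements.
idle_savings_w = 40        # assumed drop in idle draw (W) from power saving
hours_per_year = 24 * 365
price_per_kwh = 0.20       # assumed electricity price ($/kWh)

kwh_saved = idle_savings_w * hours_per_year / 1000
print(f"{kwh_saved:.0f} kWh/year ≈ ${kwh_saved * price_per_kwh:.0f}/year")
# -> 350 kWh/year ≈ $70/year
```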

I don’t know right now. Because this rant is over for now.
