Here’s another advantage to delivering DC to the back of the computer: it lets us eliminate overcapacity.
A couple of years ago, I moved a client’s IT estate from its in-house server rooms to a colo operator. The in-house server rooms were not separately metered, and because the client ran large call centers the overall electricity consumption was high, so there was no way of establishing what the servers themselves were drawing. The colo operator, though, needed to know how much power to reserve for the base load. In the end, I got hold of some spec sheets for “typical servers” in their estate and made an educated guess of 2kW per rack. I always specify that power consumption be monitored at the rack level, so once we’d moved them in, I added up the numbers. The core switches, which came with 3kW PSUs, were pulling 1.2kW for the entire rack; the single biggest consumer was just over 4kW. The average was 1.3kW – I’d oversized by about 35%.
When an engineer specifies a PSU for a computer, he does so based on maxima: the maximum number of processors, disks, RAM, etc. He then allows a safety margin. PSUs are manufactured in standard sizes, so he chooses the next standard size up. (HP’s tool is here.) He also assumes that the computer will run flat-out. Put all this together, and a computer that ticks over on 200W for most of its life will have a 500W PSU. Yet data centers are obliged to design a supply that can deliver this peak theoretical load. What we end up designing for is the sum of the maxima, rounded up.
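To make the rounding-up step concrete, here’s a minimal sketch of that sizing logic in Python. The list of “standard” PSU sizes and the 25% safety margin are illustrative assumptions, not any vendor’s actual rules.

```python
# Illustrative sketch of the PSU-sizing logic described above.
# The standard sizes and the 25% safety margin are assumptions for
# illustration only, not any vendor's actual sizing rules.

STANDARD_PSU_SIZES_W = [300, 350, 500, 750, 1000, 1200, 1600]

def size_psu(max_config_draw_w: float, safety_margin: float = 0.25) -> int:
    """Pick the smallest standard PSU that covers the theoretical
    maximum draw of a fully-populated configuration plus a margin."""
    required = max_config_draw_w * (1 + safety_margin)
    for size in STANDARD_PSU_SIZES_W:
        if size >= required:
            return size
    raise ValueError("no standard PSU is large enough")

# A server whose fully-populated maximum is ~330W gets a 500W PSU,
# even though it ticks over at ~200W for most of its life.
print(size_psu(330))  # -> 500
```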
If we deliver DC direct to the computers, we can eliminate the overcapacity due to (a) the rounding up and (b) using a sum when an average would suffice.
These can add up to big numbers. A server that requires 330W ends up with a 500W supply, and the data center provider ends up sizing for the 500W, not the 330W. Add that up across 5,000 servers, and we’re overdesigning by 850kW just because we’re rounding up.
The difference between what the estate actually draws and the sum of the individual peaks is harder to put a number on, but the idea is that not every computer runs flat-out at the same time. At any given moment, some computers will be running flat-out, consuming all 330W; many will be rumbling along at, say, 200W; and a few will be fast asleep at basically 0W. If I assume a (slightly skewed) normal distribution averaging around 200W per server, we end up with the difference between 5,000*330W and 5,000*200W, which comes to 650kW.
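To illustrate the “not everyone runs flat-out at once” point, here’s a rough Monte-Carlo sketch. The distribution parameters (a clamped Gaussian with a mean of 200W) are assumptions chosen to match the illustrative figures above, not measured data.

```python
# Rough Monte-Carlo sketch of demand diversity across the estate.
# The clamped Gaussian (mean 200W, sigma 60W) is an assumption chosen
# to match the illustrative figures above, not measured data.
import random

SERVERS = 5_000
PEAK_W = 330  # theoretical maximum per server

def sample_draw() -> float:
    # Most servers rumble along near 200W, a few run flat-out,
    # a few are nearly idle; clamp to the physical range [0, PEAK_W].
    return max(0.0, min(PEAK_W, random.gauss(200, 60)))

loads = [sample_draw() for _ in range(SERVERS)]

sum_of_peaks = SERVERS * PEAK_W   # design for everyone flat-out: 1,650 kW
actual_total = sum(loads)         # roughly 5,000 * 200W, i.e. ~1,000 kW

print(f"sum of peaks : {sum_of_peaks / 1e3:,.0f} kW")
print(f"actual total : {actual_total / 1e3:,.0f} kW")
print(f"headroom     : {(sum_of_peaks - actual_total) / 1e3:,.0f} kW")  # ~650 kW
```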
Add these two numbers together and set them against the 5,000*500W we started with, and we have the difference between 2.5MW and 1MW. Yes, that’s a 60% reduction in the power we design for. And it’s not only the electricity supply: the cooling ends up over-sized too. All of this has a carbon footprint: we’re buying batteries, inverters, flywheels, gen sets, cooling – the whole lot – for capacity that will never be used.
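Putting the same illustrative numbers in one place (5,000 servers, 500W PSUs, 330W peaks, 200W average), the arithmetic looks like this:

```python
# The back-of-the-envelope arithmetic from the paragraphs above,
# using the same illustrative figures throughout.
servers        = 5_000
psu_rating_w   = 500   # what the data center is asked to provision per server
server_peak_w  = 330   # theoretical maximum draw per server
average_draw_w = 200   # assumed long-run average per server

design_load_kw     = servers * psu_rating_w / 1e3                       # 2,500 kW
rounding_waste_kw  = servers * (psu_rating_w - server_peak_w) / 1e3     # 850 kW
diversity_waste_kw = servers * (server_peak_w - average_draw_w) / 1e3   # 650 kW
lean_design_kw     = design_load_kw - rounding_waste_kw - diversity_waste_kw  # 1,000 kW

print(f"traditional design load : {design_load_kw:,.0f} kW")
print(f"leaner DC-fed design    : {lean_design_kw:,.0f} kW")
print(f"reduction               : {1 - lean_design_kw / design_load_kw:.0%}")  # 60%
```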
Of course, in the real world we’d have to allow for various other factors, and we’d need actual data. But the point remains: by consolidating thousands of individual PSUs into a couple of industrial-scale units and distributing DC, we can come up with a much leaner design.