Working Notes of Distributed Computing Group
Comparison of traditional and GPU-based HPC solutions from a Power/Performance point of view
The issue of power consumption in modern HPC data centres is rapidly rising to the top of the agenda. One of the main reasons is the rising price of electricity: the cost of running a data centre over the typical three-year hardware life cycle is fast becoming comparable to the cost of the data centre itself, prompting more frequent hardware refresh upgrades. Another reason is the overall power budget available to a data centre; obviously, the power that can be supplied to a data centre is not unlimited.

It is worth noting that the performance of supercomputers in the Top500 list has grown faster than Moore's law, simply because the data centres have consumed more and more power. The reported power consumption of the two fastest supercomputers in the world, Roadrunner and Jaguar, currently numbers one and two in the Top500, is an enormous 2.5 MW and 7.0 MW respectively. Clearly, the rate at which the electrical power consumed by data centres has grown in recent years is unsustainable and will have to be capped. It is believed that the practical limit for a data centre is on the order of 10 MW, and the top data centres have already reached this limit. Therefore new, alternative technologies need to be sought in order to avoid a slowdown in the growth of computer performance.
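
To make this point concrete, the following back-of-the-envelope sketch (in Python) estimates how much extra power is implied when aggregate performance doubles faster than energy efficiency improves. The doubling periods used here are illustrative assumptions, not measured figures from this document.

# Back-of-the-envelope estimate: if aggregate Top500 performance doubles faster
# than performance per watt, the difference must come from drawing more power.
# The doubling periods below are illustrative assumptions only.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Growth over `years` for a quantity with a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

years = 3.0                 # a typical hardware life cycle, as discussed above
perf_doubling = 1.1         # assumed doubling time of aggregate Top500 performance, years
efficiency_doubling = 2.0   # assumed doubling time of performance per watt (Moore's-law-like), years

performance_growth = growth_factor(years, perf_doubling)
efficiency_growth = growth_factor(years, efficiency_doubling)
implied_power_growth = performance_growth / efficiency_growth

print(f"over {years:.0f} years: performance x{performance_growth:.1f}, "
      f"efficiency x{efficiency_growth:.1f}, so power must grow x{implied_power_growth:.1f}")
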
Having built the first supercomputer capable of 1 Petaflops, manufacturers and researchers are already thinking about an Exaflops system, and electrical power is firmly on the agenda. The most obvious and straightforward way of reducing power is to improve the Power Usage Effectiveness (PUE) of the data centre, which is the ratio of total facility power to the power of the computer equipment. This is mostly a matter of highly efficient cooling methods and power supplies. Improving the PUE from an unoptimised 1.9 to a state-of-the-art 1.1-1.2 can nearly halve the power consumption.
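
As a minimal sketch of the PUE arithmetic, assume a fixed IT (computer equipment) load; the 1 MW load below is an illustrative assumption, while the PUE values correspond to the unoptimised and state-of-the-art figures quoted above. For the same IT load, lowering the PUE reduces the total facility power proportionally.

# PUE arithmetic for a fixed IT load. The 1 MW load is an illustrative assumption;
# the PUE values 1.9 and 1.1 are the unoptimised and state-of-the-art figures
# quoted in the text.

def facility_power(it_power_mw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE."""
    return it_power_mw * pue

it_load_mw = 1.0        # assumed IT equipment power, MW
pue_unoptimised = 1.9
pue_optimised = 1.1

before = facility_power(it_load_mw, pue_unoptimised)   # 1.9 MW total
after = facility_power(it_load_mw, pue_optimised)      # 1.1 MW total
saving = (before - after) / before

print(f"facility power: {before:.2f} MW -> {after:.2f} MW "
      f"({saving:.0%} reduction for the same IT load)")
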
In order to reduce the power c