US Department of Energy to cool exascale HPC with local weather
San Francisco Bay's cool air could meet 95% of data centre cooling needs
By Patrick Thibodeau | Computerworld US | Published: 13:33, 08 February 2012
In a picturesque spot overlooking San Francisco Bay, the US Department of Energy's Berkeley Lab has begun building a new computing centre that will one day house exascale systems.
The DOE doesn't know what an exascale system will look like. The types of chips, the storage, the networking and the programming methods that will go into these systems are all works in progress. The DOE is expected to deliver a report by the end of this week outlining a plan for reaching exascale computing by 2019-2020, and its expected cost.
But what the DOE does have an idea about is how to cool these systems.
The Computational Research and Theory (CRT) Facility at Berkeley will use outside air cooling. It can rely on the Bay area's cool temperatures to meet its needs about 95% of the time, said Katherine Yelick, associate lab director for computing sciences at the lab. If computer makers raise the temperature standards of systems, "we can use outside cooling all year round," she said.
The 140,000-square-foot building will be nestled in a hillside with an expansive and unobstructed view of the Bay. It will allow Berkeley Lab to combine offices that are currently split between two sites. It will also be large enough to house two supercomputers, including exascale-sized systems. "We think we can actually house two exaflop systems in it," said Yelick. The building will be completed in 2014.
Supercomputers typically use liquid cooling, but this building will also use evaporative cooling. Under this process, hot water is sent up into a cooling tower, where evaporation helps to cool it. The lowest level of the Berkeley building is a mechanical area that will be covered by a grating used to pull in outside air, said Yelick.
An exascale system will be able to reach 1 quintillion (or 1 million trillion) floating point operations per second, roughly 1,000 times more powerful than a petaflop system. The government has already told vendors that an exascale system won't be able to use more than 20 megawatts of power. To put that in perspective, a 20-petaflop system today is expected to use somewhere in the range of 7 MW. There are large commercial data centres, with multiple tenants, now being built to support 100 MW and more.
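As a back-of-the-envelope check on those figures, the 20 MW ceiling implies a large jump in energy efficiency over today's machines. A minimal sketch, using only the numbers quoted above:

```python
# Efficiency comparison using the figures in the article:
# a 1-exaflop system capped at 20 MW versus a 20-petaflop
# system drawing roughly 7 MW. Illustrative arithmetic only.

EXAFLOP = 1e18   # floating point operations per second
PETAFLOP = 1e15

exa_eff = (1 * EXAFLOP) / 20e6       # flops per watt at the 20 MW cap
peta_eff = (20 * PETAFLOP) / 7e6     # flops per watt at ~7 MW

print(f"Exascale target:  {exa_eff / 1e9:.1f} GFLOPS/W")
print(f"20 PF system:     {peta_eff / 1e9:.2f} GFLOPS/W")
print(f"Efficiency gain needed: {exa_eff / peta_eff:.1f}x")
```

In other words, a 50x jump in raw performance is allowed only about a 3x increase in power, which is why cooling and energy efficiency dominate exascale planning.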
The idea of using climate, or what is often called free cooling, is a major trend in data centre design.
Google has a data centre in Hamina, Finland, using Baltic Sea water to cool systems instead of chillers. Last October, Facebook announced that it had begun construction of a data centre in Lulea, Sweden, near the Arctic Circle, to take advantage of the cool air. HP built a facility that relies on cold sea air just off the North Sea coast in the UK.
One project that is carbon free is a data centre built by Verne Global in Keflavik, Iceland. The power supply comes from a combination of hydro and geothermal sources.
The cool temperatures in Keflavik allow the data centre to use outside air for cooling. The company has two modes of operation: the first is direct free cooling, in which air is taken directly from outside and put into the data centre. The company can "remix" the returning hot air to maintain "tight temperature controls," said Tate Cantrell, the chief technology officer. The air is also filtered.
The data centre also has the ability to switch to a recirculation mode where no outside air goes into the data centre. Instead, a heat exchanger with a cold coil and a hot coil is used. The cold coil cools the air in the data centre stream, and the hot coil is cooled directly from outside, Cantrell said.
The Keflavik data centre will use the heat exchanger in two situations. The first is to conserve moisture in the air when the dew point is low, meaning there is little water in the airstream; below a certain humidity level, there is a risk of static building up in the environment, so the data centre also has humidifiers. The other reason for switching to the heat exchanger is to protect the filters in the event that a strong storm kicks up a lot of dust, said Cantrell.
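The two-mode switching logic Cantrell describes can be sketched as a simple decision rule. This is a hypothetical illustration: the function name and the dew-point threshold are invented for the example, not Verne Global's actual control system.

```python
# Hypothetical sketch of the two cooling modes described above.
# Thresholds are illustrative, not Verne Global's real values.

def select_cooling_mode(outside_dew_point_c: float,
                        dust_alert: bool) -> str:
    """Choose between direct free cooling and recirculation.

    Recirculate (closed loop via the heat exchanger) when:
      - the outside dew point is low, to conserve indoor moisture
        (very dry air risks static buildup), or
      - a dust storm would clog the intake filters.
    Otherwise take filtered outside air directly into the data centre.
    """
    DEW_POINT_FLOOR_C = 2.0  # illustrative threshold

    if dust_alert or outside_dew_point_c < DEW_POINT_FLOOR_C:
        return "recirculation"
    return "direct_free_cooling"

print(select_cooling_mode(outside_dew_point_c=8.0, dust_alert=False))
# direct_free_cooling
print(select_cooling_mode(outside_dew_point_c=-3.0, dust_alert=False))
# recirculation
```

In direct mode the returning hot air can be remixed with the incoming stream for temperature control; in recirculation mode no outside air enters at all, and heat leaves only through the exchanger's outdoor coil.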
The groundbreaking of the Berkeley facility last week was attended by Steve Chu, the US energy secretary and a former Berkeley Lab director. He said the computational facility "is very representative of what we have that's best in the United States in research, in innovation." Computation will be "a key element in helping further the innovation and the industrial competitiveness of the United States," he said.