Cray's next supercomputer, the XC30, has a speedy new interconnect
Cray's XC line of supercomputers will rely on Intel's Aries high-speed interconnect
By Joab Jackson | Published: 12:00, 08 November 2012
For its next generation of supercomputers, Cray has focused on radically improving the I/O (input/output) of individual nodes. The new XC30 supercomputer will feature a new interconnect, called Aries, and a new routing topology that together promise to dramatically improve internal bandwidth.
Boosting I/O is important because "the faster processors get, the more data must go in and out of processors to keep them busy," said Barry Bolding, Cray vice president of corporate marketing. "The rate that data needs to move in and out of the node has gone up, and the network must be able to sustain that."
Due early next year, Cray's XC30 will be the first in a line of XC computers to be released over the next five years. A focus for the new architecture, which has been in development for more than six years, has been improving global bandwidth, which Bolding described as "the ability of the machine, when all of the processors are talking to all the other processors, to sustain a heavy workload."
Thanks to the new interconnect, each node is capable of 120 million gets/puts per second, up from the 30 million to 40 million gets/puts per second of the previous generation of Cray systems, the XK line.
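Taking those figures at face value, the per-node improvement works out to roughly a three- to four-fold gain. A quick sketch of the arithmetic:

```python
# Per-node message rates cited in the article (millions of gets/puts per second)
xc30_rate = 120
xk_rate_low, xk_rate_high = 30, 40

# XC30 rate against the best and worst XK figures
speedup_low = xc30_rate / xk_rate_high   # 3.0
speedup_high = xc30_rate / xk_rate_low   # 4.0
print(f"XC30 per-node speedup: {speedup_low:.0f}x to {speedup_high:.0f}x")
```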
Each node in the new systems gets its own 48-port Aries interconnect. Aries is unique in that it performs the duties of both a network card and a router, providing each node direct access to every other node in the system, Bolding said.
Cray sold its interconnect hardware development programme, including much of the intellectual property behind Aries, to Intel in May for US$140 million. Aries relies on the PCI Express 3.0 (PCIe 3.0) bus, favoured by Intel.
The XC architecture also has a new network topology, called Dragonfly, that connects all the nodes. Dragonfly is considered a direct network topology, one that requires fewer optical links and no external top switches. Dragonfly was based on research conducted by a networking group at Stanford University, along with engineers at Cray.
In addition to the usual Defense Department and academic consumers of supercomputers, Cray is marketing the machines to oil companies, large equipment manufacturers and other large enterprise customers that need heavy-duty machines for research.
The XC30 starts at US$500,000. Next year, Cray will also offer a smaller XC model for business use, Bolding said.
Many of the technologies in the XC30, including Aries, originate in part from an award given to Cray by the Defense Advanced Research Projects Agency's (DARPA's) High Productivity Computing Systems (HPCS) programme. "The program was designed to produce the next generation of productive supercomputers," Bolding said.
Each XC30 cabinet will provide 66 teraflops (66 trillion floating-point operations per second), and cabinets can be aggregated into a system offering more than 100 PFLOPS (petaFLOPS, or thousand trillion floating-point operations per second). Each cabinet will consist of three chassis, with each chassis holding 16 compute blades and each blade comprising four nodes. The system can be built with as many as 92,544 nodes, each with up to 128GB of DDR3 memory.
Cray will initially use Intel Xeon E5-2600 processors for the system. Eventually, XC systems will also be able to use Intel Xeon Phi co-processors and Nvidia Tesla GPUs, which are based on the next-generation Kepler architecture.
The XC supercomputers also feature a novel form of cooling, one that uses both air and water, and that does not require the system to be set up in a hot aisle/cold aisle configuration.
The systems will run the Cray Linux Environment, a software package that includes the Linux operating-system kernel, the Lustre file system, support for the Chapel parallel programming language, and SLURM (Simple Linux Utility for Resource Management).
Cray has already sold six XC30s to organisations including the Swiss National Supercomputing Centre, the Finnish IT Center for Science, the Department of Energy's National Energy Research Scientific Computing Center and the University of Stuttgart's High Performance Computing Center in Germany.