IBM BlueGene/L - Frost


IBM Blue Gene/L Supercomputer

Manufacturer: IBM
In use: March 15, 2005 – May 31, 2012
Purpose: Experimental and production use
Peak teraflops: 22.94
Processors: 8,192
Clock speed: 0.70 GHz
Memory: 4.19 TB
Electrical power consumption: 83.10 kW

NCAR's IBM BlueGene/L system was delivered on March 15, 2005. It went online on March 25 after passing five days of acceptance testing on the first try. It was named "Frost" because it ran cooler than most microprocessor-based, high-end systems. While the original plan was for it to be in operation for three years, the system ultimately served NCAR, CU-Boulder, and TeraGrid project users for more than seven years.

Initially comprising just one Blue Gene/L rack, Frost had 1,024 dual-processor compute nodes and 32 I/O nodes. Each processor ran at 0.7 GHz. While these processors were relatively slow compared to the Intel Pentium processors of the time, which ran at 3.2 GHz, they produced less heat and could be packed more tightly. Thus, the Frost system:

  • Took up just 3% of the floor space of NCAR's flagship computer at the time, Bluesky, an IBM p690.
  • Used 6% of the electricity required by Bluesky.
  • Delivered 69% of Bluesky's peak computational power.

With one cabinet containing 2,048 processors, Frost was far more compact than Bluesky, which had 50 cabinets that each contained 32 processors. By one key measure, Frost was also more powerful. Although Bluesky had a top performance of 8.3 teraflops, it achieved just 4.2 teraflops on the Linpack benchmark. Frost ran the Linpack benchmark at 4.6 teraflops.
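The Linpack figures above can be turned into efficiency ratios (sustained divided by peak) with a few lines of Python. Bluesky's peak and Linpack numbers come from the text; Frost's one-rack peak is derived here as one quarter of the 22.94-teraflop four-rack figure, which is an assumption rather than a number stated in the article.

```python
# Sustained-vs-peak (Linpack efficiency) for the two systems.
bluesky_peak_tf, bluesky_linpack_tf = 8.3, 4.2   # from the article
frost_peak_tf = 22.94 / 4                        # assumed: 1/4 of the 4-rack peak
frost_linpack_tf = 4.6                           # from the article

bluesky_eff = bluesky_linpack_tf / bluesky_peak_tf
frost_eff = frost_linpack_tf / frost_peak_tf

print(f"Bluesky efficiency: {bluesky_eff:.0%}")  # roughly 51%
print(f"Frost efficiency:   {frost_eff:.0%}")    # roughly 80%
```

On these numbers, Frost not only beat Bluesky's sustained Linpack result outright but did so while converting a far larger fraction of its theoretical peak into delivered performance.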

Frost appeared to be tilted because a triangular-shaped air duct, or plenum, was attached to it on either side. The plenum on the left directed cold air from below the floor through the machine. The plenum on the right directed hot exhaust air upward.

Because Frost was originally an experimental system and lacked a complete computational environment for users, it was not used for production computing at first. NCAR's Computational and Information Systems Laboratory (CISL) first used it for computer science research: running I/O performance tests, repartitioning the architecture into smaller blocks to accommodate a variety of users, setting up job queues, and writing tools to support the machine. CISL also collaborated with researchers at the University of Colorado, IBM, Argonne National Laboratory, Lawrence Livermore National Laboratory, and the San Diego Supercomputer Center to see how the Blue Gene/L architecture could best be used for atmospheric science. Joint projects included testing applications, debugging software, developing new system configurations, and evaluating job schedulers.

CISL joined the Blue Gene/L Consortium, an association of laboratories, universities, and industrial partners working to develop scientific and technical applications for Blue Gene/L. CISL made the machine available to select users in May 2005 for experimental purposes.

In July 2007, Frost entered production service as a TeraGrid resource, while continuing to serve NCAR-CU collaborations. In 2009, Frost was quadrupled in size with three additional Blue Gene/L racks from the San Diego Supercomputer Center. In this final configuration of 8,192 processors, Frost sustained 22 teraflops on the Linpack benchmark.
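The 22.94-teraflop peak listed in the specifications follows directly from the processor count and clock speed. The sketch below assumes 4 floating-point operations per cycle per processor (the Blue Gene/L PowerPC 440 cores carried a dual floating-point unit, each pipe capable of a fused multiply-add); that factor is an inference, not a figure from the article.

```python
# Back-of-envelope peak for Frost's final four-rack configuration.
processors = 8_192       # from the article
clock_ghz = 0.70         # from the article
flops_per_cycle = 4      # assumed: dual-pipe fused multiply-add per core

peak_teraflops = processors * clock_ghz * flops_per_cycle / 1_000
print(f"Peak: {peak_teraflops:.2f} teraflops")  # matches the 22.94 TF in the spec list
```

The 22 sustained Linpack teraflops quoted above thus represents roughly 96% of this theoretical peak, consistent with Blue Gene/L's reputation for high Linpack efficiency.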

Frost ended four years of TeraGrid service when the TeraGrid program ended on July 31, 2011. CISL was able to keep the system running to support a few NCAR and CU-Boulder collaborations, as well as to support the Asteroseismic Modeling Portal gateway. Notably, after its TeraGrid retirement, use of the six-year-old Frost increased, and it delivered more than 3 million core-hours per month on average to its small set of devoted users.

In total, Frost delivered more than 126 million core-hours to TeraGrid and non-TeraGrid users over the 58 months between July 2007 and May 2012.
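As a quick check on the totals quoted in the text, the lifetime figure implies a monthly average a little above 2 million core-hours, which makes the 3-million-plus monthly average of the post-TeraGrid period stand out.

```python
# Average monthly delivery over Frost's production life, from the article's totals.
total_core_hours = 126e6   # from the article
months = 58                # July 2007 through May 2012

avg_per_month_millions = total_core_hours / months / 1e6
print(f"Average: {avg_per_month_millions:.1f} million core-hours/month")  # ~2.2
```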
