Pleiades (supercomputer)

Pleiades is a petascale supercomputer housed at the NASA Advanced Supercomputing (NAS) facility at NASA's Ames Research Center located at Moffett Field near Mountain View, California. It is maintained by NASA and partners Hewlett Packard Enterprise (formerly Silicon Graphics International) and Intel.

As of November 2019 it is ranked the 32nd most powerful computer on the TOP500 list with a LINPACK rating of 5.95 petaflops (5.95 quadrillion floating point operations per second) and a peak performance of 7.09 petaflops from its most recent hardware upgrade. The system serves as NASA's largest supercomputing resource, supporting missions in aeronautics, human spaceflight, astrophysics, and Earth science.

History
Built in 2008 and named for the Pleiades open star cluster, the supercomputer debuted as the third most powerful supercomputer in the world at 487 teraflops. It originally contained 100 SGI Altix ICE 8200EX racks with 12,800 Intel Xeon quad-core E5472 Harpertown processors connected with more than 20 miles of InfiniBand double data rate (DDR) cabling.

With the addition of ten more racks of quad-core Xeon X5570 Nehalem processors in 2009, Pleiades ranked sixth on the November 2009 TOP500 list with 14,080 processors running at 544 teraflops. In January 2010, NAS scientists and engineers completed a "live integration" of another ICE 8200 rack, connecting the new rack's dual-port InfiniBand fabric via 44 fiber cables while the supercomputer was still running a full workload, saving 2 million hours of productivity that would otherwise have been lost.

Another expansion in 2010 added 32 SGI Altix ICE 8400 racks with six-core Intel Xeon X5670 Westmere processors, bringing the system to 18,432 processors (81,920 cores in 144 racks) with a theoretical peak of 973 teraflops and a LINPACK rating of 773 teraflops. NASA also emphasized keeping Pleiades energy efficient, increasing power efficiency with each expansion, so that by 2010 the system was three times more power-efficient than the original 2008 components, which were themselves the most power-efficient available at the time. Integrating the six-core Westmere nodes also required new quad data rate (QDR) and hybrid DDR/QDR InfiniBand cabling, creating the world's largest InfiniBand interconnect network, with more than 65 miles of cable.

After another 14 ICE 8400 racks containing Westmere processors were added in 2011, Pleiades ranked seventh on the TOP500 list in June of that year at a LINPACK rating of 1.09 petaflops, or 1.09 quadrillion floating point operations per second.

InfiniBand DDR and QDR fiber cables connect all of the nodes to one another, as well as to the mass storage systems at NAS and the hyperwall visualization system, forming a fabric of more than 65 miles of cable, the largest network of its kind in the world. Pleiades is built in a partial 11-D hypercube topology, in which each node connects to eleven others, with some nodes making a twelfth connection to form a 12-D hypercube.
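The hypercube wiring described above can be illustrated with a short sketch (illustrative only, not NAS software): in an n-dimensional hypercube, two nodes are directly linked exactly when their binary IDs differ in a single bit, so a node's neighbors are found by flipping each of the n bits in turn.

```python
def hypercube_neighbors(node_id: int, dims: int) -> list[int]:
    """Return the IDs of a node's neighbors in a dims-dimensional hypercube.

    In a hypercube topology, two nodes are linked exactly when their
    binary IDs differ in one bit, so each node has `dims` neighbors,
    one per flipped bit.
    """
    return [node_id ^ (1 << bit) for bit in range(dims)]

# Node 0 in an 11-D hypercube (as in Pleiades' partial 11-D fabric)
# connects to the 11 nodes whose IDs are powers of two.
print(hypercube_neighbors(0, 11))
```

This bit-flip structure is why an 11-D hypercube gives each node exactly eleven links, and why adding one more connection per node ("some making a twelfth connection") extends the topology toward a 12-D hypercube.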

In 2012, NASA and partners SGI and Intel began integrating 24 new Altix ICE X racks with eight-core Intel Xeon E5-2670 Sandy Bridge processors, replacing 27 of the original Altix ICE 8200 racks containing quad-core Harpertown processors. With a total of 126,720 processor cores and over 233 terabytes of RAM across 182 racks, the expansion increased Pleiades' available computing capacity by 40 percent. Each new Sandy Bridge node has four networking links using fourteen data rate (FDR) InfiniBand cable, for a total transfer bandwidth of 56 gigabits (about 7 gigabytes) per second.
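The bandwidth figure follows from simple arithmetic: an FDR InfiniBand 4x link carries four lanes at a nominal 14 Gbit/s each (hence "fourteen data rate"), and dividing by 8 bits per byte yields the gigabyte figure quoted above. A minimal check:

```python
LANES_PER_LINK = 4    # FDR 4x link width
GBITS_PER_LANE = 14   # nominal FDR signaling rate per lane (Gbit/s)

link_gbits = LANES_PER_LINK * GBITS_PER_LANE  # 56 Gbit/s
link_gbytes = link_gbits / 8                  # 7.0 GB/s

print(link_gbits, link_gbytes)  # prints: 56 7.0
```

(The usable data rate is slightly lower in practice due to link encoding overhead; the figures above are the nominal rates.)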

In early 2013, work began on a larger hardware refresh for Pleiades, ultimately removing all of the original quad-core Harpertown processors and adding 46 SGI ICE X racks with 10-core Intel Xeon E5-2680v2 (Ivy Bridge) processors. When installation was complete in August 2013, the system's overall peak performance had increased by about 61%, from 1.78 petaflops to 2.87 petaflops. The system was incrementally upgraded again between January and April 2014, adding another 29 racks of Ivy Bridge nodes and raising the system's theoretical computational capability to 3.59 petaflops. To make room for the expansion, all of the system's remaining Nehalem nodes and 12 Westmere nodes were removed.

In late 2014, more Westmere nodes were removed to make room for new Intel Xeon Haswell processors, increasing the theoretical processing power by nearly one petaflop, to 4.49 petaflops. In January 2015, additional Haswell nodes were installed and released to users, giving Pleiades a new theoretical peak capacity of 5.35 petaflops. A further upgrade, completed in June 2016, replaced all remaining racks of six-core Intel Xeon X5670 (Westmere) nodes with racks of 14-core Intel Xeon E5-2680v4 (Broadwell) nodes, raising the theoretical peak performance to 7.25 petaflops.

Role at NASA
Pleiades is part of NASA's High-End Computing Capability (HECC) Project and represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling NASA scientists and engineers to conduct high-fidelity modeling and simulation for missions in Earth studies, space science, aeronautics research, and human and robotic space exploration.

Some of the scientific and engineering projects run on Pleiades include:
 * The Kepler Mission, a space observatory launched in March 2009 to locate Earth-like planets, monitored a section of space containing more than 200,000 stars, taking high-resolution images every 30 minutes. After the operations center gathered this data, it was sent through a processing pipeline on Pleiades to calculate the size, orbit, and location of the planets around these stars. As of February 2012, the Kepler mission had discovered 1,235 planet candidates, 5 of which are approximately Earth-sized and orbit within the "habitable zone", where water can exist in all three states (solid, liquid, gas). After two of Kepler's four reaction wheels, which keep the spacecraft pointed in the correct direction, failed in 2013, the Kepler team moved the entire data pipeline to Pleiades, which continues to run light-curve analyses on the existing Kepler data.
 * Research and development of next-generation space launch vehicles is done on Pleiades using cutting-edge analysis tools and computational fluid dynamics (CFD) modeling and simulation to create more efficient and affordable launch system and vehicle designs. Research has also been done on reducing the noise created by aircraft landing gear, using CFD codes to locate the sources of noise within the gear structures.
 * Astrophysics research into the formation of galaxies is run on Pleiades to simulate how our own Milky Way galaxy formed and what forces might have caused it to take its signature disk shape. Pleiades has also served as the supercomputing resource for dark matter research and simulation, helping to discover gravitationally bound "clumps" of dark matter within galaxies in one of the largest such simulations ever run in terms of particle count.
 * Visualization of the Earth's ocean currents using a NASA-built data synthesis model for the Estimating the Circulation and Climate of the Ocean (ECCO) Project between MIT and the NASA Jet Propulsion Laboratory in Pasadena, California. According to NASA, the "ECCO model-data syntheses are being used to quantify the ocean's role in the global carbon cycle, to understand the recent evolution of the polar oceans, to monitor time-evolving heat, water, and chemical exchanges within and between different components of the Earth system, and for many other science applications."

In popular culture

 * In the 2015 film The Martian, astrodynamicist Rich Purnell uses the Pleiades supercomputer to verify calculations for a gravity-assist maneuver intended to rescue an astronaut stranded on Mars. Unlike what is shown in the film, a user need not be physically present among the racks to run computations; jobs can be submitted from a remote location over SSH, authenticating with a SecurID passcode.
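A remote session of the kind described above might look roughly like the following sketch. The hostname, queue parameters, and job script are illustrative assumptions in PBS-style batch syntax, not official NAS documentation:

```shell
# Hypothetical remote workflow (hostname and resource values are
# illustrative, not official NAS documentation).

# 1. Connect from a remote machine over SSH, entering a SecurID
#    passcode when prompted:
ssh user@frontend.example.nasa.gov

# 2. Describe the job in a small batch script (PBS-style syntax):
cat > myjob.pbs <<'EOF'
#PBS -l select=4:ncpus=28
#PBS -l walltime=01:00:00
mpiexec ./my_simulation
EOF

# 3. Submit it to the scheduler; the compute racks themselves are
#    never touched directly:
qsub myjob.pbs
```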