Fermi
Cineca University Consortium in Casalecchio di Reno (BO)
Active: operational since 2012
Sponsors: Ministry of Education, Universities and Research (Italy)
Operators: the members of the Consortium[1]
Location: Cineca, Casalecchio di Reno, Italy
Architecture: IBM BG/Q, 5D torus interconnect; 10,240 IBM PowerPC A2 processors at 1.6 GHz with 16 cores each; 163,840 cores in total
Power: 822 kW
Operating system: CNK[2]
Memory: 16 GB/node (1 GB/core); 160 TiB in total
Storage: 2 PB of scratch space
Speed: 2.097 PFLOPS
Ranking: TOP500: 37 (November 2015)
Purpose: materials science, weather, climatology, seismology, biology, computational chemistry, computer science
Legacy: ranked 7th on TOP500 when built[3]
Website: hpc.cineca.it/hardware/fermi

Fermi is a 2.097-petaFLOPS supercomputer located at Cineca.[4]

History

The Fermi BlueGene/Q supercomputer at Cineca

The development of Fermi was sponsored by the Ministry of Education, Universities and Research (Italy).

In June 2012, Fermi reached the seventh position on the TOP500 list of the fastest supercomputers in the world.[5]

On the Graph500 list of top supercomputers, Fermi reached the fifth position; the system was benchmarked at 2,567 gigaTEPS (traversed edges per second).[6]

On the Green500 list of top supercomputers, Fermi reached the fifty-ninth position; the system was benchmarked at 2,176.57 MFLOPS/W (performance per watt).[7]

Specifications

  • Each Compute Card (a "compute node") features an IBM PowerPC A2 chip with 16 cores running at 1.6 GHz, 16 GB of RAM and the network connections. A total of 32 compute nodes are plugged into a Node Card, and 16 Node Cards are assembled into a midplane; two midplanes plus two I/O drawers form a rack, giving 32 × 32 × 16 = 16,384 (16K) cores per rack (see the sketch after this list).
  • The Cineca BG/Q configuration comprises 10 racks, for a total of 160K (163,840) cores.
  • In addition to the compute nodes there are also front-end nodes (or login nodes) running Linux for interactive access and the submission of batch jobs. Parallel applications have to be cross-compiled on the front-end nodes and can only be executed on the partition defined on the compute nodes. Access to the compute nodes is mediated by the I/O nodes, since only they can interact with the file systems.
  • The login nodes run a complete Red Hat Enterprise Linux (6.2) distribution. The compute nodes instead run a lightweight Linux-like kernel called the Compute Node Kernel (CNK). Compute nodes are diskless, and I/O functionality is provided by dedicated I/O nodes, which run Linux and offer a more complete range of OS services, e.g. files, sockets, process launch, signaling, debugging and termination. Service nodes perform system management services (e.g. partitioning, heartbeat monitoring, error monitoring) and can be used only by system administrators.
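
The totals quoted above follow directly from the per-node figures. The following minimal Python sketch reproduces the arithmetic; the variable names are illustrative only and are not part of any Cineca tooling:

    # Illustrative arithmetic only; names are not part of any Cineca tooling.
    nodes_per_node_card = 32
    node_cards_per_midplane = 16
    midplanes_per_rack = 2
    cores_per_node = 16
    ram_gb_per_node = 16
    racks = 10

    nodes_per_rack = nodes_per_node_card * node_cards_per_midplane * midplanes_per_rack
    cores_per_rack = nodes_per_rack * cores_per_node          # 16,384 ("16K") cores
    total_nodes = nodes_per_rack * racks                      # 10,240 compute nodes
    total_cores = cores_per_rack * racks                      # 163,840 cores
    total_ram_tib = total_nodes * ram_gb_per_node / 1024      # 160 TiB of memory

    print(nodes_per_rack, total_nodes, total_cores, total_ram_tib)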

The Cineca configuration consists of 10 racks, as follows

  • 2 racks: 16 I/O nodes per rack, implying a minimum job allocation of 64 nodes (1024 cores).
  • 8 racks: 8 I/O nodes per rack, implying a minimum job allocation of 128 nodes (2048 cores).
  • The minimum allocations result from the fact that a job must have at least one I/O node allocated to it. Therefore, even if a job asks for an arbitrary number of nodes or cores, a minimum partition of 64 or 128 nodes (or a multiple thereof) is allocated and accounted for (see the sketch after this list).
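
A minimal Python sketch of how the minimum partition follows from the I/O-node count, and how a request is rounded up to a multiple of it; the helper functions are hypothetical and not part of Cineca's scheduler:

    # Illustrative only; these helpers are hypothetical, not Cineca scheduler code.
    NODES_PER_RACK = 1024  # 32 nodes x 32 node cards per rack, as described above

    def min_partition(io_nodes_per_rack):
        # A job needs at least one I/O node, so the smallest partition equals
        # the number of compute nodes served by a single I/O node.
        return NODES_PER_RACK // io_nodes_per_rack

    def allocated_nodes(requested_nodes, io_nodes_per_rack):
        # Requests are rounded up to a whole multiple of the minimum partition.
        part = min_partition(io_nodes_per_rack)
        return -(-requested_nodes // part) * part  # ceiling division

    print(min_partition(16))        # 64  -> the 2 racks with 16 I/O nodes (1,024 cores)
    print(min_partition(8))         # 128 -> the 8 racks with 8 I/O nodes (2,048 cores)
    print(allocated_nodes(100, 8))  # 128 -> a 100-node request is charged for 128 nodes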

References

  1. ^ "Consortium of universities". Retrieved 9 March 2016.
  2. ^ "IBM System Blue Gene Solution Blue Gene/Q Application Development". IBM. Retrieved 9 March 2016.
  3. ^ "Jun 2012". TOP500. Retrieved 9 March 2016.
  4. ^ "Nov 2015". TOP500. Retrieved 9 March 2016.
  5. ^ "FERMI". TOP500. Retrieved 9 March 2016.
  6. ^ "The Graph 500 List: November 2015". Graph 500. Retrieved 9 March 2016.
  7. ^ "The Green 500 List: November 2015". Green 500. Retrieved 9 March 2016.

Articles about Fermi and its network

Il Sole 24 ore - in Italian

Datacenter Knowledge

Categories: Power Architecture | Supercomputers | IBM supercomputers