Arctic Region Supercomputing Center



The Arctic Region Supercomputing Center (ARSC) was a research facility organized under the University of Alaska Fairbanks (UAF) from 1993 to 2015. Located on the UAF campus, ARSC offered high-performance computing (HPC) and mass storage to the UAF and State of Alaska research communities.

In general, the research supported with ARSC resources focused on the Earth's arctic region. Common projects included arctic weather modeling, Alaskan summer smoke forecasting, arctic sea ice analysis and tracking, Arctic Ocean systems, volcanic ash plume prediction, and tsunami forecasting and modeling.

ARSC was a Distributed Center (DC), an Allocated Distributed Center (ADC), and then one of six DoD Supercomputing Resource Centers (DSRCs) of the Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) from 1993 through 2011.

History
ARSC hosted a variety of HPC systems, many of which were listed among the Top 500 most powerful in the world. For more than 10 years ARSC maintained the standing of at least one system on the Top 500 list. Funding for ARSC operations was primarily supplied by the DoD HPCMP, with augmentation through UAF and external grants and contracts from various sources such as the National Science Foundation. In December 2010, the Fairbanks Daily News-Miner reported probable layoffs for most of the Arctic Region Supercomputing Center's 46 employees with the loss of its Department of Defense contract in 2011. The article reported that 95 percent of ARSC funding came from the Department of Defense. When that DoD funding source was lost, ARSC could no longer afford computers that could be listed on the Top 500 list.

The following timeline includes various HPC systems acquired by ARSC and a Top 500 list standing when appropriate:


 * 1993 - Cray Y-MP named Denali with 4 CPUs and 1.3 GFLOPS, plus a StorageTek 1.1 TB silo. With this system, the Arctic Region Supercomputing Center (ARSC) placed #251 on the first Top 500 Supercomputer list, published in June 1993 on the first day of the 8th Mannheim Supercomputer Seminar. This Cray Y-MP M98/41024 system remained on the next two lists, dropping to position #302 and then #405 before falling off.
 * 1994 - Cray T3D named Yukon with 128 CPUs and 19.2 GFLOPS. ARSC had two computers on the June 1994 Top 500 list: the new Cray T3D MC128-2 at #58, while the previous system, Denali, was still on the list at position #405. The Cray T3D MC128-2 was #55 on the November 1994 Top 500 list.
 * 1995 - ARSC took the #83 spot on the June 1995 Top 500 list by upgrading to a Cray T3D MC128-8, which maintained a spot on the Top 500 list through 1997.
 * 1997 - ARSC got position #70 on the June 1997 Top 500 list by upgrading Yukon to a Cray T3E with 88 CPUs and 50 GFLOPS. Position #62 on the November 1997 Top 500 list was obtained with another upgrade, to the T3E900 with 96 cores. HPC Wire mentioned the Cray Y-MP Denali, the visualization labs, and the ARSC Video Production Lab in an article about the Cray T3E installation at the Arctic Region Supercomputing Center.
 * 1998 - Cray J90 named Chilkoot with 12 CPUs and 2.4 GFLOPS; StorageTek capacity expanded to 330+ TB. ARSC got #74 on the November 1998 Top 500 list with another upgrade of the T3E900, to 100 cores.
 * 1999 - ARSC got spot #44 on the June 1999 Top 500 list after Yukon was upgraded again to a Cray T3E900 with 268 cores and was able to stay on the Top 500 list through June 2002.
 * 2000 - Upgraded Chilkoot to a Cray SV1 with 32 CPUs and 38.4 GFLOPS; doubled the StorageTek hardware.
 * 2001 - An IBM SP named Icehawk with 200 CPUs and 276 GFLOPS got ARSC the #117 spot on the June 2001 Top 500 list and stayed on the list through 2002. This gave ARSC the distinction of having had a system on every Top 500 list in its first decade of existence.
 * 2002 - Cray SX-6 named Rime with 8 CPUs and 64 GFLOPS; IBM P690 Regatta named Iceflyer with 32 POWER4 CPUs and 166.4 GFLOPS. Cray told CNET that the Arctic Region Supercomputing Center received the first Cray SV1ex upgrade.
 * 2003 - ARSC got spot #116 on the June 2003 Top 500 list with a Cray X1 named Klondike with 60 cores, and then spot #71 on the November 2003 Top 500 list by upgrading the Cray X1 to 124 cores; the system stayed on the Top 500 list through June 2005. IBM told CNET the Iceberg system was worth more than $15 million, and the article cited $16.4 million going to Cray for the X1.
 * 2004 - ARSC got spot #56 on the June 2004 Top 500 list with its IBM System p named Iceberg. In 2003 IBM had told InfoWorld that this system would put ARSC in sixth place on the Top 500 list. This RISC/UNIX-based Power4+ eServer with 672 cores remained on the list through June 2006. ARSC also acquired two Sun Fire 6800 Storage Servers and a Mechdyne MD Flying Flex 4 projector to set up a Cave automatic virtual environment.
 * 2005 - Cray XD1 named Nelchina with 36 CPUs.
 * 2007 - A Sun Opteron cluster named Midnight with 2,236 cores and 12 TFLOPS got ARSC the #206 spot on the November 2007 Top 500 list. ARSC also installed a StorageTek SL8500 robotic tape library with 3+ petabyte capacity.
 * 2008 - A Cray XT5 named Pingo with 3,456 cores got ARSC the #109 spot on the November 2008 Top 500 list and stayed on the list through June 2010.
 * 2009 - IBM BladeCenter H QS22 Cluster with 5.5 TFLOPS and 12 TB Filesystem.
 * 2010 - Penguin Computing cluster named Pacman with 2,080 CPUs and an 89 TB filesystem; Sun SPARC Enterprise T5440 server named Bigdipper with 7 petabytes of storage capacity; Cray XE6 named Chugach with 11,648 CPUs and a 330 TB filesystem; Sun SPARC Enterprise T5440 server named Wiseman with 7 petabytes of storage capacity; Cray XE6 named Tana with 256 CPUs and 2.36 TFLOPS. HPC Wire reported on ARSC, one of six HPCMP centers, losing its DoD funding, and on Chugach, a Cray XE6 'Baker' supercomputer from a recent large HPCMP procurement originally intended for ARSC, being moved to Vicksburg and run remotely. The Chugach Cray XE6 was #83 on the November 2010 Top 500 list and stayed on the list until June 2013, but was credited without a machine name and as an ERDC DSRC system.
 * 2011 - Expanded Pacman to 3,256 CPUs and a 200 TB filesystem. Penguin Computing issued a press release about Pacman (Pacific Area Climate Monitoring and Analysis Network). Although the majority of HPCMP funding had ended by the end of 2011, ARSC continued to operate the Chugach machine remotely during a transition period, and the HPCMP Quick Links, HPCMP User Support (CCAC), and HPCMP User Accounts links remained prominently displayed on the ARSC website, as did the Chugach Cray XE6 system.
 * 2012 - ARSC was down to half the staff it had as an HPCMP DSRC.
 * 2013 - ARSC's last Top 500 supercomputer, the Cray XE6 Chugach, was upgraded to 23,296 cores for slot #130 on the November 2012 Top 500 list and then slot #183 in June 2013. After 2011 and the transition period, operations were transferred to a new Open Research Systems (ORS) unit of the HPCMP at the ERDC DSRC.
 * 2014 - By the end of 2014, ARSC was down to 20% of the staff it had as an HPCMP DSRC.
 * 2015 - The Arctic Region Supercomputing Center ceased to exist on September 1, 2015. Former ARSC systems were acquired by the Research Computing Systems unit at the University of Alaska Fairbanks's Geophysical Institute. The original website is now a dead URL.