NCAR / UCAR: a research site with a long association with Cray supercomputers.

The National Center for Atmospheric Research (NCAR/UCAR) in Colorado has a long association with Cray supercomputers. Starting in 1977 with the Cray-1 SN3, the site has invested in some of the best supercomputers available. Most recently, a petaflop-class HPE Cray machine is being used to explore climate research.

In 2024 the site commissioned Derecho, a 19.87-petaflop Cray system from HPE. This system replaces Cheyenne, a 5.34-petaflop system with 313 TB of working memory.

After announcing the move of the venerable Cray-1 SN3 system from the Mesa Lab to the NCAR-Wyoming Supercomputing Center, on 7th January 2021 the site hosted a virtual tour with some of the folks involved in the long history of using and caring for the Cray systems at the site. <<Link to call recording when available.>> SN3 will sit alongside Derecho and be visible from the visitor centre when it reopens.

Here are some further links related to NCAR/UCAR. The centre has excellent web and visitor information and is well worth a visit.

Derecho hardware

- 323,712 processor cores: 3rd Gen AMD EPYC™ 7763 Milan processors
- 2,488 CPU-only computation nodes: dual-socket nodes, 64 cores per socket; 256 GB DDR4 memory per node
- 82 GPU nodes: single-socket nodes, 64 cores per socket; 512 GB DDR4 memory per node; 4 NVIDIA 1.41 GHz A100 Tensor Core GPUs per node; 600 GB/s NVIDIA NVLink GPU interconnect
- 328 total A100 GPUs: 40 GB HBM2 memory per GPU; 600 GB/s NVIDIA NVLink GPU interconnect
- 6 CPU login nodes: dual-socket nodes with AMD EPYC™ 7763 Milan CPUs; 64 cores per socket; 512 GB DDR4-3200 memory
- 2 GPU development and testing nodes: dual-socket nodes with AMD EPYC™ 7543 Milan CPUs; 32 cores per socket; 2 NVIDIA 1.41 GHz A100 Tensor Core GPUs per node; 512 GB DDR4-3200 memory
- 692 TB total system memory: 637 TB DDR4 memory on 2,488 CPU nodes; 42 TB DDR4 memory on 82 GPU nodes; 13 TB HBM2 memory on 82 GPU nodes
- HPE Slingshot v11 high-speed interconnect: Dragonfly topology, 200 Gb/s per port per direction; 1.7-2.6 usec MPI latency; CPU-only nodes have one Slingshot injection port; GPU nodes have 4 Slingshot injection ports per node
- ~3.5 times Cheyenne computational capacity, based on the relative performance of CISL's High Performance Computing Benchmarks run on each system
- > 3.5 times Cheyenne peak performance: 19.87 peak petaflops (vs 5.34)
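The headline totals in the spec list follow directly from the per-node figures. A quick arithmetic check, using only the numbers as quoted above (core counts, node counts, and per-node memory):

```python
# Derive Derecho's headline totals from its per-node figures.
CPU_NODES, CPU_CORES_PER_NODE = 2_488, 2 * 64   # dual-socket, 64 cores per socket
GPU_NODES, GPU_CORES_PER_NODE = 82, 1 * 64      # single-socket, 64 cores per socket
GPUS_PER_NODE = 4

total_cores = CPU_NODES * CPU_CORES_PER_NODE + GPU_NODES * GPU_CORES_PER_NODE
total_gpus = GPU_NODES * GPUS_PER_NODE

# Memory in GB (using 1 TB = 1,000 GB, matching the site's round figures)
ddr4_cpu = CPU_NODES * 256      # 636,928 GB, i.e. ~637 TB
ddr4_gpu = GPU_NODES * 512      #  41,984 GB, i.e. ~42 TB
hbm2 = total_gpus * 40          #  13,120 GB, i.e. ~13 TB
total_memory_tb = round((ddr4_cpu + ddr4_gpu + hbm2) / 1000)

print(total_cores)       # 323712 cores, matching the quoted total
print(total_gpus)        # 328 A100 GPUs
print(total_memory_tb)   # ~692 TB total system memory
```

The three derived figures line up with the quoted totals of 323,712 cores, 328 GPUs, and 692 TB, which confirms the list is internally consistent.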

NCAR also hosted the only Cray-3 system ever delivered.

A few photos of systems at NCAR – pictures and text from NCAR and other sources.

Table of Cray system specifications
NCAR systems table (the good bits), from "HPC at NCAR: Past, Present and Future".

Entries for articles in Cray Channels about NCAR

19 entries (filtered from 1,408 total entries).
