Commit 7fe932be authored by Garhan Attebury

Additional updates to facilities doc

parent addb85eb
@@ -4,9 +4,9 @@ title: "Facilities of the Holland Computing Center"
This document details the equipment resident in the Holland Computing Center (HCC) as of June 2022.
- HCC has two primary locations directly interconnected by a 100 Gbps primary link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet as well as a 100 Gbps backup path. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
+ HCC has two primary locations directly interconnected by a 100 Gbps primary link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet as well as a 100 Gbps geographically diverse backup path. HCC uses multiple data transfer nodes as well as a FIONA (Flash IO Network Appliance) to facilitate end-to-end performance for data intensive workflows.
- HCC's main resources at UNL include Red, a high throughput cluster for high energy physics, and hardware supporting the PATh, PRP, and OSG NSF projects. The largest machine on the Lincoln campus is Red, with 15,984 job slots interconnected by a mixture of 1, 10, 25, 40, and 100 Gbps Ethernet. Red serves up over 11 PB of storage using the CEPH filesystem. Red primarily serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid) and is integrated with the Open Science Grid (OSG).
+ HCC's main resources at UNL include Red, a high throughput cluster for high energy physics, and hardware supporting the PATh, PRP, and OSG NSF projects. Red is the largest machine on the Lincoln campus with 15,984 job slots interconnected by a mixture of 1, 10, 25, 40, and 100 Gbps Ethernet. Red serves up over 11 PB of storage using the CEPH filesystem. Red primarily serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid) and is integrated with the Open Science Grid (OSG).
Other resources at UNL include hardware supporting the PATh, PRP, and OSG projects as well as the off-site replica of the Attic archival storage system.
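As described above, Red is an OSG-integrated high-throughput pool, and OSG sites generally receive work through HTCondor. The sketch below, using the HTCondor Python bindings, illustrates the shape of such a submission only; the executable name, resource requests, and the assumption of a local submit point are placeholders, not HCC-provided values.

```python
import htcondor  # HTCondor Python bindings (pip install htcondor)

# Describe a batch of single-core, high-throughput jobs of the kind an
# OSG-integrated pool like Red typically runs. All values are placeholders.
job = htcondor.Submit({
    "executable": "analyze_events.sh",   # hypothetical analysis wrapper script
    "arguments": "$(Process)",           # each job receives its own index
    "request_cpus": "1",
    "request_memory": "2GB",
    "request_disk": "4GB",
    "output": "job.$(Process).out",
    "error": "job.$(Process).err",
    "log": "jobs.log",
})

schedd = htcondor.Schedd()               # assumes a local submit point
result = schedd.submit(job, count=10)    # queue 10 independent jobs
print("Submitted cluster", result.cluster())
```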
@@ -18,7 +18,7 @@ Crane debuted at 474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS
Anvil is an OpenStack cloud environment consisting of 1,520 cores and 400TB of CEPH storage all connected by 10 Gbps networking. The Anvil cloud exists to address needs of NU researchers that cannot be served by traditional scheduler-based HPC environments such as GUI applications, Windows based software, test environments, and persistent services.
- Attic and Silo form a near-line archive with 1.0 PB of usable storage. Attic is located at PKI in Omaha, while Silo acts as an online backup located in Lincoln. Both Attic and Silo are connected with 10 Gbps network connections.
+ Attic and Silo form a near-line archive with 3 PB of usable storage. Attic is located at PKI in Omaha, while Silo acts as an online backup located in Lincoln. Both Attic and Silo are connected with 10 Gbps network connections.
In addition to the cluster-specific Lustre storage, a shared storage space known as Common is available across all HCC resources with 1.9 PB of capacity.
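Anvil, as described above, is a self-service OpenStack cloud rather than a scheduler-based cluster, so resources are requested through the OpenStack APIs. A minimal sketch with the openstacksdk Python library follows; the cloud entry, flavor, image, and network names are assumed placeholders, not actual Anvil values.

```python
import openstack  # openstacksdk (pip install openstacksdk)

# Connect using a clouds.yaml entry; "anvil" is a placeholder profile name.
conn = openstack.connect(cloud="anvil")

# Look up a flavor, image, and project network; these names are illustrative.
flavor = conn.compute.find_flavor("m1.small")
image = conn.compute.find_image("ubuntu-22.04")
network = conn.network.find_network("project-net")

# Launch a persistent VM, e.g. to host a GUI application or a long-running
# service that a batch scheduler would not keep alive.
server = conn.compute.create_server(
    name="demo-service",
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```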
@@ -60,11 +60,13 @@ These resources are detailed further below.
# 2. HCC at PKI Resources
## 2.1 Swan
- * 144 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM
- * 12 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM and 2x T4 GPUs
- * 2 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 2TB RAM
+ * 158 worker nodes
+   * 144 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM
+   * 12 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM and 2x T4 GPUs
+   * 2 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 2TB RAM
* Mellanox HDR100 InfiniBand
* 25Gb networking with 4x Dell N5248F-ON switches
* Management network with 6x Dell N3248TE-ON switches
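For reference, the 158 Swan worker nodes listed above total 158 × 2 sockets × 28 cores = 8,848 CPU cores, with 24 NVIDIA T4 GPUs across the twelve GPU nodes.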
@@ -126,8 +128,9 @@ These resources are detailed further below.
## 2.3 Attic
- * 1 Mercury RM216 2U Rackmount Server 2-socket Xeon E5-2630 (6-core, 2.6GHz)
- * 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
+ * 1 2-socket AMD EPYC 7282 (16-core, 2.8GHz) Head
+ * 7 Western Digital Ultrastar Data60 JBOD with 60x 18TB NL SAS HDD
## 2.4 Anvil
@@ -147,6 +150,7 @@ These resources are detailed further below.
* 10 GbE networking
* 6x Dell S4048-ON switches
## 2.5 Shared Common Storage
* Storage service providing 1.9PB usable capacity