From abcb4d5a92b2d0800bc58dadc84ffff260b0878e Mon Sep 17 00:00:00 2001
From: Derek J Weitzel <dweitzel@unl.edu>
Date: Wed, 23 Aug 2023 14:10:47 -0500
Subject: [PATCH] Updating facilities docs

- Remove mentions of Crane
- Update the validity date
- Remove the detailed hardware list of resources at the bottom.
---
 content/facilities.md | 136 +-----------------------------------------
 1 file changed, 2 insertions(+), 134 deletions(-)

diff --git a/content/facilities.md b/content/facilities.md
index ec8ee4d2..ff783a20 100644
--- a/content/facilities.md
+++ b/content/facilities.md
@@ -2,7 +2,7 @@
 title: "Facilities of the Holland Computing Center"
 ---
 
-This document details the equipment resident in the Holland Computing Center (HCC) as of December 2022.
+This document details the equipment resident in the Holland Computing Center (HCC) as of September 2023.
 
 HCC has two primary locations directly interconnected by a 100 Gbps primary link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet as well as a 100 Gbps geographically diverse backup path. HCC uses multiple data transfer nodes as well as a FIONA (Flash IO Network Appliance) to facilitate end-to-end performance for data intensive workflows.
 
@@ -10,145 +10,13 @@ HCC's main resources at UNL include Red, a high throughput cluster for high ener
 
 Other resources at UNL include hardware supporting the PATh, NRP, and OSG projects as well as the off-site replica of the Attic archival storage system.
 
-HCC's resources at PKI (Peter Kiewit Institute) in Omaha include the Swan, Crane and Anvil clusters along with the Attic and Common storage services.
+HCC's resources at PKI (Peter Kiewit Institute) in Omaha include the Swan and Anvil clusters along with the Attic and Common storage services.
 
 Swan is the newest HPC resource and currently contains 8,848 modern CPU cores with high speed Mellanox HDR100 interconnects and 5.3PB of scratch lustre storage. Swan additionally contains 24x NVIDIA T4 GPUs and will be expanded over time as HCC's primary HPC system.
 
-Crane debuted at 474 on the Top500 list with an HPL benchmark or 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and 100GB Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 43 GPU nodes with 110 NVIDIA GPUs in total which enables the most state-of-art research, from drug discovery to deep learning.
-
 Anvil is an OpenStack cloud environment consisting of 1,520 cores and 400TB of CEPH storage all connected by 10 Gbps networking. The Anvil cloud exists to address needs of NU researchers that cannot be served by traditional scheduler-based HPC environments such as GUI applications, Windows based software, test environments, and persistent services.
 
 Attic and Silo form a near line archive with 3PB of usable storage. Attic is located at PKI in Omaha, while Silo acts as an online backup located in Lincoln. Both Attic and Silo are connected with 10 Gbps network connections.
 
 In addition to the cluster specific Lustre storage, a shared storage space known as Common exists between all HCC resources with 1.9PB capacity.
 
-These resources are detailed further below.
-
-# 1. HCC at UNL Resources
-
-## 1.1 Red
-
-* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
-* 18 2-socket Xeon Gold 6248R (3.00GHz) (96 slots per node)
-* 1x 2-socket AMD EPYC 7402 (2.8GHz) with 1x V100S GPU (96 slots)
-* 46 2-socket Xeon Gold 6126 (2.6GHz) (48 slots per node)
-* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
-* 16 2-socket Xeon E5-2640 v3 (2.6GHz) (32 slots per node)
-* 40 2-socket Xeon E5-2650 v3 (2.3GHz) (40 slots per node)
-* 28 2-socket Xeon E5-2650 v2 (2.6GHz) (32 slots per node)
-* 48 2-socket Xeon E5-2660 v2 (2.2GHz) (32 slots per node)
-* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
-* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
-* 24 2-socket Xeon E5520 (2.27GHz) (16 slots per node)
-* 1 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
-* 40 2-socket Opteron 6128 (2.0GHz) (32 slots per node)
-* 40 4-socket Opteron 6272 (2.1GHz) (64 slots per node)
-* 11 PB CEPH storage
-* Mix of 1, 10, 25, 40, and 100 GbE networking
-    * 2x Dell Z9264F-ON switches
-    * 1x Dell S5248F-ON switch
-    * 1x Dell S6000-ON switch
-    * 3x Dell S4048-ON switch
-    * 5x Dell S3048-ON switches
-    * 2x Dell S4810 switches
-    * 5x Dell N3048 switches
-
-## 1.2 Silo (backup mirror for Attic)
-
-* 1 2-socket AMD EPYC 7282 (16-core, 2.8GHz) Head
-* 7 Western Digital Ultrastar Data60 JBOD with 60x 18TB NL SAS HDD
-
-# 2. HCC at PKI Resources
-
-
-## 2.1 Swan
-
-* 158 worker nodes
-    * 144 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM
-    * 12 PowerEdge R650 2-soxcket Xeon Gold 6348 (28-core, 2.6GHz) with 256GB RAM and 2x T4 GPUs
-    * 2 PowerEdge R650 2-socket Xeon Gold 6348 (28-core, 2.6GHz) with 2TB RAM
-* Mellanox HDR100 InfiniBand
-* 25Gb networking with 4x Dell N5248F-ON switches
-* Management network with 6x Dell N3248TE-ON switches
-* 10TB NVMe backed /home filesystem
-* 5.3PB Lustre /work filesystem
-* 3.5TB local flash scratch per node
-
-
-## 2.2 Crane
-
-* 452 Relion 2840e systems from Penguin
-    * 452x with 64 GB RAM
-    * 2-socket Intel Xeon E5-2670 (8-core, 2.6GHz)
-    * Intel QDR InfiniBand
-* 96 nodes from multiple vendor
-    * 59x with 256 GB RAM
-    * 37x with 512 GB RAM
-    * 2-socket Intel Xeon E5-2697 v4 (18-core, 2.3GHz)
-    * Intel Omni-Path
-* 1 and 10 GbE networking
-    * 4x 10 GbE switch
-    * 14x 1 GbE switches
-* 1500 TB Lustre storage over InfiniBand
-* 3 Supermicro SYS-6016GT systems
-    * 48 GB RAM
-    * 2-socket Intel Xeon E5620 (4-core, 2.4GHz)
-    * 2 Nvidia M2070 GPUs
-* 2 Supermicro SYS-5018GR-T systems
-    * 64 GB RAM
-    * 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz)
-    * 2 Nvidia P100 GPUs
-* 4 Lenovo SR630 systems 
-    * 1.5 TB RAM
-    * 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
-    * 3.84TB NVME Solid State Drive
-    * Intel Omni-Path
-* 21 Supermicro SYS-1029GP-TR systems
-    * 192 GB RAM
-    * 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
-    * 2 Nvidia V100 GPUs
-    * Intel Omni-Path
-* 4 Supermicro SYS-2029GP-TR systems
-    * 192 GB RAM
-    * 2-socket Intel Xeon Gold 6248R (24-core, 3.0GHz)
-    * 2 Nvidia V100S GPUs
-
-
-## 2.3 Attic
-
-* 1 2-socket AMD EPYC 7282 (16-core, 2.8GHz) Head
-* 7 Western Digital Ultrastar Data60 JBOD with 60x 18TB NL SAS HDD
-
-
-## 2.4 Anvil
-
-* 76 PowerEdge R630 systems
-    * 76x with 256 GB RAM
-    * 2-socket Intel Xeon E5-2650 v3 (10-core, 2.3GHz)
-    * Dual 10Gb Ethernet
-* 12 PowerEdge R730xd systems
-    * 12x with 128 GB RAM
-    * 2-socket Intel Xeon E5-2630L v3 (8-core, 1.8GHz)
-    * 12x 4TB NL SAS Hard Disks and 2x200 GB SSD
-    * Dual 10 Gb Ethernet
-* 2 PowerEdge R320 systems
-    * 2x with 48 GB RAM
-    * 1-socket Intel E5-2403 v3 (4-core, 1.8GHz)
-    * Quad 10Gb Ethernet
-* 10 GbE networking
-    * 6x Dell S4048-ON switches
-
-
-## 2.5 Shared Common Storage
-
-* Storage service providing 1.9PB usable capacity
-* 6 SuperMicro 1028U-TNRTP+ systems
-    * 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
-    * 256 GB RAM
-    * 120x 4TB SAS Hard Disks
-* 2 SuperMicro 1028U-TNRTP+ systems
-    * 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
-    * 128 GB RAM
-    * 6x 200 GB SSD
-* Intel Omni-Path
-* 10 GbE networking
-- 
GitLab