Commit 7fb06b67 authored by Garhan Attebury

Merge branch 'facdocup' into 'master'

updates to red and DC descriptions and such

See merge request !155
parents bb01e934 6ef53ff3
@@ -4,11 +4,11 @@ title: "Facilities of the Holland Computing Center"
This document details the equipment resident in the Holland Computing Center (HCC) as of October 2019.
-HCC has two primary locations directly interconnected by a pair of 10 Gbps fiber optic links (20 Gbps total). The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. One Brocade MLXe router and two Dell Z9264F-ON core switches in each location provide both high WAN bandwidth and Software Defined Networking (SDN) capability. The Schorr machine room connects to campus and Internet2/ESnet at 100 Gbps while the PKI machine room connects at 10 Gbps. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
+HCC has two primary locations directly interconnected by a 100 Gbps link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA of UPS- and genset-protected power and 160 tons of cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 tons of cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet, as well as backup 10 Gbps paths. HCC uses multiple data transfer nodes as well as a FIONA (flash I/O network appliance) to facilitate end-to-end performance for data-intensive workflows.
HCC's resources at UNL include two distinct offerings: Rhino and Red. Rhino is a Linux cluster dedicated to general campus usage with 7,040 compute cores interconnected by low-latency Mellanox QDR InfiniBand networking. 360 TB of BeeGFS storage is complemented by 50 TB of NFS storage and 1.5 TB of local scratch per node. Each compute node is a Dell R815 server with at least 192 GB RAM and 4 Opteron 6272 / 6376 (2.1 / 2.3 GHz) processors.
-The largest machine on the Lincoln campus is Red, with 9,536 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 6.6 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
+The largest machine on the Lincoln campus is Red, with 14,160 job slots interconnected by a mixture of 1, 10, and 40 Gbps Ethernet. More importantly, Red serves up over 11 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
@@ -38,20 +38,22 @@ These resources are detailed further below.
## 1.2 Red
* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
-* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
-* 16 2-socket Xeon E5520 (2.27 GHz) (16 slots per node)
-* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
+* 46 2-socket Xeon Gold 6126 (2.6GHz) (48 slots per node)
+* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
* 16 2-socket Xeon E5-2640 v3 (2.6GHz) (32 slots per node)
* 40 2-socket Xeon E5-2650 v3 (2.3GHz) (40 slots per node)
-* 24 4-socket Opteron 6272 (2.1 GHz) (64 slots per node)
* 28 2-socket Xeon E5-2650 v2 (2.6GHz) (32 slots per node)
-* 48 2-socket Xeon E5-2660 (2.2GHz) (32 slots per node)
-* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
-* 2 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
-* 10.8 PB HDFS storage
+* 48 2-socket Xeon E5-2660 v2 (2.2GHz) (32 slots per node)
+* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
+* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
+* 24 2-socket Xeon E5520 (2.27GHz) (16 slots per node)
+* 1 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
+* 40 2-socket Opteron 6128 (2.0GHz) (32 slots per node)
+* 40 4-socket Opteron 6272 (2.1GHz) (64 slots per node)
+* 11 PB HDFS storage
* Mix of 1, 10, and 40 GbE networking
* 1x Dell S6000-ON switch
-* 2x Dell S4048-ON switch
+* 3x Dell S4048-ON switches
* 5x Dell S3048-ON switches
* 2x Dell S4810 switches
* 2x Dell N3048 switches
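
The updated Red description quotes 14,160 job slots; that figure is simply the sum of node count × slots per node over the post-merge hardware list (the added "+" entries together with the unchanged ones). Below is a minimal tally sketch in Python, with the counts copied from the list above; the `red_nodes` name is purely illustrative.

```python
# Tally job slots for the post-merge Red node list.
# Node counts and slots-per-node are copied from the list above.
red_nodes = [
    (46, 48),  # Xeon Gold 6126
    (24, 56),  # Xeon E5-2660 v4
    (16, 32),  # Xeon E5-2640 v3 (unchanged entry)
    (40, 40),  # Xeon E5-2650 v3 (unchanged entry)
    (28, 32),  # Xeon E5-2650 v2 (unchanged entry)
    (48, 32),  # Xeon E5-2660 v2
    (36, 24),  # Xeon X5650
    (60, 16),  # Xeon E5530
    (24, 16),  # Xeon E5520
    (1, 16),   # Xeon E5-1660 v3
    (40, 32),  # Opteron 6128
    (40, 64),  # Opteron 6272 (4-socket)
]

total_slots = sum(nodes * slots for nodes, slots in red_nodes)
print(total_slots)  # 14160, matching the updated description
```

For comparison, the removed "-" entries plus the unchanged ones sum to the 9,536 slots cited in the previous description.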