Verified commit 3e4707a1 authored by Adam Caprez

Merge branch 'master' into rework

parents 0dfde4c0 abce1e0f
Merge request !167: Rework
and named `anvil_key`.  Depending on which Linux OS you're using in your
instance, the username to use will be different. See the
[Available Images]({{< relref "available_images" >}})
page for a table with the username to use for each OS.
In the *Terminal* application, run the command:
{{< highlight bash >}}ssh -i ~/Desktop/anvil_key centos@<ip address> {{< /highlight >}}
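A common stumbling block with this step is the key file's permissions: OpenSSH refuses a private key that other users can read. The sketch below shows the usual pre-flight fix; it uses a temporary stand-in file so the snippet runs anywhere, but on your machine the path would be `~/Desktop/anvil_key`.

```shell
# Stand-in for ~/Desktop/anvil_key so this snippet is runnable as-is.
key="$(mktemp)"
# Owner read/write only; ssh rejects keys with looser permissions.
chmod 600 "$key"
# The permissions column should read -rw-------
ls -l "$key"
# Real connection (username depends on the image OS, e.g. centos):
#   ssh -i ~/Desktop/anvil_key centos@<ip address>
```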
title: "Facilities of the Holland Computing Center"
---
This document details the equipment resident in the Holland Computing Center (HCC) as of October 2019.
HCC has two primary locations directly interconnected by a 100 Gbps primary link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet as well as backup 10 Gbps paths. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
HCC's resources at UNL include two distinct offerings: Rhino and Red. Rhino is a Linux cluster dedicated to general campus usage with 7,040 compute cores interconnected by low-latency Mellanox QDR InfiniBand networking. 360 TB of BeeGFS storage is complemented by 50 TB of NFS storage and 1.5 TB of local scratch per node. Each compute node is a Dell R815 server with at least 192 GB RAM and 4 Opteron 6272 / 6376 (2.1 / 2.3 GHz) processors.
The largest machine on the Lincoln campus is Red, with 14,160 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 11 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
Crane debuted at 474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 petabytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and a 100 Gbps Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 43 GPU nodes with 110 NVIDIA GPUs in total, which enables state-of-the-art research, from drug discovery to deep learning.
Anvil is an OpenStack cloud environment consisting of 1,520 cores and 400 TB of Ceph storage, all connected by 10 Gbps networking. The Anvil cloud exists to address needs of NU researchers that cannot be served by traditional scheduler-based HPC environments, such as GUI applications, Windows-based software, test environments, and persistent services. In addition, a project to expand Ceph storage by 1.1 PB is in progress.
These resources are detailed further below.
## 1.2 Red
* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
* 46 2-socket Xeon Gold 6126 (2.6GHz) (48 slots per node)
* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
* 16 2-socket Xeon E5-2640 v3 (2.6GHz) (32 slots per node)
* 40 2-socket Xeon E5-2650 v3 (2.3GHz) (40 slots per node)
* 28 2-socket Xeon E5-2650 v2 (2.6GHz) (32 slots per node)
* 48 2-socket Xeon E5-2660 v2 (2.2GHz) (32 slots per node)
* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
* 24 2-socket Xeon E5520 (2.27GHz) (16 slots per node)
* 1 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
* 40 2-socket Opteron 6128 (2.0GHz) (32 slots per node)
* 40 4-socket Opteron 6272 (2.1GHz) (64 slots per node)
* 11 PB HDFS storage
* Mix of 1, 10, and 40 GbE networking
* 1x Dell S6000-ON switch
* 3x Dell S4048-ON switches
* 5x Dell S3048-ON switches
* 2x Dell S4810 switches
* 2x Dell N3048 switches
* 64 GB RAM
* 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz)
* 2 Nvidia P100 GPUs
* 4 Lenovo SR630 systems
* 1.5 TB RAM
* 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
* 3.84 TB NVMe solid state drive
* Intel Omni-Path
* 21 Supermicro SYS-1029GP-TR systems
* 192 GB RAM
* 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
* 2 Nvidia V100 GPUs
* Intel Omni-Path
## 2.2 Attic
from the window.
{{< figure src="/images/7274511.png" height="450" >}}
If you run into issues with two-factor authentication, try the command below for a quick fix:
{{< highlight bash >}}
$ rm -rf ~/Library/"Application Support"/Cyberduck
{{< /highlight >}}
Mac Tutorial Video
------------------