| **Red** | 344-node production-mode Linux cluster | Various Xeon and Opteron processors; 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4 GB RAM per job slot | 1 Gb, 10 Gb, and 40 Gb Ethernet | ~6.67 PB of raw storage space |
| **Anvil** | 76 compute nodes (partially used for cloud, the rest used for general computing), 12 storage nodes, 2 network nodes; OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30 GHz, 2 CPUs/20 cores per node | 76 nodes @ 256 GB | 10 Gb Ethernet | 528 TB Ceph shared storage (349 TB available now) |
You may only request the following amount of RAM: <br>
This document details the equipment resident in the Holland Computing Center (HCC).
HCC has two primary locations directly interconnected by a pair of 10 Gbps fiber optic links (20 Gbps total). The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS- and genset-protected power and 160 tons of cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 tons of cooling with up to 400 kVA of power. One Brocade MLXe router and two Dell Z9264F-ON core switches in each location provide both high WAN bandwidth and Software Defined Networking (SDN) capability. The Schorr machine room connects to campus and Internet2/ESnet at 100 Gbps while the PKI machine room connects at 10 Gbps. HCC uses multiple data transfer nodes as well as a FIONA (flash I/O network appliance) to facilitate end-to-end performance for data-intensive workflows.
HCC's resources at UNL include two distinct offerings: Rhino and Red. Rhino is a Linux cluster dedicated to general campus usage with 7,040 compute cores interconnected by low-latency Mellanox QDR InfiniBand networking. 360 TB of BeeGFS storage is complemented by 50 TB of NFS storage and 1.5 TB of local scratch per node. Each compute node is a Dell R815 server with at least 192 GB RAM and 4 Opteron 6272 / 6376 (2.1 / 2.3 GHz) processors.
The largest machine on the Lincoln campus is Red, with 9,536 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 6.6 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
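Red's HDFS namespace can also be browsed programmatically. The following is a minimal, purely illustrative sketch using the pyarrow Python bindings; it assumes a working libhdfs installation and uses a hypothetical namenode hostname and directory path, neither of which is specified here.

```python
# Illustrative only: browsing an HDFS namespace from Python with pyarrow.
# The namenode hostname and the "/store" directory below are hypothetical
# placeholders, not values documented by HCC.
from pyarrow import fs

# Connect to the HDFS namenode (requires libhdfs and a Hadoop client configuration).
hdfs = fs.HadoopFileSystem(host="namenode.example.edu", port=8020)

# List the entries directly under a top-level directory.
for info in hdfs.get_file_info(fs.FileSelector("/store", recursive=False)):
    print(info.path, info.type, info.size)
```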
Tusker and Sandhills have since been decommissioned; their hardware was combined to form the Rhino cluster described above.
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
Crane debuted at 474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing, with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and 100 Gbps Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 21 GPU nodes with 57 NVIDIA GPUs in total, enabling state-of-the-art research from drug discovery to deep learning.
...
These resources are detailed further below.
+++
description = "How to activate HCC endpoints on Globus"
weight = 20
+++
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`), Rhino (`hcc#rhino`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
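For users who prefer scripting to the web interface, the sketch below uses the globus-sdk Python package to log in and search for the HCC endpoints by name. This is only an illustrative outline, not an HCC-provided tool: it assumes you have installed globus-sdk and registered your own native-app client ID at developers.globus.org (the `CLIENT_ID` value is a placeholder). Activating the HCC endpoints themselves still requires your HCC credentials, as described in the steps below.

```python
# Illustrative sketch: finding HCC endpoints with the globus-sdk Python package.
# CLIENT_ID is a placeholder for a native-app client ID you register yourself.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"

# Interactive login: open the printed URL, then paste the resulting code back in.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Search for the HCC endpoints by name and print their display names and IDs.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
for name in ("hcc#crane", "hcc#rhino", "hcc#attic"):
    for ep in tc.endpoint_search(name):
        print(ep["display_name"], ep["id"])
```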
1. [Sign in](https://www.globus.org/SignIn) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
2. Find the endpoint you want by entering '`hcc#crane`', '`hcc#rhino`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
3. You will be redirected to the HCC Globus Endpoint Activation page. Enter your *HCC* username and password (the password you usually use to log into the HCC clusters).