title = "HCC Documentation"
description = "HCC Documentation Home"
weight = "1"
HCC Documentation
The Holland Computing Center supports a diverse collection of research computing hardware. Anyone in the University of Nebraska system is welcome to apply for an account on HCC machines.
Access to these resources is by default shared with the rest of the user community via various job schedulers. These policies may be found on the pages for the individual resources. Alternatively, a user may buy into an existing resource, acquiring 'priority access'. Finally, several machines are available via Condor for opportunistic use. This allows users almost immediate access, but jobs are subject to preemption.
New Users Sign Up
Quick Start Guides
Which Cluster to Use?
Crane: Crane is the newest and most powerful HCC resource. If you are new to using HCC resources, Crane is the recommended cluster to use initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
Rhino: Rhino is intended for large memory (RAM) computing needs. Rhino has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or 256GB RAM per node in the default partition. For extremely large RAM needs, there is also a 'highmem' partition with 2 x 512GB and 2 x 1TB nodes.
User Login
For Windows users, please refer to the guide For Windows Users. For Mac or Linux users, please refer to For Mac/Linux Users.
Logging into Crane or Rhino
{{< highlight bash >}} ssh <username>@crane.unl.edu {{< /highlight >}}
or
{{< highlight bash >}} ssh <username>@rhino.unl.edu {{< /highlight >}}
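For example, a user with the hypothetical username `demo01` would log in to Crane with:
{{< highlight bash >}} ssh demo01@crane.unl.edu {{< /highlight >}}
You will then be prompted for your password and for Duo two-factor authentication (see Duo Security below).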
Duo Security
Duo two-factor authentication is required for access to HCC resources. Instructions for registering and using Duo can be found in the section: Setting up and using Duo
Important Notes
- The Crane and Rhino clusters are separate, but they are similar enough that a submission script written for one will generally work on the other (excluding GPU resources and some combinations of RAM/core requests).
- The worker nodes cannot write to the `/home` directories. You must use your `/work` directory for processing in your job. You may access your work directory by using the command: {{< highlight bash >}} $ cd $WORK {{< /highlight >}} A minimal example submission script is shown after this list.
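As a sketch of how a job can run from the `/work` area, the hypothetical submission script below changes to `$WORK` before doing any file I/O. It assumes the SLURM scheduler used on Crane and Rhino; the resource requests and the program name `my_program` are placeholders to adapt for your own job.
{{< highlight bash >}}
#!/bin/bash
#SBATCH --time=01:00:00       # requested walltime (placeholder)
#SBATCH --ntasks=1            # a single task
#SBATCH --mem-per-cpu=1024    # memory per core, in MB (placeholder)
#SBATCH --job-name=example

# Run from the writable /work area; worker nodes cannot write to /home
cd $WORK

# Placeholder command; replace with your own program and input files
./my_program input.dat > output.txt
{{< /highlight >}}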
Resources
- Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
- Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
- Red - This cluster is the resource for UNL's USCMS Tier-2 site.
- Anvil - HCC's cloud computing cluster based on Openstack
- Glidein - A gateway to running jobs on the OSG, a collection of computing resources across the US.
Resource Capabilities
Cluster | Overview | Processors | RAM | Connection | Storage |
---|---|---|---|---|---|
Crane | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node<br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node ("CraneOPA") | 452 nodes @ *64GB<br>79 nodes @ ***256GB<br>37 nodes @ ****512GB | QDR Infiniband<br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br>~4 TB local scratch per node<br>~1452 TB shared Lustre storage |
Rhino | 110 node Production-mode LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB**/256GB***<br>2 nodes @ 512GB****<br>2 nodes @ 1024GB***** | QDR Infiniband | ~1.5TB local scratch per node<br>~360TB shared BeeGFS storage |
Red | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum; actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
Anvil | 76 Compute nodes (partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes; Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz, 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
You may only request the following amount of RAM:
- *62.5GB
- **187.5GB
- ***250GB
- ****500GB
- *****1000GB
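As an illustration of how these limits apply, the hypothetical request below (again assuming the SLURM scheduler, as in the sketch above) asks for a whole 64GB Crane node while staying at or below the starred 62.5GB per-node limit; the values are placeholders to adjust for the node type you target.
{{< highlight bash >}}
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # all 16 cores of a standard Crane node
#SBATCH --mem=62500M           # at or below the *62.5GB limit for a 64GB node
{{< /highlight >}}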