HCC Documentation
============================


The Holland Computing Center supports a diverse collection of research
computing hardware.  Anyone in the University of Nebraska system is
welcome to apply for an account on HCC machines.

Access to these resources is by default shared with the rest of the user
community via various job schedulers. These policies may be found on the
pages for the various resources. Alternatively, a user may buy into an
existing resource, acquiring 'priority access'. Finally, several
machines are available via Condor for opportunistic use. This will allow
users almost immediate access, but the job is subject to preemption.

#### <a href="http://hcc.unl.edu/new-user-request" class="external-link">New Users Sign Up</a>

#### [Quick Start Guides](/quickstarts)

Which Cluster to Use?
---------------------

**Crane**: Crane is the newest and most powerful HCC resource. If you
are new to using HCC resources, Crane is the recommended cluster to
start with. Limitations: Crane has only 2 CPUs/16 cores and 64GB RAM per
node. If your job requires more than 16 cores per node or more than
64GB of memory, consider using Tusker instead.

**Tusker**: Like Crane, Tusker is a cluster shared by all campus
users. It has 4 CPUs/64 cores and 256GB RAM per node, and two nodes
have 512GB RAM for very large memory jobs. For jobs requiring more than
16 cores per node or large amounts of memory, Tusker is the better option.

**Sandhills**: Sandhills is a condominium-style cluster; the majority
of it is owned by various research groups on campus. Jobs from resource
owners have first priority in the partitions they own. Users who do not
own resources (guests) can compute opportunistically, but we recommend
using Crane or Tusker instead.
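
As a rough guide, compare your per-node core and memory needs against
these limits before choosing a cluster. Below is a minimal sketch of a
request that fits on a single Crane node, assuming the SLURM scheduler
used on these clusters and a placeholder program `./my_program`:

{{< highlight bash >}}
#!/bin/bash
#SBATCH --nodes=1             # a single node
#SBATCH --ntasks-per-node=16  # Crane nodes have at most 16 cores
#SBATCH --mem=62G             # stay under the ~62.5GB requestable per node
#SBATCH --time=01:00:00       # one hour of walltime
#SBATCH --job-name=example

srun ./my_program             # placeholder for your executable
{{< /highlight >}}

A job that needs more than 16 cores on one node, or more memory than
this, should target Tusker instead.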

User Login
----------

Windows users, please see [For Windows
Users](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users). Mac
or Linux users, please see [For Mac/Linux
Users](https://hcc-docs.unl.edu/pages/viewpage.action?pageId=2851290).

**Logging into Crane, Tusker, or Sandhills**

{{< highlight bash >}}
ssh crane.unl.edu -l <username>
or
ssh tusker.unl.edu -l <username>
or
ssh sandhills.unl.edu -l <username>
{{< /highlight >}}
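
Files can be copied to the clusters with the same credentials, for
example with `scp`. A minimal sketch, where `<username>`, `<group>`, and
`input.dat` are placeholders and the `/work/<group>/<username>` path
layout is an assumption (on the cluster, `$WORK` points to your work
directory):

{{< highlight bash >}}
# copy a local file to your work directory on Crane
# <username>, <group>, and input.dat are placeholders
scp input.dat <username>@crane.unl.edu:/work/<group>/<username>/
{{< /highlight >}}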

Duo Security
------------

Duo two-factor authentication is **required** for access to HCC
resources. Registration and usage of Duo security can be found in this
section: [Setting up and using
Duo](https://hcc-docs.unl.edu/display/HCCDOC/Setting+up+and+using+Duo)

**Important Notes**

-   The Crane, Tusker, and Sandhills clusters are separate machines, but
    they are similar enough that a submission script written for one
    will generally work on the others.

-   The worker nodes cannot write to the `/home` directories. You must
    use your `/work` directory for processing in your job (a short
    staging sketch follows below). You may change to your work directory
    with the command:
{{< highlight bash >}}
$ cd $WORK
{{< /highlight >}}
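
Putting these notes together, a typical pattern is to stage input files
into `$WORK` and submit from there. A minimal sketch, where `input.dat`
and `job.sh` are placeholders and `sbatch` assumes the SLURM scheduler
used on these clusters:

{{< highlight bash >}}
# stage input data into /work, since worker nodes cannot write to /home
cd $WORK
cp $HOME/input.dat .

# submit the job script from the work directory
sbatch job.sh
{{< /highlight >}}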

Resources
---------

-   Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in
    452 nodes with 64GB RAM per node.

-   Tusker - consists of 106 AMD Interlagos-based nodes (6784 cores)
    interconnected with Mellanox QDR Infiniband.

-   Sandhills - has 1440 AMD cores housed in 42 nodes with 128GB per
    node and 2 nodes with 256GB per node.

-   <a href="http://hcc.unl.edu/red/index.php" class="external-link">Red </a>-
    This cluster is the resource for UNL's US CMS Tier-2 site.

    -   <a href="http://www.uscms.org/" class="external-link">CMS</a>
    -   <a href="http://www.opensciencegrid.org/" class="external-link">Open Science Grid</a>
    -   <a href="https://myosg.grid.iu.edu/" class="external-link">MyOSG</a>

-   [Glidein](The-Open-Science-Grid_11635314.html) - A gateway to
    running jobs on the OSG, a collection of computing resources across
    the US.
-   Anvil - HCC's cloud computing cluster based on OpenStack

Resource Capabilities
---------------------

| Cluster | Overview | Processors | RAM | Connection | Storage
| ------- | ---------| ---------- | --- | ---------- | ------
| **Crane**   | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
| **Tusker**  | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*1 Node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
| **Sandhills** | 108 node Production-mode LINUX cluster (condominium model) | 62 4-socket Opteron 6376 (2.3 Ghz, 64 cores/node)<br>44 4-socket Opteron 6128 (2.0 Ghz, 32 cores/node)<br>2 4-socket Opteron 6168 (1.9 Ghz, 48 cores/node) | 62 nodes @ 192GB<br>44 nodes @ 128GB<br>2 nodes @ 256GB | QDR Infiniband<br>Gigabit Ethernet | 175 TB shared Lustre storage<br>~1.5TB per node
| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and  Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |

You may only request up to the following amounts of RAM: <br>
\*62.5GB <br>
\*\*250GB <br>
\*\*\*500GB <br>
\*\*\*\*1000GB
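
These footnoted values are the per-node maximums a job may request;
asking for more will prevent the job from scheduling on that node type.
A minimal sketch of the corresponding memory request, assuming the
SLURM `--mem` option:

{{< highlight bash >}}
# largest allowed request on a standard 256GB Tusker node
#SBATCH --mem=250G

# for one of the 512GB large-memory nodes instead
##SBATCH --mem=500G
{{< /highlight >}}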