Commit 4e4cb1ef authored by Adam Caprez

Merge branch 'remove-sh-refs' into 'master'

Remove Sandhills references from docs.

See merge request !33
parents 3992f8a8 2337bf69
@@ -48,11 +48,9 @@ Users](https://hcc-docs.unl.edu/pages/viewpage.action?pageId=2851290).
 **Logging into Crane or Tusker**
 {{< highlight bash >}}
-ssh crane.unl.edu -l <username>
+ssh <username>@crane.unl.edu
 or
-ssh tusker.unl.edu -l <username>
-or
-ssh sandhills.unl.edu -l <username>
+ssh <username>@tusker.unl.edu
 {{< /highlight >}}
 Duo Security
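For reference, a filled-in version of the new login form, using the hypothetical username `jdoe` (Duo then prompts for a second factor, per the Duo section below):

{{< highlight bash >}}
# log in to Crane as the (hypothetical) user jdoe
ssh jdoe@crane.unl.edu
{{< /highlight >}}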
@@ -65,7 +63,7 @@ Duo](https://hcc-docs.unl.edu/display/HCCDOC/Setting+up+and+using+Duo)
 **Important Notes**
-  - The Crane, Tusker and Sandhills clusters are separate. But, they are
+  - The Crane and Tusker clusters are separate, but they are
    similar enough that submission scripts written for one will work on
    the other.
@@ -100,7 +98,6 @@ Resource Capabilities
 | ------- | --------- | ---------- | --- | ---------- | ------
 | **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br><br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
 | **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 nodes with 512GB per node<br>\*\*\*\*1 node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
-| **Sandhills** | 108 node Production-mode LINUX cluster (condominium model) | 62 4-socket Opteron 6376 (2.3 GHz, 64 cores/node)<br>44 4-socket Opteron 6128 (2.0 GHz, 32 cores/node)<br>2 4-socket Opteron 6168 (1.9 GHz, 48 cores/node) | 62 nodes @ 192GB<br>44 nodes @ 128GB<br>2 nodes @ 256GB | QDR Infiniband<br>Gigabit Ethernet | 175 TB shared Lustre storage<br>~1.5TB per node
 | **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum; actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
 | **Anvil** | 76 Compute nodes (partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes, OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
@@ -22,7 +22,7 @@ For access, contact us at
 {{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
 and include your HCC username. After your account has been added to the
 group "*gauss*", here are four simple steps to run Gaussian 09 on
-Sandhills, Tusker, and Crane:
+Tusker and Crane:
 **Step 1:** Copy the **g09** sample input file and SLURM script to your
 "g09" test directory on the `/work` filesystem:
@@ -4,9 +4,9 @@ description = "How to submit jobs to HCC resources"
 weight = "10"
 +++
-Crane, Sandhills and Tusker are managed by
+Crane and Tusker are managed by
 the [SLURM](https://slurm.schedmd.com) resource manager.
-In order to run processing on Crane, Sandhills or Tusker, you
+In order to run processing on Crane or Tusker, you
 must create a SLURM script that will run your processing. After
 submitting the job, SLURM will schedule your processing on an available
 worker node.
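As a minimal sketch of such a script (the job name, wall time, and memory values below are arbitrary placeholders):

{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=example       # arbitrary job name
#SBATCH --ntasks=1               # run a single task
#SBATCH --time=00:10:00          # wall-time limit (hh:mm:ss)
#SBATCH --mem-per-cpu=1024       # memory per CPU core, in MB

# the actual processing to run on the worker node
echo "Hello from $(hostname)"
{{< /highlight >}}

Submit it with `sbatch <scriptname>` and monitor it with `squeue -u <username>`.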
@@ -28,8 +28,7 @@ Access to HCC Supercomputers
 -------------------------------
 Here we use the HCC cluster **Tusker** for demonstration. To use the
-**Crane** or **Sandhills** clusters, replace tusker.unl.edu with
-crane.unl.edu or sandhills.unl.edu.
+**Crane** cluster, replace `tusker.unl.edu` with `crane.unl.edu`.
 1. On the first screen, type `tusker.unl.edu` for Host Name, then click
    **Open**.
@@ -88,8 +87,8 @@ and the HCC supercomputers through a graphical user interface (GUI).
 Download and install the third-party application **WinSCP**
 to connect the file systems between your personal computer and the HCC supercomputers.
 Below is a step-by-step installation guide. Here we use the HCC cluster **Tusker**
-for demonstration. To use the **Sandhills** cluster, replace `tusker.unl.edu`
-with `sandhills.unl.edu`.
+for demonstration. To use the **Crane** cluster, replace `tusker.unl.edu`
+with `crane.unl.edu`.
 1. On the first screen, type `tusker.unl.edu` for Host name, enter your
    HCC account username and password for User name and Password. Then
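For users who prefer the command line, the same transfer can be done with `scp`; the file name below is a hypothetical example:

{{< highlight bash >}}
# upload a local file to your home directory on Tusker
scp input.txt <username>@tusker.unl.edu:~/
# download it back to the current local directory
scp <username>@tusker.unl.edu:~/input.txt .
{{< /highlight >}}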