Commit bdb42076 authored by Caughlin Bohn

Merge branch 'swan-update' into 'master'

Removed Rhino and Added Swan

See merge request !312
parents 5f8fb9ef 097d2b2c
Showing changed files with 35 additions and 35 deletions
......@@ -36,14 +36,14 @@ are new to using HCC resources, Crane is the recommended cluster to use
initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
**Rhino**: Rhino is intended for large memory (RAM) computing needs.
Rhino has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or 256GB RAM per
node in the default partition. For extremely large RAM needs, there is also
a 'highmem' partition with 2 x 512GB and 2 x 1TB nodes.
**Swan**: Swan is intended for large memory (RAM) computing needs.
Swan has 2 Intel Xeon Gold 6348 CPUs (56 cores) per node, with 256GB RAM per
node in the default partition. For extremely large RAM needs, there is also
a 'highmem' partition with 2 x 2TB nodes.
**Important Notes**
- The Crane and Rhino clusters are separate. But, they are
- The Crane and Swan clusters are separate. However, they are
similar enough that a submission script written for one will generally work on
the other (excluding GPU resources and some combinations of
RAM/core requests).
......@@ -58,9 +58,9 @@ $ cd $WORK
Resources
---------
- ##### Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
- ##### Crane - Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
- ##### Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
- ##### Swan - HCC's newest Intel-based cluster, intended for large RAM computing needs.
- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
......@@ -74,7 +74,7 @@ Resource Capabilities
| Cluster | Overview | Processors | RAM\* | Connection | Storage
| ------- | ---------| ---------- | --- | ---------- | ------
| **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 62.5GB<br><br>79 nodes @ 250GB<br><br>37 nodes @ 500GB<br><br>4 nodes @ 1500GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
| **Rhino** | 110 node LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 187.5GB/250GB <br><br> 2 nodes @ 500GB<br><br> 2 nodes @ 994GB | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
| **Swan** | 168 node LINUX cluster | 168 Intel Xeon Gold 6348 CPU, 2 CPU/56 cores per node | 168 nodes @ 256GB <br><br> 2 nodes @ 2000GB | HDR100 Infiniband | 3.5TB local scratch per node <br><br> ~5200TB shared Lustre storage |
| **Red** | 344 node LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
......
......@@ -20,7 +20,7 @@ the following instructions to work.**
- [Tutorial Video](#tutorial-video)
Every HCC user has a password that is the same on all HCC machines
(Crane, Rhino, Anvil). This password needs to satisfy the HCC
(Crane, Swan, Anvil). This password needs to satisfy the HCC
password requirements.
### HCC password requirements
......
......@@ -7,7 +7,7 @@ weight = "52"
+++
HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Crane and Rhino.
HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligner indices, etc. on Crane and Swan.
In order to use these resources, the "**biodata**" module needs to be loaded first.
For instructions on loading a module, please check [Module Commands]({{< relref "/applications/modules/_index.md" >}}).
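As a minimal sketch (the module name `biodata` is taken from the text above; the variable `$BLAST` is purely illustrative, and the actual variable names are listed in the table referenced below):
{{< highlight bash >}}
# Load the biodata module so the database/index environment variables are defined
module load biodata
# Inspect one of the defined variables (illustrative name; see the table below for real ones)
echo $BLAST
{{< /highlight >}}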
......@@ -89,4 +89,4 @@ cp /scratch/blast_nucleotide.results .
The organisms and their corresponding environment variables for all genome and chromosome files, as well as indices, are shown in the table below.
{{< table url="http://rhino-head.unl.edu:8192/bio/data/json" >}}
{{< table url="http://swan-head.unl.edu:8192/bio/data/json" >}}
+++
title = "Available Software for Rhino"
description = "List of available software for rhino.unl.edu."
title = "Available Software for Swan"
description = "List of available software for swan.unl.edu."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
+++
......@@ -42,4 +42,4 @@ If you are using custom GPU Anaconda Environment, the only module you need to lo
`module load anaconda`
{{% /panel %}}
{{< table url="http://rhino-head.unl.edu:8192/lmod/spider/json" >}}
{{< table url="http://swan-head.unl.edu:8192/lmod/spider/json" >}}
......@@ -21,7 +21,7 @@ should be replaced by your HCC account username. If you do not have an
HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
or go to https://hcc.unl.edu/newusers.
To use the **Rhino** cluster, replace crane.unl.edu with with rhino.unl.edu.
To use the **Swan** cluster, replace crane.unl.edu with swan.unl.edu.
{{< figure src="/images/moba/session.png" height="450" >}}
Select OK. You will be asked to enter your password and to authenticate with duo.
......
......@@ -41,8 +41,8 @@ Once you have PuTTY installed, run the application and follow these steps:
{{% notice info %}}
**Note that the example below uses the `Crane` cluster.
Replace all instances of `crane` with `rhino` if
you want to connect to the `Rhino` cluster.
Replace all instances of `crane` with `swan` if
you want to connect to the `Swan` cluster.
{{% /notice %}}
1. On the first screen, type `crane.unl.edu` for Host Name, then click
......
......@@ -51,8 +51,8 @@ For example, to connect to the Crane cluster type the following in your terminal
$ ssh <username>@crane.unl.edu
{{< /highlight >}}
where `<username>` is replaced with your HCC account name. To use the **Rhino** cluster,
replace crane.unl.edu with rhino.unl.edu.
where `<username>` is replaced with your HCC account name. To use the **Swan** cluster,
replace crane.unl.edu with swan.unl.edu.
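For example, a connection to Swan would look like this (same `<username>` placeholder as above):
{{< highlight bash >}}
$ ssh <username>@swan.unl.edu
{{< /highlight >}}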
The first time you connect to one of our clusters from a computer, you will be prompted to verify the connection:
......
......@@ -4,7 +4,7 @@ description = "Guidelines for good HCC practices"
weight = "95"
+++
Crane and Rhino, our two high-performance clusters, are shared among all our users.
Crane and Swan, our two high-performance clusters, are shared among all our users.
Sometimes, a user's activities may negatively impact the clusters and other users.
To avoid this, we provide the following guidelines for good HCC practices.
......
......@@ -36,7 +36,7 @@ environmental variable (i.e. '`cd $COMMON`')
The common directory operates similarly to work and is mounted with
**read and write capability to worker nodes on all HCC Clusters**. This
means that any files stored in common can be accessed from Crane and Rhino, making this directory ideal for items that need to be
means that any files stored in common can be accessed from Crane and Swan, making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared
data files.
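As an illustrative sketch (file names here are hypothetical), a file staged into `$COMMON` from one cluster is then readable at the same path on the other:
{{< highlight bash >}}
# On Crane: stage a shared reference file into the common directory
cp reference_genome.fasta $COMMON/

# On Swan: the same file is visible under the same $COMMON path
ls -l $COMMON/reference_genome.fasta
{{< /highlight >}}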
......
......@@ -33,7 +33,7 @@ cost, please see the
The easiest and fastest way to access Attic is via Globus. You can
transfer files between your computer, our clusters ($HOME, $WORK, and $COMMON on
Crane or Rhino), and Attic. Here is a detailed tutorial on
Crane or Swan), and Attic. Here is a detailed tutorial on
how to set up and use [Globus Connect]({{< relref "/handling_data/data_transfer/globus_connect" >}}). For
Attic, use the Globus Endpoint **hcc\#attic**. Your Attic files are
located at `~`, which is a shortcut
......
......@@ -7,7 +7,7 @@ weight = 30
### Quick overview:
- Connected read/write to all HCC HPC cluster resources – you will see
the same files "in common" on any HCC cluster (i.e. Crane and Rhino).
the same files "in common" on any HCC cluster (i.e. Crane and Swan).
- 30 TB Per-group quota at no charge – larger quota available for
$105/TB/year
- No backups are made! Don't be silly! Precious data should still be
......
......@@ -45,9 +45,9 @@ data transfers should use CyberDuck instead.
6. After logging in, a new explorer window will appear and you will be in your personal directory. You can transfer files or directories by dragging and dropping them to or from your local machine into the window.
{{< figure src="/images/30442927.png" class="img-border" height="450" >}}
### Using the iRODS CLI tools from Crane/Rhino
### Using the iRODS CLI tools from Crane/Swan
The iRODS icommand tools are available on Crane and Rhino to use for data transfer to/from the clusters.
The iRODS icommand tools are available on Crane and Swan to use for data transfer to/from the clusters.
They first require creating a small json configuration file. Create a directory named `~/.irods` first by running
{{< highlight bash >}}
......
......@@ -34,7 +34,7 @@ To add an HCC machine, in the bookmarks pane click the "+" icon:
{{< figure src="/images/7274500.png" height="450" >}}
Ensure the type of connection is SFTP. Enter the hostname of the machine
you wish to connect to (crane.unl.edu, rhino.unl.edu) in the **Server**
you wish to connect to (crane.unl.edu, swan.unl.edu) in the **Server**
field, and your HCC username in the **Username** field. The
**Nickname** field is arbitrary, so enter whatever you prefer.
......
......@@ -8,7 +8,7 @@ weight = 5
a fast and robust file transfer service that allows users to quickly
move large amounts of data between computer clusters and even to and
from personal workstations. This service has been made available for
Crane, Rhino, and Attic. HCC users are encouraged to use Globus
Crane, Swan, and Attic. HCC users are encouraged to use Globus
Connect for their larger data transfers as an alternative to slower and
more error-prone methods such as scp and winSCP.
......@@ -16,7 +16,7 @@ more error-prone methods such as scp and winSCP. 
### Globus Connect Advantages
- Dedicated transfer servers on Crane, Rhino, and Attic allow
- Dedicated transfer servers on Crane, Swan, and Attic allow
large amounts of data to be transferred quickly between sites.
- A user can install Globus Connect Personal on his or her workstation
......
......@@ -4,13 +4,13 @@ description = "How to activate HCC endpoints on Globus"
weight = 20
+++
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`), Rhino, (`hcc#rhino`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`), Swan (`hcc#swan`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
1. [Sign in](https://app.globus.org) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
2. Find the endpoint you want by entering '`hcc#crane`', '`hcc#rhino`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
2. Find the endpoint you want by entering '`hcc#crane`', '`hcc#swan`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
......
......@@ -5,7 +5,7 @@ weight = 50
+++
If you would like another colleague or researcher to have access to your
data, you may create a shared endpoint on Crane, Rhino, or Attic. You can personally manage access to this endpoint and
data, you may create a shared endpoint on Crane, Swan, or Attic. You can personally manage access to this endpoint and
give access to anybody with a Globus account (whether or not
they have an HCC account). *Please use this feature responsibly by
sharing only what is necessary and granting access only to trusted
......
......@@ -7,7 +7,7 @@ weight = 30
To transfer files between HCC clusters, you will first need to
[activate]({{< relref "/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints" >}}) the
two endpoints you would like to use (the available endpoints
are: `hcc#crane` `hcc#rhino`, and `hcc#attic`). Once
are: `hcc#crane`, `hcc#swan`, and `hcc#attic`). Once
that has been completed, follow the steps below to begin transferring
files. (Note: You can also transfer files between an HCC endpoint and
any other Globus endpoint for which you have authorized access. That
......
......@@ -28,7 +28,7 @@ endpoints.
From your Globus account, select the 'File Manager' tab
from the left sidebar and enter the name of your new endpoint in the 'Collection' text box. Press 'Enter' and then
navigate to the appropriate directory. Select "Transfer or Sync to..." from the right sidebar (or select the "two panels"
icon from the top right corner) and Enter the second endpoint (for example: `hcc#crane`, `hcc#rhino`, or `hcc#attic`),
icon from the top right corner) and enter the second endpoint (for example: `hcc#crane`, `hcc#swan`, or `hcc#attic`),
type or navigate to the desired directory, and initiate the file transfer by clicking on the blue
arrow button.
{{< figure src="/images/PersonalTransfer.png" >}}
......
......@@ -112,8 +112,8 @@ command:
All transfers must take place between Globus endpoints. Even if you are
transferring from an endpoint that you are already connected to, that
endpoint must be activated in Globus. Here, we are transferring between
Crane and Rhino. We have activated the Crane endpoint and saved its
UUID to the variable `$tusker` as we did for `$crane` above.
Crane and Swan. We have activated the Swan endpoint and saved its
UUID to the variable `$swan` as we did for `$crane` above.
To transfer files, we use the command `globus transfer`. The format of
this command is `globus transfer <endpoint1>:<file_path>
......
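Following that format, a hedged sketch of a single-file transfer between the two activated endpoints, reusing the `$crane` and `$swan` UUID variables from the text above (the file paths are placeholders):
{{< highlight bash >}}
# Transfer one file from Crane to Swan using the saved endpoint UUIDs
$ globus transfer $crane:/path/on/crane/data.txt \
                  $swan:/path/on/swan/data.txt \
                  --label "crane-to-swan example"
{{< /highlight >}}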
......@@ -4,7 +4,7 @@ description = "How to transfer files directly from the transfer servers"
weight = 40
+++
Crane, Rhino, and Attic each have a dedicated transfer server with
Crane, Swan, and Attic each have a dedicated transfer server with
10 Gb/s connectivity that allows
for faster data transfers than the login nodes. With [Globus
Connect]({{< relref "globus_connect" >}}), users
......@@ -18,7 +18,7 @@ using these dedicated servers for data transfers:
Cluster | Transfer server
----------|----------------------
Crane | `crane-xfer.unl.edu`
Rhino | `rhino-xfer.unl.edu`
Swan | `swan-xfer.unl.edu`
Attic | `attic-xfer.unl.edu`
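If direct `scp`/`sftp` access to these servers is enabled (an assumption here, since the text above describes Globus Connect), a transfer through the Swan server would look roughly like this (file name and destination are placeholders):
{{< highlight bash >}}
# Copy a local file to your home directory on Swan via the dedicated transfer server
$ scp data.tar.gz <username>@swan-xfer.unl.edu:~/
{{< /highlight >}}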
{{% notice info %}}
......