diff --git a/content/_index.md b/content/_index.md index 188f97b7c332b36e31e42e70177aaf8c319da689..0e48f5f0849ed734db63eb35ebc9e23575b07eb6 100644 --- a/content/_index.md +++ b/content/_index.md @@ -30,13 +30,6 @@ Which Cluster to Use? are new to using HCC resources, Crane is the recommended cluster to use initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node. -If your job requires more than 36 cores per node or you need more than -512GB of memory, consider using Tusker instead. - -**Tusker**: Similar to Crane, Tusker is another cluster shared by all -campus users. It has 4 CPU/ 64 cores and 256GB RAM per node. Two nodes -have 1024GB RAM for very large memory jobs. So for jobs requiring more -than 36 cores per node or large memory, Tusker would be a better option. User Login ---------- @@ -44,12 +37,11 @@ User Login For Windows users, please refer to this link [For Windows Users]({{< relref "for_windows_users" >}}). For Mac or Linux users, please refer to this link [For Mac/Linux Users]({{< relref "for_maclinux_users">}}). -**Logging into Crane or Tusker** +**Logging into Crane** {{< highlight bash >}} ssh <username>@crane.unl.edu -or -ssh <username>@tusker.unl.edu + {{< /highlight >}} Duo Security @@ -60,10 +52,6 @@ resources. Registration and usage of Duo security can be found in this section: [Setting up and using Duo]({{< relref "setting_up_and_using_duo">}}) **Important Notes** - -- The Crane and Tusker clusters are separate. But, they are - similar enough that submission scripts on whichever one will work on - another, and vice versa. - The worker nodes cannot write to the `/home` directories. You must use your `/work` directory for processing in your job. You may @@ -77,8 +65,6 @@ Resources - ##### Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node. -- ##### Tusker - consists of 106 AMD Interlagos-based nodes (6784 cores) interconnected with Mellanox QDR Infiniband. - - ##### Red - This cluster is the resource for UNL's US CMS Tier-2 site. 
- [CMS](http://www.uscms.org/) @@ -95,7 +81,6 @@ Resource Capabilities | Cluster | Overview | Processors | RAM | Connection | Storage | ------- | ---------| ---------- | --- | ---------- | ------ | **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage -| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*2 Nodes with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch | | **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space | | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) | diff --git a/content/facilities.md b/content/facilities.md index ebf8f7efa70ce1212c6a976364e8ebe8720620a6..88fbabdf02b6d7e728c6787e47e71655722cc551 100644 --- a/content/facilities.md +++ b/content/facilities.md @@ -6,11 +6,13 @@ This document details the equipment resident in the Holland Computing Center (HC HCC has two primary locations directly interconnected by a pair of 10 Gbps fiber optic links (20 Gbps total). The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. One Brocade MLXe router and two Dell Z9264F-ON core switches in each location provide both high WAN bandwidth and Software Defined Networking (SDN) capability. The Schorr machine room connects to campus and Internet2/ESnet at 100 Gbps while the PKI machine room connects at 10 Gbps. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows. -HCC's resources at UNL include two distinct offerings: Sandhills and Red. Sandhills is a linux cluster dedicated to general campus usage with 5,472 compute cores interconnected by low-latency InfiniBand networking. 175 TB of Lustre storage is complemented by 50 TB of NFS storage and 3 TB of local scratch per node. +HCC's resources at UNL include two distinct offerings: Sandhills and Red. Sandhills is a linux cluster dedicated to general campus usage with 5,472 compute cores interconnected by low-latency InfiniBand networking. 175 TB of Lustre storage is complemented by 50 TB of NFS storage and 3 TB of local scratch per node. Tusker offers 3,712 cores interconnected with Mellanox QDR InfiniBand along with 523TB of Lustre storage. Each compute node is a Dell R815 server with at least 256 GB RAM and 4 Opteron 6272 (2.1 GHz) processors. 
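The login and storage guidance in the `_index.md` changes above (SSH to `crane.unl.edu`, Duo authentication, and working from `/work` because the worker nodes cannot write to `/home`) can be summarized in a short illustrative session. This is a sketch only; `<username>` and `<group>` are placeholders and the exact `/work` path layout is an assumption.
{{< highlight bash >}}
# Sketch only: log in to Crane, then work from $WORK rather than /home,
# since worker nodes cannot write to the /home directories.
ssh <username>@crane.unl.edu
cd $WORK                     # typically a path like /work/<group>/<username>
mkdir -p my_project && cd my_project
{{< /highlight >}}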
The largest machine on the Lincoln campus is Red, with 9,536 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 6.6 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid). -HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Tusker, Crane, Anvil, Attic, and Common storage. Tusker offers 3,712 cores interconnected with Mellanox QDR InfiniBand along with 523TB of Lustre storage. Each compute node is a Dell R815 server with at least 256 GB RAM and 4 Opteron 6272 (2.1 GHz) processors. Tusker and Sandhills are currently being retired and will be moved to the Walter Scott Engineering Center located in Lincoln consolidated into one, new Tusker cluster. +Tusker and Sandhills are currently decommissioned. These resources will be combined into a new cluster called Rhino which will be available at a future date. + +HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage. Crane debuted at 474 on the Top500 list with an HPL benchmark or 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and 100GB Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 21 GPU nodes with 57 NVIDIA GPUs in total which enables the most state-of-art research, from drug discovery to deep learning. @@ -36,7 +38,7 @@ These resources are detailed further below. * 175TB shared scratch storage (Lustre) -> /work * 3TB local scratch -# 1.2 Red +## 1.2 Red * USCMS Tier-2 resource, available opportunistically via the Open Science Grid * 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node) @@ -57,14 +59,12 @@ These resources are detailed further below. * 2x Dell S4810 switches * 2x Dell N3048 switches -# 1.3 Silo (backup mirror for Attic) +## 1.3 Silo (backup mirror for Attic) * 1 Mercury RM216 2U Rackmount Server 2 Xeon E5-2630 (12-core, 2.6GHz) * 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks -# 2. HCC at PKI Resources - -## 2.1 Tusker +## 1.4 Tusker * 58 PowerEdge R815 systems * 54x with 256 GB RAM, 2x with 512 GB RAM, 2x with 1024 GB RAM @@ -74,7 +74,9 @@ These resources are detailed further below. * 3x Dell Powerconnect 6248 switches * 523TB Lustre storage over InfiniBand -# 2.2 Crane +# 2. HCC at PKI Resources + +## 2.1 Crane * 452 Relion 2840e systems from Penguin * 452x with 64 GB RAM @@ -110,12 +112,12 @@ These resources are detailed further below. * 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz) * 2 Nvidia P100 GPUs -# 2.3 Attic +## 2.2 Attic * 1 Mercury RM216 2U Rackmount Server 2-socket Xeon E5-2630 (6-core, 2.6GHz) * 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks -# 2.4 Anvil +## 2.3 Anvil * 76 PowerEdge R630 systems * 76x with 256 GB RAM @@ -133,7 +135,7 @@ These resources are detailed further below. 
* 10 GbE networking
* 6x Dell S4048-ON switches
-# 2.5 Shared Common Storage
+## 2.4 Shared Common Storage
* Storage service providing 1.9PB usable capacity
* 6 SuperMicro 1028U-TNRTP+ systems
diff --git a/content/guides/handling_data/_index.md b/content/guides/handling_data/_index.md
index 418038d0b55710fdc6d7e964e1fc7b6ed83fe5ae..690e67ecc43bd834581e50b8075fd651b273de04 100644
--- a/content/guides/handling_data/_index.md
+++ b/content/guides/handling_data/_index.md
@@ -37,7 +37,7 @@ environmental variable (i.e. '`cd $COMMON`')
The common directory operates similarly to work and is mounted with **read and write capability to worker nodes all HCC Clusters**. This
-means that any files stored in common can be accessed from Crane and Tusker, making this directory ideal for items that need to be
+means that any files stored in common can be accessed from Crane, making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared data files.
diff --git a/content/guides/handling_data/data_for_unmc_users_only.md b/content/guides/handling_data/data_for_unmc_users_only.md
index e22b259214d1b06f86427fcc7dfe6b3576b26605..b386148c1fb09891377739e13d29d6935d160cd0 100644
--- a/content/guides/handling_data/data_for_unmc_users_only.md
+++ b/content/guides/handling_data/data_for_unmc_users_only.md
@@ -27,7 +27,7 @@ allowed to write files there.
For Windows, learn more about logging in and uploading files [here](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users).
-Using your uploaded files on Tusker or Crane.
+Using your uploaded files on Crane.
---------------------------------------------
Using your
diff --git a/content/guides/handling_data/globus_connect/_index.md b/content/guides/handling_data/globus_connect/_index.md
index 3047ae1fed8644052bb16c7cc5b6e659f44249a1..a47315beda543250bef35aee89243895ab46a5a9 100644
--- a/content/guides/handling_data/globus_connect/_index.md
+++ b/content/guides/handling_data/globus_connect/_index.md
@@ -7,7 +7,7 @@ description = "Globus Connect overview"
a fast and robust file transfer service that allows users to quickly move large amounts of data between computer clusters and even to and from personal workstations. This service has been made available for
-Tusker, Crane, and Attic. HCC users are encouraged to use Globus
+Crane and Attic. HCC users are encouraged to use Globus
Connect for their larger data transfers as an alternative to slower and more error-prone methods such as scp and winSCP.
@@ -15,7 +15,7 @@ more error-prone methods such as scp and winSCP.
### Globus Connect Advantages
-- Dedicated transfer servers on Tusker, Crane, and Attic allow
+- Dedicated transfer servers on Crane and Attic allow
large amounts of data to be transferred quickly between sites.
- A user can install Globus Connect Personal on his or her workstation
@@ -38,7 +38,7 @@ the <a href="https://www.globus.org/SignUp" class="external-link">Globus Connec
Accounts are free and grant users access to any Globus endpoint for which they are authorized. An endpoint is simply a file system to or from which a user transfers files. All HCC users are authorized to
-access their own /home, /work, and /common directories on Tusker and Crane via the Globus endpoints (named: `hcc#tusker` and `hcc#crane`). Those who have purchased Attic storage space can
+access their own /home, /work, and /common directories on Crane via the Globus endpoint (named: `hcc#crane`). Those who have purchased Attic storage space can
access their /attic directories via the Globus endpoint hcc\#attic. To initialize or activate the endpoint, users will be required to enter their HCC username, password, and Duo credentials for authentication.
diff --git a/content/guides/handling_data/globus_connect/activating_hcc_cluster_endpoints.md b/content/guides/handling_data/globus_connect/activating_hcc_cluster_endpoints.md
index eb49a04353b7a4dd556fbad70036938a054985fa..6a59bc9a3fd754565152411cd68ee6d19216daaf 100644
--- a/content/guides/handling_data/globus_connect/activating_hcc_cluster_endpoints.md
+++ b/content/guides/handling_data/globus_connect/activating_hcc_cluster_endpoints.md
@@ -4,13 +4,13 @@ description = "How to activate HCC endpoints on Globus"
weight = 20
+++
-You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Tusker (`hcc#tusker`), Crane (`hcc#crane`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
+You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`) and Attic (`hcc#attic`). Follow the instructions below to activate either of these endpoints and begin making transfers.
1. [Sign in](https://www.globus.org/SignIn) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
-2. Find the endpoint you want by entering '`hcc#tusker`', '`hcc#crane`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
+2. Find the endpoint you want by entering '`hcc#crane`' or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
diff --git a/content/guides/handling_data/globus_connect/file_sharing.md b/content/guides/handling_data/globus_connect/file_sharing.md
index 8402c209b9607b8a61a82c5134d1fe5f1421c804..05169a27df213122f1e7c02fe8b753fca1a5d18f 100644
--- a/content/guides/handling_data/globus_connect/file_sharing.md
+++ b/content/guides/handling_data/globus_connect/file_sharing.md
@@ -5,7 +5,7 @@ weight = 50
+++
If you would like another colleague or researcher to have access to your
-data, you may create a shared endpoint on Tusker, Crane, or Attic. You can personally manage access to this endpoint and
+data, you may create a shared endpoint on Crane or Attic. You can personally manage access to this endpoint and
give access to anybody with a Globus account (whether or not they have an HCC account). *Please use this feature responsibly by sharing only what is necessary and granting access only to trusted
@@ -20,7 +20,7 @@ writable shared endpoints in your `work` directory (or `/shared`).
1. Sign in to your Globus account, click on the 'Endpoints' tab and search for the endpoint that you will use to host your shared endpoint. For example, if you would like to share data in your
- Tusker `work` directory, search for the `hcc#tusker` endpoint. Once
+ Crane `work` directory, search for the `hcc#crane` endpoint.
Once you have found the endpoint, it will need to be activated if it has not been already (see [endpoint activation instructions here]({{< relref "activating_hcc_cluster_endpoints" >}})). diff --git a/content/guides/handling_data/globus_connect/file_transfers_between_endpoints.md b/content/guides/handling_data/globus_connect/file_transfers_between_endpoints.md index 3504f0c46bfeee1dce42b9cf313b0acf94de7444..32355e68744d367a6944ea8e72caf5c674699dc0 100644 --- a/content/guides/handling_data/globus_connect/file_transfers_between_endpoints.md +++ b/content/guides/handling_data/globus_connect/file_transfers_between_endpoints.md @@ -7,7 +7,7 @@ weight = 30 To transfer files between HCC clusters, you will first need to [activate]({{< relref "activating_hcc_cluster_endpoints" >}}) the two endpoints you would like to use (the available endpoints -are: `hcc#tusker,` `hcc#crane, and `hcc#attic)`. Once +are: `hcc#crane` and `hcc#attic`). Once that has been completed, follow the steps below to begin transferring files. (Note: You can also transfer files between an HCC endpoint and any other Globus endpoint for which you have authorized access. That @@ -30,7 +30,7 @@ purposes we use two HCC endpoints.) 2. Enter the names of the two endpoints you would like to use, or select from the drop-down menus (for - example, `hcc#tusker` and `hcc#crane`). Enter the + example, `hcc#attic` and `hcc#crane`). Enter the directory paths for both the source and destination (the 'from' and 'to' paths on the respective endpoints). Press 'Enter' to view files under these directories. Select the files or directories you would diff --git a/content/guides/handling_data/globus_connect/file_transfers_to_and_from_personal_workstations.md b/content/guides/handling_data/globus_connect/file_transfers_to_and_from_personal_workstations.md index b956af045a25d5916100c99cd030cab1d34af750..aad26730d30337c68887abdd001d9bf81974d3f4 100644 --- a/content/guides/handling_data/globus_connect/file_transfers_to_and_from_personal_workstations.md +++ b/content/guides/handling_data/globus_connect/file_transfers_to_and_from_personal_workstations.md @@ -28,7 +28,7 @@ endpoints. From your Globus account, select the 'File Manager' tab from the left sidebar and enter the name of your new endpoint the 'Collection' text box. Press 'Enter' and then navigate to the appropriate directory. Select "Transfer of Sync to.." from the right sidebar (or select the "two panels" - icon from the top right corner) and Enter the second endpoint (for example: `hcc#crane`, `hcc#tusker`, or `hcc#attic`), + icon from the top right corner) and Enter the second endpoint (for example: `hcc#crane`, or `hcc#attic`), type or navigate to the desired directory, and initiate the file transfer by clicking on the blue arrow button. {{< figure src="/images/PersonalTransfer.png" >}} diff --git a/content/guides/handling_data/high_speed_data_transfers.md b/content/guides/handling_data/high_speed_data_transfers.md index 95cce2a440ce9853445a855c682cc1c5f6b3f96d..82c7a11e7454aa2d5ac805563df4f67fefad9421 100644 --- a/content/guides/handling_data/high_speed_data_transfers.md +++ b/content/guides/handling_data/high_speed_data_transfers.md @@ -4,7 +4,7 @@ description = "How to transfer files directly from the transfer servers" weight = 10 +++ -Crane, Tusker, and Attic each have a dedicated transfer server with +Crane and Attic each have a dedicated transfer server with 10 Gb/s connectivity that allows for faster data transfers than the login nodes. 
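Since HCC recommends these dedicated servers for data transfers, a direct copy from a workstation might look like the sketch below; the assumption that plain `scp` is accepted by `crane-xfer.unl.edu`, along with the file and path names, is illustrative only (the server name appears in the table that follows).
{{< highlight bash >}}
# Sketch: push a large file to $WORK on Crane through the transfer server
# rather than the login node (assumes scp is permitted for direct transfers).
scp large_dataset.tar.gz <username>@crane-xfer.unl.edu:/work/<group>/<username>/
{{< /highlight >}}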
With [Globus Connect]({{< relref "globus_connect" >}}), users @@ -18,11 +18,10 @@ using these dedicated servers for data transfers: Cluster | Transfer server ----------|---------------------- Crane | `crane-xfer.unl.edu` -Tusker | `tusker-xfer.unl.edu` Attic | `attic-xfer.unl.edu` {{% notice info %}} Because the transfer servers are login-disabled, third-party transfers -between `crane-xfer`, `tusker-xfer,` and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}). +between `crane-xfer`, and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}). {{% /notice %}} diff --git a/content/guides/handling_data/using_attic.md b/content/guides/handling_data/using_attic.md index ae1b5cac483e8760bea73aeb338a2c80f187f21d..22757590c6775323875f3b051b05d50d89d83c84 100644 --- a/content/guides/handling_data/using_attic.md +++ b/content/guides/handling_data/using_attic.md @@ -33,7 +33,7 @@ cost, please see the The easiest and fastest way to access Attic is via Globus. You can transfer files between your computer, our clusters ($HOME, $WORK, and $COMMON on -Crane and Tusker), and Attic. Here is a detailed tutorial on +Crane), and Attic. Here is a detailed tutorial on how to set up and use [Globus Connect]({{< relref "globus_connect" >}}). For Attic, use the Globus Endpoint **hcc\#attic**. Your Attic files are located at `~, `which is a shortcut diff --git a/content/guides/running_applications/allinea_profiling_and_debugging/allinea_performance_reports/_index.md b/content/guides/running_applications/allinea_profiling_and_debugging/allinea_performance_reports/_index.md index 8a342ebf7bec73df2c06f754bccfc0ee63690120..aa90760502ae33aad065c3fd6c969ab593bf9bd2 100644 --- a/content/guides/running_applications/allinea_profiling_and_debugging/allinea_performance_reports/_index.md +++ b/content/guides/running_applications/allinea_profiling_and_debugging/allinea_performance_reports/_index.md @@ -26,11 +26,9 @@ Using Allinea Performance Reports on HCC ---------------------------------------- The Holland Computing Center owns **512 Allinea Performance Reports -licenses** that can be used to evaluate applications executed on Tusker -and Crane. +licenses** that can be used to evaluate applications executed on the clusters. In order to use Allinea Performance Reports on HCC, the appropriate -module needs to be loaded first. To load the module on Tusker or Crane, -use +module needs to be loaded first. 
To load the module on, use {{< highlight bash >}} module load allinea/5.0 @@ -57,7 +55,7 @@ application `hello_world`: {{% panel theme="info" header="perf-report example" %}} {{< highlight bash >}} -[<username>@login.tusker ~]$ perf-report ./hello-world +[<username>@login.crane ~]$ perf-report ./hello-world {{< /highlight >}} {{% /panel %}} @@ -71,7 +69,7 @@ to read from a file, you must use the `--input` option to the {{% panel theme="info" header="perf-report stdin redirection" %}} {{< highlight bash >}} -[<username>@login.tusker ~]$ perf-report --input=my_input.txt ./hello-world +[<username>@login.crane ~]$ perf-report --input=my_input.txt ./hello-world {{< /highlight >}} {{% /panel %}} @@ -81,7 +79,7 @@ More **perf-report** options can be seen by using: {{% panel theme="info" header="perf-report options" %}} {{< highlight bash >}} -[<username>@login.tusker ~]$ perf-report --help +[<username>@login.crane ~]$ perf-report --help {{< /highlight >}} {{% /panel %}} diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md index deb9548ce639c06043447c45cc82884cdff53932..3fbde63b1ca76c9f80c6c17ded9cb85aaf1d38e9 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md @@ -12,5 +12,5 @@ The following pages, [Create Local BLAST Database]({{<relref "create_local_blast ### Useful Information -In order to test the BLAST (blast/2.2) performance on Tusker, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below: +In order to test the BLAST (blast/2.2) performance on Crane, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below: {{< readfile file="/static/html/blast.html" >}} diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md index 13f38d3be773fd75dcd573748ea004c8bfa41d95..33f16752c511b85472b95b61067a8893d1cf0e0d 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md @@ -12,7 +12,7 @@ $ makeblastdb -in input_reads.fasta -dbtype [nucl|prot] -out input_reads_db where **input_reads.fasta** is the input file containing all sequences that need to be made into a database, and **dbtype** can be either `nucl` or `prot` depending on the type of the input file. 
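A minimal sketch of a Crane SLURM script wrapping the `makeblastdb` command above; the resource requests and the `blast/2.2` module version are assumptions, and the file names are placeholders.
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=blast_db          # illustrative values throughout
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00
#SBATCH --mem=8000
#SBATCH --error=blast_db.%J.err
#SBATCH --output=blast_db.%J.out

module load blast/2.2                # assumed module version
makeblastdb -in input_reads.fasta -dbtype nucl -out input_reads_db
{{< /highlight >}}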
-Simple example of how **makeblastdb** can be run on Tusker using SLURM script and nucleotide database is shown below: +Simple example of how **makeblastdb** can be run on Crane using SLURM script and nucleotide database is shown below: {{% panel header="`blast_db.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md index babb649362326e24e93e20d7a42a70545a34b837..4024fe76fff77a3f8b5e21ef732fad3c8297e642 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md @@ -28,7 +28,7 @@ $ blastn -help These BLAST alignment commands are multi-threaded, and therefore using the BLAST option **-num_threads <number_of_CPUs>** is recommended. -HCC hosts multiple BLAST databases and indices on both Tusker and Crane. In order to use these resources, the ["biodata" module] ({{<relref "/guides/running_applications/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases: +HCC hosts multiple BLAST databases and indices on Crane. In order to use these resources, the ["biodata" module] ({{<relref "/guides/running_applications/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases: - **16SMicrobial** - **env_nt** diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md index 19d29a5a97e97803437dd42467e181310c543a76..5f2c3fb4bb819ac548711b715d28921a01328580 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md @@ -21,7 +21,7 @@ $ blat {{< /highlight >}} -Running BLAT on Tusker with query file `input_reads.fasta` and database `db.fa` is shown below: +Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below: {{% panel header="`blat_alignment.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md index 8fe470b147f5f5e819a8f2a8b3b3f8fed0ff31e0..7a0670a6d438724d1dfa38f0c8a44ff26cc1c40e 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md @@ -25,7 +25,7 @@ manual] (http://bowtie-bio.sourceforge.net/manual.shtml). Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using the following flags: **-q** (fastq files), **-f** (fasta files), **-r** (raw one-sequence per line), or **-c** (sequences given on command line). 
-An example of how to run Bowtie alignment on Tusker with single-end fastq file and `8 CPUs` is shown below: +An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below: {{% panel header="`bowtie_alignment.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md index afc54f5056dc385a0e9e42be448fbc680aa1b138..2fcb2817b484dbf562e4f2f2321b54ea69af9414 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md @@ -31,7 +31,7 @@ $ bowtie2 -x index_prefix [-q|--qseq|-f|-r|-c] [-1 input_reads_pair_1.[fasta|fas where **index_prefix** is the generated index using the **bowtie2-build** command, and **options** are optional parameters that can be found in the [Bowtie2 manual] (http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml). Bowtie2 supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using one of the following flags: **-q** (fastq files), **--qseq** (Illumina's qseq format), **-f** (fasta files), **-r** (raw one sequence per line), or **-c** (sequences given on command line). -An example of how to run Bowtie2 local alignment on Tusker with paired-end fasta files and `8 CPUs` is shown below: +An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below: {{% panel header="`bowtie2_alignment.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md index 6a8e4839221e15820406417ee6dd920c97ec248e..02ad668349887eea242ae8f31fe3c84a1cf803b6 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md @@ -22,7 +22,7 @@ $ bwa mem index_prefix [input_reads.fastq|input_reads_pair_1.fastq input_reads_p where **index_prefix** is the index for the reference genome generated from **bwa index**, and **input_reads.fastq**, **input_reads_pair_1.fastq**, **input_reads_pair_2.fastq** are the input files of sequencing data that can be single-end or paired-end respectively. Additional **options** for **bwa mem** can be found in the BWA manual. 
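A minimal sketch of the indexing-plus-alignment flow for `bwa mem` described above; the module name and file names are placeholders, and `-t 8` simply mirrors the 8-CPU examples used on these pages.
{{< highlight bash >}}
# Sketch: build the index once, then align paired-end reads with 8 threads.
module load bwa                      # assumed module name
bwa index reference.fasta            # index files share the 'reference.fasta' prefix
bwa mem -t 8 reference.fasta input_reads_pair_1.fastq input_reads_pair_2.fastq > output_reads.sam
{{< /highlight >}}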
-Simple SLURM script for running **bwa mem** on Tusker with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below: +Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below: {{% panel header="`bwa_mem.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md index 8ede108b7ef01ad0312e57199b064c5a2d5db776..6028edfaac51fd8bfbc21ebb53b40fe01036618d 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md @@ -30,7 +30,7 @@ $ clustalo -h {{< /highlight >}} -Running Clustal Omega on Tusker with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below: +Running Clustal Omega on Crane with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below: {{% panel header="`clustal_omega.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md index 69ac4d03eb4609a8d6f06f160c461c874c874255..057674311f3ed98ed1e4f5b758dbebe03b2b82e1 100644 --- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md +++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md @@ -27,7 +27,7 @@ $ tophat2 -h Prior running TopHat/TopHat2, an index from the reference genome should be built using Bowtie/Bowtie2. Moreover, TopHat2 requires both, the index file and the reference file, to be in the same directory. If the reference file is not available,TopHat2 reconstructs it in its initial step using the index file. -An example of how to run TopHat2 on Tusker with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below: +An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below: {{% panel header="`tophat2_alignment.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md b/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md index 45f9d16e54f721e0f36814ae66aa650a57fdf13b..de6f22f2c4b5d550921fbf0079e561ed9d999e87 100644 --- a/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md +++ b/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md @@ -5,7 +5,7 @@ weight = "52" +++ -HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on both Tusker and Crane. +HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Crane. In order to use these resources, the "**biodata**" module needs to be loaded first. For how to load module, please check [Module Commands](#module_commands). 
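Because TopHat2 expects a Bowtie2 index (with the reference in the same directory), the preparation step can be sketched as below; the module names, thread count, and file names are illustrative assumptions.
{{< highlight bash >}}
# Sketch: build the Bowtie2 index next to the reference, then run TopHat2.
module load bowtie2 tophat           # assumed module names
bowtie2-build reference.fasta index_prefix
tophat2 -p 8 -o tophat_output index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq
{{< /highlight >}}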
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md index 78ec596ccf4578324d72f31ae89f959c3900ae8a..07a0275202f5beb0fef6cd85f1d80366fb70cdf3 100644 --- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md +++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md @@ -16,7 +16,7 @@ $ bamtools convert -format [bed|fasta|fastq|json|pileup|sam|yaml] -in input_alig where the option **-format** specifies the type of the output file, **input_alignments.bam** is the input BAM file, and **-out** defines the name and the type of the converted file. -Running BamTools **convert** on Tusker with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below: +Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below: {{% panel header="`bamtools_convert.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md index 467b7d8a41b26f08c3950c88770eea0487d48efb..0a9787f788f611c00ef2aa5a6f436e448ab63a4c 100644 --- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md +++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md @@ -14,7 +14,7 @@ $ samtools view input_alignments.[bam|sam] [options] -o output_alignments.[sam|b where **input_alignments.[bam|sam]** is the input file with the alignments in BAM/SAM format, and **output_alignments.[sam|bam]** file is the converted file into SAM or BAM format respectively. 
-Running **samtools view** on Tusker with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below: +Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below: {{% panel header="`samtools_view.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md index 836a3d233a69b8ea34388192d72350f069089efe..3cfaa501ed2314eb20c33f014b120bb576b528f4 100644 --- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md +++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md @@ -18,7 +18,7 @@ $ fastq-dump [options] input_reads.sra {{< /highlight >}} -An example of running **fastq-dump** on Tusker to convert SRA file containing paired-end reads is: +An example of running **fastq-dump** on Crane to convert SRA file containing paired-end reads is: {{% panel header="`sratoolkit.submit`"%}} {{< highlight bash >}} #!/bin/sh diff --git a/content/guides/running_applications/dmtcp_checkpointing.md b/content/guides/running_applications/dmtcp_checkpointing.md index 1583a4d6ed62f11b254905db281ffe28711c077c..5fca1a4c7816922840533973167b677822aa44a1 100644 --- a/content/guides/running_applications/dmtcp_checkpointing.md +++ b/content/guides/running_applications/dmtcp_checkpointing.md @@ -15,7 +15,7 @@ DMTCP are OpenMP, MATLAB, Python, Perl, MySQL, bash, gdb, X-Windows etc. DMTCP provides support for several resource managers, including SLURM, the resource manager used in HCC. 
-The DMTCP module is available both on
-Tusker and Crane, and is enabled by typing:
+The DMTCP module is available on
+Crane, and is enabled by typing:
{{< highlight bash >}}
module load dmtcp
@@ -24,7 +24,7 @@ After the module is loaded, the first step is to run the command:
{{< highlight bash >}}
-[<username>@login.tusker ~]$ dmtcp_launch --new-coordinator --rm --interval <interval_time_seconds> <your_command>
+[<username>@login.crane ~]$ dmtcp_launch --new-coordinator --rm --interval <interval_time_seconds> <your_command>
{{< /highlight >}}
where `--rm` option enables SLURM support,
@@ -36,7 +36,7 @@ Beside the general options shown above, more `dmtcp_launch` options can be seen by using:
{{< highlight bash >}}
-[<username>@login.tusker ~]$ dmtcp_launch --help
+[<username>@login.crane ~]$ dmtcp_launch --help
{{< /highlight >}}
`dmtcp_launch` creates few files that are used to resume the
@@ -62,7 +62,7 @@ will keep running with the options defined in the initial
Simple example of using DMTCP with [BLAST]({{< relref "/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment" >}})
-on Tusker is shown below:
+on Crane is shown below:
{{% panel theme="info" header="dmtcp_blastx.submit" %}}
{{< highlight batch >}}
diff --git a/content/guides/running_applications/fortran_c_on_hcc.md b/content/guides/running_applications/fortran_c_on_hcc.md
index 8a5ed998da1611224a75dece47144e7c3c7b2f38..1ed6a62ac0fa388134f3596cff1b4e340946d33a 100644
--- a/content/guides/running_applications/fortran_c_on_hcc.md
+++ b/content/guides/running_applications/fortran_c_on_hcc.md
@@ -8,7 +8,7 @@ This quick start demonstrates how to implement a Fortran/C program on
HCC supercomputers. The sample codes and submit scripts can be downloaded from [serial_dir.zip](/attachments/serial_dir.zip).
-#### Login to a HCC Cluster (Tusker or Crane)
+#### Login to a HCC Cluster
Log in to a HCC cluster through PuTTY ([For Windows Users]({{< relref "/quickstarts/connecting/for_windows_users">}})) or Terminal ([For Mac/Linux Users]({{< relref "/quickstarts/connecting/for_maclinux_users">}})) and make a subdirectory called `serial_dir` under the `$WORK` directory.
diff --git a/content/guides/running_applications/running_gaussian_at_hcc.md b/content/guides/running_applications/running_gaussian_at_hcc.md
index 5e7db26553039bbdf91f2ec6d937ee9d27014540..b3d5bf65a32cd646c71849b9d9ccd3fbd6cf8a77 100644
--- a/content/guides/running_applications/running_gaussian_at_hcc.md
+++ b/content/guides/running_applications/running_gaussian_at_hcc.md
@@ -21,8 +21,7 @@ of a **g09** license. For access, contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu) and include your HCC username. After your account has been added to the
-group "*gauss*", here are four simple steps to run Gaussian 09 on
-Tusker and Crane:
+group "*gauss*", here are four simple steps to run Gaussian 09 on Crane:
**Step 1:** Copy **g09** sample input file and SLURM script to your "g09" test directory on the `/work` filesystem:
diff --git a/content/guides/running_applications/running_theano.md b/content/guides/running_applications/running_theano.md
index e1c873e3ad0a1151707d66999766ebf6827482f9..b493a42ef57179c26db18cf3735247ce833fc575 100644
--- a/content/guides/running_applications/running_theano.md
+++ b/content/guides/running_applications/running_theano.md
@@ -3,8 +3,7 @@ title = "Running Theano"
description = "How to run the Theano on HCC resources."
+++
-Theano is available on HCC resources via the modules system.
CPU -versions are available on Tusker; both CPU and GPU +Theano is available on HCC resources via the modules system. Both CPU and GPU versions are available on Crane. Additionally, installs for both Python 2.7 and 3.6 are provided. diff --git a/content/guides/submitting_jobs/job_dependencies.md b/content/guides/submitting_jobs/job_dependencies.md index 488c03f95e3f7dd0f2cfe677d3f4c77f66adf531..98c96f2d3f00c797cc6cff15c339134cab1c5559 100644 --- a/content/guides/submitting_jobs/job_dependencies.md +++ b/content/guides/submitting_jobs/job_dependencies.md @@ -103,7 +103,7 @@ To start the workflow, submit Job A first: {{% panel theme="info" header="Submit Job A" %}} {{< highlight batch >}} -[demo01@login.tusker demo01]$ sbatch JobA.submit +[demo01@login.crane demo01]$ sbatch JobA.submit Submitted batch job 666898 {{< /highlight >}} {{% /panel %}} @@ -113,9 +113,9 @@ dependency: {{% panel theme="info" header="Submit Jobs B and C" %}} {{< highlight batch >}} -[demo01@login.tusker demo01]$ sbatch -d afterok:666898 JobB.submit +[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobB.submit Submitted batch job 666899 -[demo01@login.tusker demo01]$ sbatch -d afterok:666898 JobC.submit +[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobC.submit Submitted batch job 666900 {{< /highlight >}} {{% /panel %}} @@ -124,7 +124,7 @@ Finally, submit Job D as depending on both jobs B and C: {{% panel theme="info" header="Submit Job D" %}} {{< highlight batch >}} -[demo01@login.tusker demo01]$ sbatch -d afterok:666899:666900 JobD.submit +[demo01@login.crane demo01]$ sbatch -d afterok:666899:666900 JobD.submit Submitted batch job 666901 {{< /highlight >}} {{% /panel %}} @@ -135,7 +135,7 @@ of the dependency. {{% panel theme="info" header="Squeue Output" %}} {{< highlight batch >}} -[demo01@login.tusker demo01]$ squeue -u demo01 +[demo01@login.crane demo01]$ squeue -u demo01 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) 666899 batch JobB demo01 PD 0:00 1 (Dependency) 666900 batch JobC demo01 PD 0:00 1 (Dependency) diff --git a/content/guides/submitting_jobs/partitions/_index.md b/content/guides/submitting_jobs/partitions/_index.md index fa725291962a98b38471505d7ec018620f715aa1..e23dc4683b9cc9446662bbb7f5660d9a3dc5efae 100644 --- a/content/guides/submitting_jobs/partitions/_index.md +++ b/content/guides/submitting_jobs/partitions/_index.md @@ -1,21 +1,17 @@ +++ title = "Partitions" -description = "Listing of partitions on Tusker and Crane." +description = "Listing of partitions on Crane." scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"] css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"] +++ -Partitions are used in Crane and Tusker to distinguish different +Partitions are used in Crane to distinguish different resources. You can view the partitions with the command `sinfo`. 
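For the partition listing mentioned above, `sinfo` also accepts an output format string to show just the most relevant columns; the format chosen here is only one possibility.
{{< highlight bash >}}
# Sketch: show each partition with its time limit, node count, and CPUs per node.
sinfo -o "%P %l %D %c"
{{< /highlight >}}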
### Crane: [Full list for Crane]({{< relref "crane_available_partitions" >}}) -### Tusker: - -[Full list for Tusker]({{< relref "tusker_available_partitions" >}}) - #### Priority for short jobs To run short jobs for testing and development work, a job can specify a @@ -37,7 +33,7 @@ priority so it will run as soon as possible. Overall limitations of maximum job wall time. CPUs, etc. are set for all jobs with the default setting (when thea "–qos=" section is omitted) -and "short" jobs (described as above) on Tusker and Crane. +and "short" jobs (described as above) on Crane. The limitations are shown in the following form. | | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User | diff --git a/content/guides/submitting_jobs/partitions/tusker_available_partitions.md b/content/guides/submitting_jobs/partitions/tusker_available_partitions.md deleted file mode 100644 index 810f3793660ea0596b0e9ce1ad49d92feec5e68f..0000000000000000000000000000000000000000 --- a/content/guides/submitting_jobs/partitions/tusker_available_partitions.md +++ /dev/null @@ -1,12 +0,0 @@ -+++ -title = "Available Partitions for Tusker" -description = "List of available partitions for tusker.unl.edu." -scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"] -css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"] -+++ - -### Tusker: - -{{< table url="http://tusker-head.unl.edu:8192/slurm/partitions/json" >}} - -Two nodes have 512GB of memory instead of 256GB (Max Request = 500GB), and two have 1024GB of memory (Max Request = 1000GB). diff --git a/content/guides/submitting_jobs/submitting_r_jobs.md b/content/guides/submitting_jobs/submitting_r_jobs.md index ffbe293bd00bf205a56ba9237777acf825b2f380..77096d3b92011ec5ab4c9b2ca082482f7e1b363c 100644 --- a/content/guides/submitting_jobs/submitting_r_jobs.md +++ b/content/guides/submitting_jobs/submitting_r_jobs.md @@ -192,10 +192,7 @@ mpirun -n 1 R CMD BATCH Rmpi.R {{% /panel %}} When you run Rmpi job on Crane, please use the line `export -OMPI_MCA_mtl=^psm` in your submit script. On the other hand, if you -run Rmpi job on Tusker, you **do not need** to add this line. This is -because of the different Infiniband cards Tusker and Crane use. -Regardless of how may cores your job uses, the Rmpi package should +OMPI_MCA_mtl=^psm` in your submit script. Regardless of how may cores your job uses, the Rmpi package should always be run with `mpirun -n 1` because it spawns additional processes dynamically. diff --git a/content/osg/a_simple_example_of_submitting_an_htcondor_job.md b/content/osg/a_simple_example_of_submitting_an_htcondor_job.md index 9398f48c5eb20ffd3b375f277cb475e7469f3f27..3a68eb868061b30719dc4023c6cb46a75f2080cf 100644 --- a/content/osg/a_simple_example_of_submitting_an_htcondor_job.md +++ b/content/osg/a_simple_example_of_submitting_an_htcondor_job.md @@ -5,7 +5,7 @@ description = "A simple example of submitting an HTCondor job." This page describes a complete example of submitting an HTCondor job. -1. 
SSH to Tusker or Crane +1. SSH to Crane {{% panel theme="info" header="ssh command" %}} [apple@localhost]ssh apple@crane.unl.edu diff --git a/content/osg/how_to_submit_an_osg_job_with_htcondor.md b/content/osg/how_to_submit_an_osg_job_with_htcondor.md index 2e962d0e9a4692f5108754ebd574aa20a33307fe..ad535310724b5260923fb01ed25e7c83f40a72dd 100644 --- a/content/osg/how_to_submit_an_osg_job_with_htcondor.md +++ b/content/osg/how_to_submit_an_osg_job_with_htcondor.md @@ -3,7 +3,7 @@ title = "How to submit an OSG job with HTCondor" description = "How to submit an OSG job with HTCondor" +++ -{{% notice info%}}Jobs can be submitted to the OSG from Crane or Tusker, so +{{% notice info%}}Jobs can be submitted to the OSG from Crane, so there is no need to logon to a different submit host or get a grid certificate! {{% /notice %}} @@ -15,7 +15,7 @@ project provides software to schedule individual applications, workflows, and for sites to manage resources. It is designed to enable High Throughput Computing (HTC) on large collections of distributed resources for users and serves as the job scheduler used on the OSG. - Jobs are submitted from either the Crane or Tusker login nodes to the + Jobs are submitted from the Crane login node to the OSG using an HTCondor submission script. For those who are used to submitting jobs with SLURM, there are a few key differences to be aware of: diff --git a/content/quickstarts/connecting/for_maclinux_users.md b/content/quickstarts/connecting/for_maclinux_users.md index a808f71056e0e554bad1eb8284b90d6fca5e144c..1cb4f2f01bd816ae5cd4daaa91d1d83b43742302 100644 --- a/content/quickstarts/connecting/for_maclinux_users.md +++ b/content/quickstarts/connecting/for_maclinux_users.md @@ -25,8 +25,8 @@ Access to HCC Supercomputers For Mac/Linux users, use the system program Terminal to access to the HCC supercomputers. In the Terminal prompt, -type `ssh <username>@tusker.unl.edu` and the corresponding password -to get access to the HCC cluster **Tusker**. Note that <username> +type `ssh <username>@crane.unl.edu` and the corresponding password +to get access to the HCC cluster **Crane**. Note that <username> should be replaced by your HCC account username. If you do not have a HCC account, please contact a HCC specialist ({{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu)) diff --git a/content/quickstarts/connecting/for_windows_users.md b/content/quickstarts/connecting/for_windows_users.md index 508fdfaec8af11846ad6c24d2ff98df18e613658..fb3f1a78c79b221d42d712b8d652eabd5705f642 100644 --- a/content/quickstarts/connecting/for_windows_users.md +++ b/content/quickstarts/connecting/for_windows_users.md @@ -30,16 +30,16 @@ Users]({{< relref "for_maclinux_users" >}}). -------------- For Windows 10 users, use the Command Prompt, accessed by entering `cmd` in the start menu, to access to the HCC supercomputers. In the Command Prompt, -type `ssh <username>@tusker.unl.edu` and the corresponding password -to get access to the HCC cluster **Tusker**. Note that <username> +type `ssh <username>@crane.unl.edu` and the corresponding password +to get access to the HCC cluster **Crane**. Note that <username> should be replaced by your HCC account username. If you do not have a HCC account, please contact a HCC specialist ({{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu)) or go to http://hcc.unl.edu/newusers. -To use the **Crane** cluster, replace tusker.unl.edu with crane.unl.edu. 
+ {{< highlight bash >}} -C:\> ssh <username>@tusker.unl.edu +C:\> ssh <username>@crane.unl.edu C:\> <password> {{< /highlight >}} @@ -56,7 +56,7 @@ or [Direct Link](https://the.earth.li/~sgtatham/putty/latest/w32/putty.exe) Here we use the HCC cluster **Tusker** for demonstration. To use the -**Crane** or cluster, replace `tusker.unl.edu` with `crane.unl.edu`. +**Crane** cluster, replace `tusker.unl.edu` with `crane.unl.edu`. 1. On the first screen, type `tusker.unl.edu` for Host Name, then click **Open**. diff --git a/content/quickstarts/connecting/how_to_change_your_password.md b/content/quickstarts/connecting/how_to_change_your_password.md index c1b83a228671a06a562fee6c005b9e4f14bad286..5ca8b8c2f560def011638d4c9f550c66440b8fe4 100644 --- a/content/quickstarts/connecting/how_to_change_your_password.md +++ b/content/quickstarts/connecting/how_to_change_your_password.md @@ -20,7 +20,7 @@ the following instructions to work.** - [Tutorial Video](#tutorial-video) Every HCC user has a password that is same on all HCC machines -(Tusker, Crane, Anvil). This password needs to satisfy the HCC +(Crane, Anvil). This password needs to satisfy the HCC password requirements. ### HCC password requirements @@ -47,7 +47,7 @@ to change it: #### Change your password via the command line To change a current or temporary password, the user needs to login to -any HCC cluster (Crane or Tusker) and use the ***passwd*** command: +any HCC cluster and use the ***passwd*** command: **Change HCC password** diff --git a/content/quickstarts/submitting_jobs.md b/content/quickstarts/submitting_jobs.md index e8ed451b3ab18d8f58b653dd6f0612a6caa5089b..1da16d4708fd05f7c13c5837a89ffe09ad578bd2 100644 --- a/content/quickstarts/submitting_jobs.md +++ b/content/quickstarts/submitting_jobs.md @@ -4,9 +4,9 @@ description = "How to submit jobs to HCC resources" weight = "10" +++ -Crane and Tusker are managed by +Crane is managed by the [SLURM](https://slurm.schedmd.com) resource manager. -In order to run processing on Crane or Tusker, you +In order to run processing on Crane, you must create a SLURM script that will run your processing. After submitting the job, SLURM will schedule your processing on an available worker node. @@ -81,10 +81,7 @@ sleep 60 - **mem** Specify the real memory required per node in MegaBytes. If you exceed this limit, your job will be stopped. Note that for you - should ask for less memory than each node actually has. For - instance, Tusker has 1TB, 512GB and 256GB of RAM per node. You may - only request 1000GB of RAM for the 1TB node, 500GB of RAM for the - 512GB nodes, and 250GB of RAM for the 256GB nodes. For Crane, the + should ask for less memory than each node actually has. For Crane, the max is 500GB. - **job-name** The name of the job. Will be reported in the job listing. 
diff --git a/static/images/3178523.png b/static/images/3178523.png index e3ec1af23d328e75a58c961576ef1b0df755e071..3b322bc36c77b8a8795e40d379d4ba1497550cf6 100644 Binary files a/static/images/3178523.png and b/static/images/3178523.png differ diff --git a/static/images/3178524.png b/static/images/3178524.png index 601b495b045101ac0d6c842397349c3ecc2394e1..02209f377da231093f58a408697050f261d43f91 100644 Binary files a/static/images/3178524.png and b/static/images/3178524.png differ diff --git a/static/images/3178530.png b/static/images/3178530.png index 1f49096ac846d05e9269ccd3f8983785c78584e3..fb672eee115fba31525a374e870145164df25447 100644 Binary files a/static/images/3178530.png and b/static/images/3178530.png differ diff --git a/static/images/3178531.png b/static/images/3178531.png index 4e50bff8ea691d312720da7b98ba8432c557b7a8..38b5108283aae36b8baef176eb6b5414a478b499 100644 Binary files a/static/images/3178531.png and b/static/images/3178531.png differ diff --git a/static/images/3178532.png b/static/images/3178532.png index 2ac4bd78edf58de43b63ef8130fb7a2045b4def9..e46e3236ccf6b8c61c0328a07a6cf9c00e3847be 100644 Binary files a/static/images/3178532.png and b/static/images/3178532.png differ diff --git a/static/images/3178533.png b/static/images/3178533.png index 94d388b78283e330f4e78469b48bfbebc23c37db..4a0e0d14abe24487b549986521fc7c0167d69c9e 100644 Binary files a/static/images/3178533.png and b/static/images/3178533.png differ diff --git a/static/images/3178539.png b/static/images/3178539.png index db39f83729efe8da1f145525c7834052d6361f39..c48e3b07a6ba20d15008b3bfa9c6bd3637320600 100644 Binary files a/static/images/3178539.png and b/static/images/3178539.png differ diff --git a/static/images/8127261.png b/static/images/8127261.png index 3f28b1e9b6ec35c2110e5c759ed1b8795db6de9b..31dad783fcdf8308a7674ba998c2894c24b18c43 100644 Binary files a/static/images/8127261.png and b/static/images/8127261.png differ diff --git a/static/images/8127262.png b/static/images/8127262.png index 33fb01433160826f2724350b41c1a05b92c436e4..e5784e39cba3c53537a04b64983da6f185da71f8 100644 Binary files a/static/images/8127262.png and b/static/images/8127262.png differ diff --git a/static/images/8127263.png b/static/images/8127263.png index b309882d5a0386ed2904de4896331e701c8eed82..865161af446e2a5dd49966c94311650122a32090 100644 Binary files a/static/images/8127263.png and b/static/images/8127263.png differ diff --git a/static/images/8127264.png b/static/images/8127264.png index d9749e24045d24b92a581a273599a75282bd3a70..6f2a97c7fd2ab21322316a88f763dfc01d61d879 100644 Binary files a/static/images/8127264.png and b/static/images/8127264.png differ diff --git a/static/images/8127266.png b/static/images/8127266.png index 16e48bf591564b5eb956feb8f3d03844fa6a7ba5..16185d575a234cb2ffa813f705ae3736bebf313f 100644 Binary files a/static/images/8127266.png and b/static/images/8127266.png differ diff --git a/static/images/8127268.png b/static/images/8127268.png index 3d2e85e39140a3708e7183f0781564ffe7356abe..3bf7fbb88c44ddb7019ed734fbc09575244064f4 100644 Binary files a/static/images/8127268.png and b/static/images/8127268.png differ