diff --git a/content/OSG/a_simple_example_of_submitting_an_htcondor_job.md b/content/OSG/a_simple_example_of_submitting_an_htcondor_job.md index eb01695415a5bc8d03860645b260414c9f7fa235..4aa879361dcd3384b442fa0f178635ddbe0a738a 100644 --- a/content/OSG/a_simple_example_of_submitting_an_htcondor_job.md +++ b/content/OSG/a_simple_example_of_submitting_an_htcondor_job.md @@ -6,21 +6,21 @@ weight=30 This page describes a complete example of submitting an HTCondor job. -1. SSH to Crane +1. SSH to Swan {{% panel theme="info" header="ssh command" %}} - [apple@localhost]ssh apple@crane.unl.edu + [apple@localhost]ssh apple@swan.unl.edu {{% /panel %}} {{% panel theme="info" header="output" %}} - [apple@login.crane~]$ + [apple@login.swan~]$ {{% /panel %}} 2. Write a simple python program in a file "hello.py" that we wish to run using HTCondor {{% panel theme="info" header="edit a python code named 'hello.py'" %}} - [apple@login.crane ~]$ vim hello.py + [apple@login.swan ~]$ vim hello.py {{% /panel %}} Then in the edit window, please input the code below: @@ -64,13 +64,13 @@ This page describes a complete example of submitting an HTCondor job. above ) {{% panel theme="info" header="create output directory" %}} - [apple@login.crane ~]$ mkdir OUTPUT + [apple@login.swan ~]$ mkdir OUTPUT {{% /panel %}} 5. Submit your job {{% panel theme="info" header="condor_submit" %}} - [apple@login.crane ~]$ condor_submit hello.submit + [apple@login.swan ~]$ condor_submit hello.submit {{% /panel %}} {{% panel theme="info" header="Output of submit" %}} @@ -83,11 +83,11 @@ This page describes a complete example of submitting an HTCondor job. 6. Check status of `condor_q` {{% panel theme="info" header="condor_q" %}} - [apple@login.crane ~]$ condor_q + [apple@login.swan ~]$ condor_q {{% /panel %}} {{% panel theme="info" header="Output of `condor_q`" %}} - -- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?... + -- Schedd: login.swan.hcc.unl.edu : <129.93.227.113:9619?... ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD 720587.0 logan 12/15 10:48 33+14:41:17 H 0 0.0 continuous.cron 20 720588.0 logan 12/15 10:48 200+02:40:08 H 0 0.0 checkprogress.cron diff --git a/content/OSG/how_to_submit_an_osg_job_with_htcondor.md b/content/OSG/how_to_submit_an_osg_job_with_htcondor.md index 0bea23db3ae6e14007ec04c25e0dd93ebca276a6..a28e09fc8f4bbba6d042f10fbf79d690a4da4c0d 100644 --- a/content/OSG/how_to_submit_an_osg_job_with_htcondor.md +++ b/content/OSG/how_to_submit_an_osg_job_with_htcondor.md @@ -4,7 +4,7 @@ description = "How to submit an OSG job with HTCondor" weight=20 +++ -{{% notice info%}}Jobs can be submitted to the OSG from Crane, so +{{% notice info%}}Jobs can be submitted to the OSG from Swan, so there is no need to logon to a different submit host or get a grid certificate! {{% /notice %}} @@ -16,7 +16,7 @@ project provides software to schedule individual applications, workflows, and for sites to manage resources. It is designed to enable High Throughput Computing (HTC) on large collections of distributed resources for users and serves as the job scheduler used on the OSG. - Jobs are submitted from the Crane login node to the + Jobs are submitted from the Swan login node to the OSG using an HTCondor submission script. For those who are used to submitting jobs with SLURM, there are a few key differences to be aware of: @@ -133,7 +133,7 @@ the submitted job: 1. 
How to submit a job to OSG - assuming that you named your HTCondor script as a file applejob.txt - {{< highlight bash >}}[apple@login.crane ~] $ condor_submit applejob{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_submit applejob{{< /highlight >}} You will see the following output after submitting the job {{% panel theme="info" header="Example of condor_submit" %}} @@ -149,14 +149,14 @@ the submitted job: ones that are owned by the named user* - {{< highlight bash >}}[apple@login.crane ~] $ condor_q apple{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_q apple{{< /highlight >}} The code section below shows a typical output. You may notice that the column ST represents the status of the job (H: Held and I: Idle or waiting) {{% panel theme="info" header="Example of condor_q" %}} - -- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?... + -- Schedd: login.swan.hcc.unl.edu : <129.93.227.113:9619?... ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD 1013034.4 apple 3/26 16:34 0+00:21:00 H 0 0.0 sjrun.py INPUT/INP 1013038.0 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP @@ -173,19 +173,19 @@ the submitted job: from the held status so that it can be rescheduled by the HTCondor. *Release one job:* - {{< highlight bash >}}[apple@login.crane ~] $ condor_release 1013034.4{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_release 1013034.4{{< /highlight >}} *Release all jobs of a user apple:* - {{< highlight bash >}}[apple@login.crane ~] $ condor_release apple{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_release apple{{< /highlight >}} 4. How to delete a submitted job - if you want to delete a submitted job you may use the shell commands as listed below *Delete one job:* - {{< highlight bash >}}[apple@login.crane ~] $ condor_rm 1013034.4{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_rm 1013034.4{{< /highlight >}} *Delete all jobs of a user apple:* - {{< highlight bash >}}[apple@login.crane ~] $ condor_rm apple{{< /highlight >}} + {{< highlight bash >}}[apple@login.swan ~] $ condor_rm apple{{< /highlight >}} 5. How to get help form HTCondor command diff --git a/content/OSG/using_distributed_environment_modules_on_osg.md b/content/OSG/using_distributed_environment_modules_on_osg.md index 15f7464b301a7e40a8f2415b49a9d1b9bf89ee64..1dc1e284cee810eebd96d460c7c7ddb8c6d48e44 100644 --- a/content/OSG/using_distributed_environment_modules_on_osg.md +++ b/content/OSG/using_distributed_environment_modules_on_osg.md @@ -11,14 +11,14 @@ set of modules provided on OSG can differ from those on the HCC clusters. To switch to the OSG modules environment on an HCC machine: {{< highlight bash >}} -[apple@login.crane~]$ source osg_oasis_init +[apple@login.swan~]$ source osg_oasis_init {{< /highlight >}} Use the module avail command to see what software and libraries are available: {{< highlight bash >}} -[apple@login.crane~]$ module avail +[apple@login.swan~]$ module avail ------------------- /cvmfs/oasis.opensciencegrid.org/osg/modules/modulefiles/Core -------------------- abyss/2.0.2 gnome_libs/1.0 pegasus/4.7.1 @@ -36,7 +36,7 @@ available: Loading modules is done with the `module load` command: {{< highlight bash >}} -[apple@login.crane~]$ module load python/2.7 +[apple@login.swan~]$ module load python/2.7 {{< /highlight >}} There are two things required in order to use modules in your HTCondor @@ -99,7 +99,7 @@ loading the `R` and `libgfortran` modules. 
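For readers who want to see the general shape of such a wrapper before the specific example, a generic sketch is shown below. It is illustrative only and is not the `R-script.sh` used in this guide; the OASIS module init path and the payload script name are assumptions for illustration.

```bash
#!/bin/bash
# Sketch of a wrapper for an OSG HTCondor job that uses OASIS modules.
# The init path below is an assumption; use the path documented in this
# guide if it differs.
source /cvmfs/oasis.opensciencegrid.org/osg/modules/lmod/current/init/bash
module load R libgfortran

# Run the actual payload (hypothetical script name for illustration).
Rscript my_analysis.R "$@"
```

The `R-script.sh` wrapper used in this guide follows the same general pattern of loading the needed modules before invoking the application.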
Make the script executable: -{{< highlight bash >}}[apple@login.crane~]$ chmod a+x R-script.sh{{< /highlight >}} +{{< highlight bash >}}[apple@login.swan~]$ chmod a+x R-script.sh{{< /highlight >}} Finally, create the HTCondor submit script, `R.submit`: @@ -124,16 +124,16 @@ transferred with the job. Submit the jobs with the `condor_submit` command: -{{< highlight bash >}}[apple@login.crane~]$ condor_submit R.submit{{< /highlight >}} +{{< highlight bash >}}[apple@login.swan~]$ condor_submit R.submit{{< /highlight >}} Check on the status of your jobs with `condor_q`: -{{< highlight bash >}}[apple@login.crane~]$ condor_q{{< /highlight >}} +{{< highlight bash >}}[apple@login.swan~]$ condor_q{{< /highlight >}} When your jobs have completed, find the average estimate for Pi from all 100 jobs: {{< highlight bash >}} -[apple@login.crane~]$ grep "[1]" mcpi.out.* | awk '{sum += $2} END { print "Average =", sum/NR}' +[apple@login.swan~]$ grep "[1]" mcpi.out.* | awk '{sum += $2} END { print "Average =", sum/NR}' Average = 3.13821 {{< /highlight >}} diff --git a/content/_index.md b/content/_index.md index 4402557d4d800c5188da85cc8220206f9d6c3c9d..f0e20f1a205e201d5b5e15775fde5e7dd8e754fe 100644 --- a/content/_index.md +++ b/content/_index.md @@ -35,16 +35,9 @@ Which Cluster to Use? are new to using HCC resources, Swan is the recommended cluster to use initially. Swan has 2 Intel Icelake CPUs (56 cores) per node, with 256GB RAM per node. - -**Crane**: Crane is the largest HCC resource. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node. **Important Notes** -- The Crane and Swan clusters are separate. But, they are - similar enough that submission scripts on whichever one will work on - another, and vice versa (excluding GPU resources and some combinations of - RAM/core requests). - - The worker nodes cannot write to the `/home` directories. You must use your `/work` directory for processing in your job. You may access your work directory by using the command: @@ -57,8 +50,6 @@ Resources - ##### Swan - HCC's newest Intel-based cluster, with 56 cores and 256GB RAM per node. -- ##### Crane - Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node. - - ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site. 
- ##### Anvil - HCC's cloud computing cluster based on Openstack @@ -70,7 +61,6 @@ Resource Capabilities | Cluster | Overview | Processors | RAM\* | Connection | Storage | ------- | ---------| ---------- | --- | ---------- | ------ -| **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 62.5GB<br><br>79 nodes @ 250GB<br><br>37 nodes @ 500GB<br><br>4 nodes @ 1500GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage | **Swan** | 168 node LINUX cluster | 168 Intel Xeon Gold 6348 CPU, 2 CPU/56 cores per node | 168 nodes @ 256GB <br><br> 2 nodes @ 2000GB | HDR100 Infiniband | 3.5TB local scratch per node <br><br> ~5200TB shared Lustre storage | | **Red** | 344 node LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space | | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) | diff --git a/content/accounts/how_to_change_your_password.md b/content/accounts/how_to_change_your_password.md index 7bfabb1687f53df878d0c44d6b9b77904e387331..3c79e763bc2c47db5d58e51225c60624446766d3 100644 --- a/content/accounts/how_to_change_your_password.md +++ b/content/accounts/how_to_change_your_password.md @@ -20,7 +20,7 @@ the following instructions to work.** - [Tutorial Video](#tutorial-video) Every HCC user has a password that is same on all HCC machines -(Crane, Swan, Anvil). This password needs to satisfy the HCC +(Swan, Anvil). This password needs to satisfy the HCC password requirements. ### HCC password requirements diff --git a/content/accounts/setting_up_and_using_duo.md b/content/accounts/setting_up_and_using_duo.md index 5d55f46b4753f7d0b54ca75132dfcaf3c7bb9910..8a11ed587b7d3b0b58411c898f77a9a96fe8dfa5 100644 --- a/content/accounts/setting_up_and_using_duo.md +++ b/content/accounts/setting_up_and_using_duo.md @@ -69,14 +69,14 @@ U2F use. Example login using Duo Push ---------------------------- -This demonstrates an example login to Crane using the Duo Push method. +This demonstrates an example login to Swan using the Duo Push method. Using another method (SMS, phone call, etc.) proceeds in the same way. (Click on any image for a larger version.) First, a user connects via SSH using their normal HCC username/password, exactly as before. -{{< figure src="/images/5832713.png" width="600" >}} +{{< figure src="/images/duo_login_pass.png" width="600" >}} {{% notice warning%}}**Account lockout** @@ -94,11 +94,11 @@ this example, the choices are Duo Push notification, SMS message, or phone call. Choosing option 1 for Duo Push, a request to verify the login will be sent to the user's smartphone. -{{< figure src="/images/5832716.png" height="350" >}} +{{< figure src="/images/duo_app_request.png" height="350" >}} Simply tap `Approve` to verify the login. 
-{{< figure src="/images/5832717.png" height="350" >}} +{{< figure src="/images/duo_app_approved.png" height="350" >}} {{% notice warning%}}**If you receive a verification request you didn't initiate, deny the request and contact HCC immediately via email at @@ -108,7 +108,7 @@ request and contact HCC immediately via email at In the terminal, the login will now complete and the user will logged in as usual. -{{< figure src="/images/5832714.png" height="350" >}} +{{< figure src="/images/duo_login_successful.png" height="350" >}} Duo Authentication Methods diff --git a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/_index.md b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/_index.md index c3cb52f9a69e929c2fffca2a769940d85382ed7a..6c018b586ad0ccb3eba304aea1c2cc8517860d83 100644 --- a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/_index.md +++ b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/_index.md @@ -55,7 +55,7 @@ application `hello_world`: {{% panel theme="info" header="perf-report example" %}} {{< highlight bash >}} -[<username>@login.crane ~]$ perf-report ./hello-world +[<username>@login.swan ~]$ perf-report ./hello-world {{< /highlight >}} {{% /panel %}} @@ -69,7 +69,7 @@ to read from a file, you must use the `--input` option to the {{% panel theme="info" header="perf-report stdin redirection" %}} {{< highlight bash >}} -[<username>@login.crane ~]$ perf-report --input=my_input.txt ./hello-world +[<username>@login.swan ~]$ perf-report --input=my_input.txt ./hello-world {{< /highlight >}} {{% /panel %}} @@ -79,7 +79,7 @@ More **perf-report** options can be seen by using: {{% panel theme="info" header="perf-report options" %}} {{< highlight bash >}} -[<username>@login.crane ~]$ perf-report --help +[<username>@login.swan ~]$ perf-report --help {{< /highlight >}} {{% /panel %}} diff --git a/content/applications/app_specific/allinea_profiling_and_debugging/using_allinea_forge_via_reverse_connect.md b/content/applications/app_specific/allinea_profiling_and_debugging/using_allinea_forge_via_reverse_connect.md index c0c266c123fa94cd6d911c39f932c0aed8ed2c45..544c1aa239ef81d6b425178854042660a0600b5f 100644 --- a/content/applications/app_specific/allinea_profiling_and_debugging/using_allinea_forge_via_reverse_connect.md +++ b/content/applications/app_specific/allinea_profiling_and_debugging/using_allinea_forge_via_reverse_connect.md @@ -30,10 +30,10 @@ Click the *Add* button on the new window. {{< figure src="/images/16516459.png" width="400" >}} -To setup a connection to Crane, fill in the fields as follows: +To setup a connection to Swan, fill in the fields as follows: ``` -Connection Name: Crane -Host Name: <username>@crane.unl.edu +Connection Name: Swan +Host Name: <username>@swan.unl.edu Remote Installation Directory: /util/opt/allinea/22.0 ``` @@ -48,7 +48,7 @@ Connections* to return back to the main Allinea window. ### Test the Reverse Connect feature -To test the connection, choose *Crane* from the *Remote Launch* menu. +To test the connection, choose *Swan* from the *Remote Launch* menu. {{< figure src="/images/16516457.png" width="300" >}} @@ -59,7 +59,7 @@ A *Connect to Remote Host* dialog will appear and prompt for a password. The login procedure is the same as for PuTTY or any other SSH program. Enter your HCC password followed by the Duo login. 
If the login was successful, you should see -*Connected to: \<username\>@crane.unl.edu* in the lower right corner of +*Connected to: \<username\>@swan.unl.edu* in the lower right corner of the Allinea window. The next step is to run a sample interactive job and test the Reverse diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/_index.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/_index.md index c37870f7b0a0800706b259d65a23c521537f9320..5f19835c66d83372ec2a96f1e8963970fbe313bc 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/_index.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/_index.md @@ -12,5 +12,5 @@ The following pages, [Create Local BLAST Database]({{<relref "create_local_blast ### Useful Information -In order to test the BLAST (blast/2.2) performance on Crane, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below: +In order to test the BLAST (blast/2.2) performance on Swan, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below: {{< readfile file="/static/html/blast.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md index d01dfee4efef219f8eef827a38c7d688773748d7..861893143785330374b584543649ce2761453826 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md @@ -12,7 +12,7 @@ $ makeblastdb -in input_reads.fasta -dbtype [nucl|prot] -out input_reads_db where **input_reads.fasta** is the input file containing all sequences that need to be made into a database, and **dbtype** can be either `nucl` or `prot` depending on the type of the input file. -Simple example of how **makeblastdb** can be run on Crane using SLURM script and nucleotide database is shown below: +Simple example of how **makeblastdb** can be run on Swan using SLURM script and nucleotide database is shown below: {{% panel header="`blast_db.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md index 4c2ffc08a933dddfb5861fd53fe942bb50220221..d0e21c6573fbfdde8b38ac1ce2c298f33409bb05 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md @@ -28,7 +28,7 @@ $ blastn -help These BLAST alignment commands are multi-threaded, and therefore using the BLAST option **-num_threads <number_of_CPUs>** is recommended. -HCC hosts multiple BLAST databases and indices on Crane. 
In order to use these resources, the ["biodata" module]({{<relref "/applications/app_specific/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases: +HCC hosts multiple BLAST databases and indices on Swan. In order to use these resources, the ["biodata" module]({{<relref "/applications/app_specific/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases: - **16SMicrobial** - **nr** diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blat.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blat.md index 6d41e9c93483121dd0b213a4bfce3c77d47e3ab9..3d37b645fa6e7d9cf7553f09c8f0ffae340615a4 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/blat.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/blat.md @@ -21,7 +21,7 @@ $ blat {{< /highlight >}} -Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below: +Running BLAT on Swan with query file `input_reads.fasta` and database `db.fa` is shown below: {{% panel header="`blat_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie.md index ae79a451468afd15f08c0c926c5b8dcf76e07d3c..fb0f8f2e730a14c7f72f9b9d611e52507677ec7f 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie.md @@ -25,7 +25,7 @@ manual](http://bowtie-bio.sourceforge.net/manual.shtml). Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using the following flags: **-q** (fastq files), **-f** (fasta files), **-r** (raw one-sequence per line), or **-c** (sequences given on command line). -An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below: +An example of how to run Bowtie alignment on Swan with single-end fastq file and `8 CPUs` is shown below: {{% panel header="`bowtie_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie2.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie2.md index f6054dbf87a9768c348adc99f64c9e507c77b1c2..56bec650a33035bbcda8c65ceccb970391e0cbe5 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie2.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bowtie2.md @@ -31,7 +31,7 @@ $ bowtie2 -x index_prefix [-q|--qseq|-f|-r|-c] [-1 input_reads_pair_1.[fasta|fas where **index_prefix** is the generated index using the **bowtie2-build** command, and **options** are optional parameters that can be found in the [Bowtie2 manual](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml). Bowtie2 supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. 
The format of the input files also needs to be specified by using one of the following flags: **-q** (fastq files), **--qseq** (Illumina's qseq format), **-f** (fasta files), **-r** (raw one sequence per line), or **-c** (sequences given on command line). -An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below: +An example of how to run Bowtie2 local alignment on Swan with paired-end fasta files and `8 CPUs` is shown below: {{% panel header="`bowtie2_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md index 7e96edbcf770fe804cbd6d8e6606c043dbba705d..1449f917fb87c85a0ca0465743f5a9d7477ea010 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md @@ -22,7 +22,7 @@ $ bwa mem index_prefix [input_reads.fastq|input_reads_pair_1.fastq input_reads_p where **index_prefix** is the index for the reference genome generated from **bwa index**, and **input_reads.fastq**, **input_reads_pair_1.fastq**, **input_reads_pair_2.fastq** are the input files of sequencing data that can be single-end or paired-end respectively. Additional **options** for **bwa mem** can be found in the BWA manual. -Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below: +Simple SLURM script for running **bwa mem** on Swan with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below: {{% panel header="`bwa_mem.submit`"%}} {{< highlight bash >}} #!/bin/bash @@ -137,5 +137,5 @@ $ bwa bwt2sa input_reads.bwt output_reads.sa ### Useful Information -In order to test the scalability of BWA (bwa/0.7) on Crane, we used two paired-end input fastq files, `large_1.fastq` and `large_2.fastq`, and one single-end input fasta file, `large.fasta`. Some statistics about the input files and the time and memory resources used by **bwa mem** are shown on the table below: +In order to test the scalability of BWA (bwa/0.7) on Swan, we used two paired-end input fastq files, `large_1.fastq` and `large_2.fastq`, and one single-end input fasta file, `large.fasta`. 
Some statistics about the input files and the time and memory resources used by **bwa mem** are shown on the table below: {{< readfile file="/static/html/bwa.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md index 95b625b2ec7f5198d57434ce2ae2472e9fc16e68..2020401038760a542145cc78328a680a2d8c986e 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md @@ -30,7 +30,7 @@ $ clustalo -h {{< /highlight >}} -Running Clustal Omega on Crane with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below: +Running Clustal Omega on Swan with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below: {{% panel header="`clustal_omega.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/tophat_tophat2.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/tophat_tophat2.md index c0590f6a8a2d5994e10249e0c100afb762f58e0e..52a7dc67e64f76dffb7e8d19001760b92e6af743 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/tophat_tophat2.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/tophat_tophat2.md @@ -27,7 +27,7 @@ $ tophat2 -h Prior running TopHat/TopHat2, an index from the reference genome should be built using Bowtie/Bowtie2. Moreover, TopHat2 requires both, the index file and the reference file, to be in the same directory. If the reference file is not available,TopHat2 reconstructs it in its initial step using the index file. -An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below: +An example of how to run TopHat2 on Swan with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below: {{% panel header="`tophat2_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/biodata_module.md b/content/applications/app_specific/bioinformatics_tools/biodata_module.md index 2e2f4eb045799bf3dd56cdbb369f85cbf6e0b0bd..4aa14a1b023e7ea32b5cd71febd70eb2a403d273 100644 --- a/content/applications/app_specific/bioinformatics_tools/biodata_module.md +++ b/content/applications/app_specific/bioinformatics_tools/biodata_module.md @@ -7,7 +7,7 @@ weight = "52" +++ -HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Crane and Swan. +HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Swan. In order to use these resources, the "**biodata**" module needs to be loaded first. For how to load module, please check [Module Commands]({{< relref "/applications/modules/_index.md" >}}). 
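As a quick interactive sanity check after loading the module, something like the following can be used to confirm that the database paths are available. This is a sketch; `$BLAST` is one example of the environment variables the module sets.

```bash
# Load the biodata module, then inspect what it provides (sketch).
module load biodata
module show biodata   # lists the environment variables the module defines
echo $BLAST           # path that holds the BLAST databases
```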
@@ -40,7 +40,7 @@ $ ls $BLAST {{< /highlight >}} -An example of how to run Bowtie2 local alignment on Crane utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below: +An example of how to run Bowtie2 local alignment on Swan utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below: {{% panel header="`bowtie2_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash @@ -61,7 +61,7 @@ bowtie2 -x $BOWTIE2_HORSE -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.f {{% /panel %}} -An example of BLAST run against the non-redundant nucleotide database available on Crane is provided below: +An example of BLAST run against the non-redundant nucleotide database available on Swan is provided below: {{% panel header="`blastn_alignment.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md index 8cc866a115b4539431844243c82f6671b56b87f9..9d72d75c510592b5df4f1a2a31f2b27492f64a59 100644 --- a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md +++ b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md @@ -16,7 +16,7 @@ $ bamtools convert -format [bed|fasta|fastq|json|pileup|sam|yaml] -in input_alig where the option **-format** specifies the type of the output file, **input_alignments.bam** is the input BAM file, and **-out** defines the name and the type of the converted file. -Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below: +Running BamTools **convert** on Swan with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below: {{% panel header="`bamtools_convert.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md index 9e84ab36c0b84aea6a07d15e78e7367e40b82858..0133fcc931fe53f0c3b85eb3ffed993be8c67c7f 100644 --- a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md +++ b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md @@ -14,7 +14,7 @@ $ samtools view input_alignments.[bam|sam] [options] -o output_alignments.[sam|b where **input_alignments.[bam|sam]** is the input file with the alignments in BAM/SAM format, and **output_alignments.[sam|bam]** file is the converted file into SAM or BAM format respectively. 
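To make the usage line above concrete, a small-scale conversion could look like the sketch below. It simply mirrors the flags described on this page (`-S` for SAM input with a header, `-b` for BAM output); the full SLURM submit script follows.

```bash
# Convert a SAM file (with header) to BAM format; sketch of the usage above.
samtools view -S -b input_alignments.sam -o output_alignments.bam
```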
-Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below: +Running **samtools view** on Swan with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below: {{% panel header="`samtools_view.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/sratoolkit.md b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/sratoolkit.md index 2479607afd00cbc107e59857972f3decb61ba059..4de1777471f63a51d16808c03ba7982fc444dadc 100644 --- a/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/sratoolkit.md +++ b/content/applications/app_specific/bioinformatics_tools/data_manipulation_tools/sratoolkit.md @@ -25,7 +25,7 @@ $ fastq-dump [options] input_reads.sra This command can be applied on the downloaded SRA data with **"prefetch"**. -An example of running **fastq-dump** on Crane to convert SRA file containing paired-end reads is: +An example of running **fastq-dump** on Swan to convert SRA file containing paired-end reads is: {{% panel header="`sratoolkit.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md b/content/applications/app_specific/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md index b48c94f64c0e32bd1c291f4b72e01c9b48a2fa80..a7208aa9536100b5a4d4ffb6feaa7e02846044d1 100644 --- a/content/applications/app_specific/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md +++ b/content/applications/app_specific/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md @@ -19,7 +19,7 @@ $ cufflinks -h {{< /highlight >}} -An example of how to run Cufflinks on Crane with alignment file in SAM format, output directory `cufflinks_output` and 8 CPUs is shown below: +An example of how to run Cufflinks on Swan with alignment file in SAM format, output directory `cufflinks_output` and 8 CPUs is shown below: {{% panel header="`cufflinks.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md b/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md index 0bc7928e03000ef8f0963e2102c87631d382a6d5..f038b33d18ef1def5205249c5ec1c09846767b14 100644 --- a/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md +++ b/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md @@ -17,7 +17,7 @@ $ cap3 {{< /highlight >}} -An example of how to run basic CAP3 SLURM script on Crane is shown +An example of how to run basic CAP3 SLURM script on Swan is shown below: {{% panel header="`cap3.submit`"%}} {{< highlight bash >}} @@ -48,5 +48,5 @@ The consensus fasta sequences are saved in the file `input_reads.fasta.cap.cont ### Useful Information -In order to test the CAP3 (cap3/122107) performance on Crane, we created separately three nucleotide datasets, `small.fasta`, `medium.fasta` and `large.fasta`. 
Some statistics about the input datasets and the time and memory resources used by CAP3 on Crane are shown in the table below: +In order to test the CAP3 (cap3/122107) performance on Swan, we created separately three nucleotide datasets, `small.fasta`, `medium.fasta` and `large.fasta`. Some statistics about the input datasets and the time and memory resources used by CAP3 on Swan are shown in the table below: {{< readfile file="/static/html/cap3.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md b/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md index c905ce22478a645c1a19dc8a29db61acbc0faeaf..14f2e9826e31314f20cfabb4ec9a72e6d16ccf81 100644 --- a/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md +++ b/content/applications/app_specific/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md @@ -32,7 +32,7 @@ $ cd-hit CD-HIT is multi-threaded program, and therefore, using multiple threads is recommended. By setting the CD-HIT parameter `-T 0`, all CPUs defined in the SLURM script will be used. Setting the parameter `-M 0` allows unlimited usage of the available memory. -Simple SLURM CD-HIT script for Crane with 8 CPUs is given in addition: +Simple SLURM CD-HIT script for Swan with 8 CPUs is given in addition: {{% panel header="`cd-hit.submit`"%}} {{< highlight bash >}} #!/bin/bash diff --git a/content/applications/app_specific/dmtcp_checkpointing.md b/content/applications/app_specific/dmtcp_checkpointing.md index 19786d370a6a6e2894b22211d3ca135b0b0c0afd..7ee330d48e007ad72357d01300f7ca530f3e98ca 100644 --- a/content/applications/app_specific/dmtcp_checkpointing.md +++ b/content/applications/app_specific/dmtcp_checkpointing.md @@ -14,8 +14,8 @@ examples of binary programs on Linux distributions that can be used with DMTCP are OpenMP, MATLAB, Python, Perl, MySQL, bash, gdb, X-Windows etc. DMTCP provides support for several resource managers, including SLURM, -the resource manager used in HCC. The DMTCP module is available both on -Crane, and is enabled by typing: +the resource manager used in HCC. 
The DMTCP module is available on +Swan, and is enabled by typing: {{< highlight bash >}} module load dmtcp @@ -24,7 +24,7 @@ module load dmtcp After the module is loaded, the first step is to run the command: {{< highlight bash >}} -[<username>@login.crane ~]$ dmtcp_launch --new-coordinator --rm --interval <interval_time_seconds> <your_command> +[<username>@login1.swan ~]$ dmtcp_launch --new-coordinator --rm --interval <interval_time_seconds> <your_command> {{< /highlight >}} where `--rm` option enables SLURM support, @@ -36,7 +36,7 @@ Beside the general options shown above, more `dmtcp_launch` options can be seen by using: {{< highlight bash >}} -[<username>@login.crane ~]$ dmtcp_launch --help +[<username>@login1.swan ~]$ dmtcp_launch --help {{< /highlight >}} `dmtcp_launch` creates few files that are used to resume the @@ -62,7 +62,7 @@ will keep running with the options defined in the initial Simple example of using DMTCP with [BLAST]({{< relref "/applications/app_specific/bioinformatics_tools/alignment_tools/blast/running_blast_alignment" >}}) -on crane is shown below: +on swan is shown below: {{% panel theme="info" header="dmtcp_blastx.submit" %}} {{< highlight batch >}} diff --git a/content/applications/app_specific/running_gaussian_at_hcc.md b/content/applications/app_specific/running_gaussian_at_hcc.md index a903fe15e87632a6619be8f30f7f0b17fb383aae..fc8887dda6a913f045f26dd680960223bd824133 100644 --- a/content/applications/app_specific/running_gaussian_at_hcc.md +++ b/content/applications/app_specific/running_gaussian_at_hcc.md @@ -21,7 +21,7 @@ of a **g09** license. For access, contact us at {{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu) and include your HCC username. After your account has been added to the -group "*gauss*", here are four simple steps to run Gaussian 09 on Crane: +group "*gauss*", here are four simple steps to run Gaussian 09 on Swan: **Step 1:** Copy **g09** sample input file and SLURM script to your "g09" test directory on the `/work` filesystem: diff --git a/content/applications/app_specific/running_paraview.md b/content/applications/app_specific/running_paraview.md index 977344b8049249f7328d2e2c053daf2212f3eeca..6ff329e2ff77cc96bde09a8890ed86eca084f058 100644 --- a/content/applications/app_specific/running_paraview.md +++ b/content/applications/app_specific/running_paraview.md @@ -29,7 +29,7 @@ Once the ParaView application has started, it can be used similar to a locally i If you want to use additional resources beyond those available through an Open Ondemand application, you can run a ParaView server as a SLURM job and connect the ParaView client running in OpenOnDemand to the ParaView server. -To help facilitate this, a headless build of ParaView has been installed on Crane and Swan, which can be used to provide extra computational resources the GUI session. +To help facilitate this, a headless build of ParaView has been installed on Swan, which can be used to provide extra computational resources the GUI session. 
To start the MPI server process on Swan, you can use the following submit script as an example: ```Bash diff --git a/content/applications/app_specific/running_postgres.md b/content/applications/app_specific/running_postgres.md index 567b90928fd011c45d1b56b121bbf32472293450..459a24abb92e875ff251262b9e56ccb2f66eb7c2 100644 --- a/content/applications/app_specific/running_postgres.md +++ b/content/applications/app_specific/running_postgres.md @@ -84,12 +84,12 @@ will help to avoid corruption by allowing the server to perform a graceful shutd Once the job starts, check the `postgres_server.out` file for information on which host and port the server is listening on. For example, {{< highlight bash >}} -Postgres server running on c1725.crane.hcc.unl.edu on port 10332 +Postgres server running on c1725.swan.hcc.unl.edu on port 10332 This job started at 2020-06-19T10:20:58 This job will end at 2020-06-19T10:50:57 (in 29:59) {{< /highlight >}} -Here, the server is running on host `c1725.crane.hcc.unl.edu` on port 10332. +Here, the server is running on host `c1725.swan.hcc.unl.edu` on port 10332. The output also contains information on when the job will end. This can be useful when submitting the companion analysis job(s) that will use the database. It is recommended to adjust the requested walltime of the analysis job(s) to ensure they will end _before_ the database job does. diff --git a/content/applications/app_specific/running_sas.md b/content/applications/app_specific/running_sas.md index 3b458b2e02c32e2aee4155234e32be2c882ffaf7..412b8f8b0a7e033a372da927e568f77cf17f9767 100644 --- a/content/applications/app_specific/running_sas.md +++ b/content/applications/app_specific/running_sas.md @@ -66,7 +66,7 @@ On [HCC OnDemand]({{< relref "../../open_ondemand/connecting_to_hcc_ondemand/" > [Launch a Jupyter Lab Notebook session]({{< relref "../../open_ondemand/virtual_desktop_and_interactive_apps/" >}}). After the Jupyter Lab Notebook session starts, select `SAS` from the `New` dropdown box. {{< figure src="/images/jupyterNew.png" >}} Here, you can run code in the Notebook's cells. The SAS code is then executed when you click on the "play" icon or press the `shift` and `enter` keys simultaneously. -{{< figure src="/images/jupyterCode.png" >}} +{{< figure src="/images/jupyter_sas_code.png" >}} ### SAS Interactive GUI Application [Launch the Interactive App]({{< relref "../../open_ondemand/virtual_desktop_and_interactive_apps/" >}}) by selecting `SAS` from the `Interactive Apps` drop-down menu at the top of the OnDemand Dashboard page and filling in the parameters needed for your job. diff --git a/content/applications/app_specific/running_theano.md b/content/applications/app_specific/running_theano.md index 1a28bac43fbe6c193c2c048200c141133462dd2e..d0ff336efaaf778a023f10393097207930865ec5 100644 --- a/content/applications/app_specific/running_theano.md +++ b/content/applications/app_specific/running_theano.md @@ -4,7 +4,7 @@ description = "How to run the Theano on HCC resources." +++ Theano is available on HCC resources via the modules system. Both CPU and GPU -versions are available on Crane. Additionally, installs for both Python +versions are available on Swan. Additionally, installs for both Python 2.7 and 3.6 are provided. 
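Before choosing a version, it can help to list what the modules system actually provides. The commands below are a sketch using standard module queries; the module names and versions they report may differ from release to release.

```bash
# List the Theano builds provided through the modules system (sketch).
module avail theano
# With Lmod, a broader search across the module tree is also possible:
module spider theano
```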
### Running the CPU version diff --git a/content/applications/modules/available_software_for_crane.md b/content/applications/modules/available_software_for_crane.md deleted file mode 100644 index b501d00feed00667086fd10de7aad2a06e7c2178..0000000000000000000000000000000000000000 --- a/content/applications/modules/available_software_for_crane.md +++ /dev/null @@ -1,45 +0,0 @@ -+++ -title = "Available Software for Crane" -description = "List of available software for crane.unl.edu." -scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"] -css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"] -+++ - -{{% notice tip %}} -HCC provides some software packages via the Apptainer container -software. If you do not see a desired package in the module list below, -please check the [Using Apptainer]({{< relref "using_apptainer" >}}) -page for the software list there. -{{% /notice %}} - -{{% panel theme="warning" header="Module prerequisites" %}} -If a module lists one or more prerequisites, the prerequisite module(s) -must be loaded before or along with, that module. - -For example, the `cdo/2.1` modules requires `compiler/pgi/13.` To load -the cdo module, doing either - -`module load compiler/pgi/13` - -`module load cdo/2.1` - -or - -`module load compiler/pgi/13 cdo/2.1` (Note the prerequisite module -**must** be first.) - -is acceptable. -{{% /panel %}} - -{{% panel theme="info" header="Multiple versions" %}} -Some packages list multiple compilers for prerequisites. This means that -the package has been built with each version of the compilers listed. -{{% /panel %}} - -{{% panel theme="warning" header="Custom GPU Anaconda Environment" %}} -If you are using custom GPU Anaconda Environment, the only module you need to load is `anaconda`: - -`module load anaconda` -{{% /panel %}} - -{{< table url="http://crane-head.unl.edu:8192/lmod/spider/json" >}} diff --git a/content/connecting/_index.md b/content/connecting/_index.md index ca82340c2389fbfee36dccfdd90a82795e35bc85..5bc46f956ea0b8a005d6b4dac97047bdbd02696b 100644 --- a/content/connecting/_index.md +++ b/content/connecting/_index.md @@ -10,7 +10,7 @@ How to connect to HCC resources **2. Open a terminal or SSH client** Most interactions with HCC clusters are done through SSH and the command line. In MacOS, Linux, recent versions of Windows 10, and Windows 11 there is an SSH client built-in and can be used from their [respective terminals]({{< relref "terminal.md" >}}). For older versions of Windows, an application such as [PuTTY]({{< relref "putty.md" >}}) or [MobaXterm]({{< relref "mobaxterm.md" >}}) is needed. -**3. Connect to an HCC cluster:** From the terminal or application, use SSH to connect to one of the available clusters. In the terminal, enter `ssh <username>@crane.unl.edu` to connect to the Crane cluster, for example. With [PuTTY]({{< relref "putty.md" >}}) or [MobaXterm]({{< relref "mobaxterm.md" >}}), refer to their respective pages for a guide on how to connect +**3. 
Connect to an HCC cluster:** From the terminal or application, use SSH to connect to one of the available clusters. In the terminal, enter `ssh <username>@swan.unl.edu` to connect to the Swan cluster, for example. With [PuTTY]({{< relref "putty.md" >}}) or [MobaXterm]({{< relref "mobaxterm.md" >}}), refer to their respective pages for a guide on how to connect If you are not familiar with using command line Linux, check out these resources: diff --git a/content/connecting/how_to_setup_x11_forwarding.md b/content/connecting/how_to_setup_x11_forwarding.md index 3d6c498b60650250058cab05b85e0e2686ede06c..399371c3c93842fe0e57de5306625b8ff29a4fe8 100644 --- a/content/connecting/how_to_setup_x11_forwarding.md +++ b/content/connecting/how_to_setup_x11_forwarding.md @@ -11,7 +11,7 @@ weight = "35" 2. Download PuTTY to your local PC and install. Download link: http://the.earth.li/~sgtatham/putty/latest/x86/putty.exe 3. Open Xming and keep it running in the background. 4. Configure PuTTY as below: - {{< figure src="/images/Putty-win10.png" height="400" >}} + {{< figure src="/images/putty_initial.png" height="400" >}} {{< figure src="/images/Putty-win10X11.png" height="400" >}} 5. To test your X11 setup, after login, type command `xeyes` and press diff --git a/content/connecting/mobaxterm.md b/content/connecting/mobaxterm.md index a54255074b8ee2fcb14c8ddeef6eab7c9f4d4e78..26dd6166acb1c0eca91329ab779ece1ac5724156 100644 --- a/content/connecting/mobaxterm.md +++ b/content/connecting/mobaxterm.md @@ -16,12 +16,11 @@ Access to HCC Supercomputers using MobaXterm To connect to HCC resources using MobaXterm, open the application and select the Session Icon. {{< figure src="/images/moba/main.png" height="450" >}} -Select SSH as the session type. Enter the cluster you are connecting to, in the example, `crane.unl.edu`, is used. Check `Specify username` and enter your HCC username in the the box. Note that <username> +Select SSH as the session type. Enter the cluster you are connecting to, in the example, `swan.unl.edu`, is used. Check `Specify username` and enter your HCC username in the the box. Note that <username> should be replaced by your HCC account username. If you do not have a HCC account, please contact a HCC specialist ({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)) or go to https://hcc.unl.edu/newusers. -To use the **Swan** cluster, replace crane.unl.edu with with swan.unl.edu. {{< figure src="/images/moba/session.png" height="450" >}} Select OK. You will be asked to enter your password and to authenticate with duo. @@ -40,7 +39,7 @@ MobaXterm allows file transfering in a 'drag and drop' style, similar to WinSCP. The above example transfers a folder from a local directory of the your computer to the `$HOME` directory of the HCC -supercomputer, Crane. Note that you need to replace `<group name>` +supercomputer, Swan. Note that you need to replace `<group name>` and `<username>` with your HCC group name and username. {{< figure src="/images/moba/upload.png" height="450" >}} **Downloading from remote to local** @@ -48,7 +47,7 @@ and `<username>` with your HCC group name and username. The above example transfers a folder from the `$HOME` directory of -the HCC supercomputer, Crane, to a local directory on +the HCC supercomputer, Swan, to a local directory on your computer. 
{{< figure src="/images/moba/download.png" height="450" >}} **Editing remote files** diff --git a/content/connecting/putty.md b/content/connecting/putty.md index 95900569598dbe02babdfc96f0bae1ce60fed31d..213bc41d66c498c3df2ca9175c945f73396a53fb 100644 --- a/content/connecting/putty.md +++ b/content/connecting/putty.md @@ -8,9 +8,9 @@ weight = "20" ##### Please see [Setting up and Using Duo]({{< relref "setting_up_and_using_duo" >}}). - [Connecting to HCC Clusters](#connecting-to-hcc-clusters) -- [Windows 10 / 11](#windows-10-and-11) -- [Windows 8.1](#windows-8-1) -- [Next Steps:](#next-steps) +- [Windows 10 and 11](#windows-10-and-11) +- [Using Putty](#using-putty) + - [Next Steps:](#next-steps) ## Connecting to HCC Clusters @@ -27,10 +27,10 @@ Windows 10 and 11 users, can connect using the Command Prompt. Please see [Connecting with the Terminal]({{< relref "/connecting/terminal" >}}) for more details. -## Windows 8.1 +## Using Putty -------------- -For users with older Windows versions, you will need to install an SSH client to connect. +For Windows installations without built-in SSH, you will need to install an SSH client to connect. We will cover the use of PuTTY here, but you are free to use any compatable client, such as [MobaXterm]({{< relref "/connecting/mobaxterm" >}}) as well. @@ -40,14 +40,12 @@ To download and install PuTTY, visit the [PuTTY website] Once you have PuTTY installed, run the application and follow these steps: {{% notice info %}} -**Note that the example below uses the `Crane` cluster. -Replace all instances of `crane` with `swan` if -you want to connect to the `Swan` cluster. +**Note that the example below uses the `Swan` cluster. {{% /notice %}} -1. On the first screen, type `crane.unl.edu` for Host Name, then click +1. On the first screen, type `swan.unl.edu` for Host Name, then click **Open**. - {{< figure src="/images/3178523.png" height="450" >}} + {{< figure src="/images/putty_initial.png" height="450" >}} 2. On the second screen, click on **Yes**. {{< figure src="/images/3178524.png" height="300" >}} @@ -57,16 +55,16 @@ you want to connect to the `Swan` cluster. ({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)) or go to http://hcc.unl.edu/newusers. - {{% notice info %}}Replace `cbohn` with your username.{{% /notice %}} + {{% notice info %}}Replace `hccdemo` with your username.{{% /notice %}} - {{< figure src="/images/8127261.png" height="450" >}} + {{< figure src="/images/putty_username.png" height="450" >}} 4. On the next screen, enter your HCC account **password**. {{% notice info %}}**Note that PuTTY will not show the characters as you type for security reasons.**{{% /notice %}} - {{< figure src="/images/8127262.png" height="450" >}} + {{< figure src="/images/putty_password.png" height="450" >}} 5. After you input the correct password, you will be asked to choose a Duo authentication @@ -76,7 +74,7 @@ you want to connect to the `Swan` cluster. second. Then you will be brought to your home directory similar as below. - {{< figure src="/images/8127266.png" height="450" >}} + {{< figure src="/images/putty_duo.png" height="450" >}} 7. If you set up Duo via a smart phone, please type "1" in your terminal and press "Enter". (Duo-Push is the most cost-effective way @@ -88,12 +86,12 @@ you want to connect to the `Swan` cluster. not initiated by yourself, deny it and report this incident immediately to {{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu). 
- {{< figure src="/images/8127263.png" height="450" >}} + {{< figure src="/images/duo_app_request.png" height="450" >}} 9. After you approve the Duo login request, you will be brought to your home directory similar as below. - {{< figure src="/images/8127264.png" height="450" >}} + {{< figure src="/images/putty_duo.png" height="450" >}} ### Next Steps: diff --git a/content/connecting/terminal.md b/content/connecting/terminal.md index ab5603a999689077a0162dc3c23f1fde54c61ff6..937de3b2b9b483948afb50044a28053c2580fabc 100644 --- a/content/connecting/terminal.md +++ b/content/connecting/terminal.md @@ -45,25 +45,23 @@ If you are using an older versions of Windows, you will need to [download an SSH ### Connecting To HCC Clusters Once you have opened your terminal and you have a prompt, you can connect using the `ssh` command. -For example, to connect to the Crane cluster type the following in your terminal or Command Prompt window: +For example, to connect to the Swan cluster type the following in your terminal or Command Prompt window: {{< highlight bash >}} -$ ssh <username>@crane.unl.edu +$ ssh <username>@swan.unl.edu {{< /highlight >}} -where `<username>` is replaced with your HCC account name. To use the **Swan** cluster, -replace crane.unl.edu with swan.unl.edu. +where `<username>` is replaced with your HCC account name. The first time you connect to one of our clusters from a computer, you will be prompted to verify the connection: {{< highlight bash >}} -The authenticity of host 'crane.unl.edu (129.93.227.113)' can't be established. -RSA key fingerprint is SHA256:GDH3+iqSp3WJxtUE6tXNQcWRwpf0xjYgkQrYBDX3Ir0. -RSA key fingerprint is MD5:e5:c5:ac:07:ff:47:53:18:5e:8b:44:16:51:78:4a:c7. -Are you sure you want to continue connecting (yes/no)? +The authenticity of host 'swan.unl.edu (129.93.227.88)' can't be established. +ECDSA key fingerprint is SHA256:qcyi6CEw1gUgumEghA+TcXFmu39MAO4Pyrt8rT6+ymk. +Are you sure you want to continue connecting (yes/no/[fingerprint])? {{< /highlight >}} -Type `yes` to indicate that you do intend to connect to Crane. +Type `yes` to indicate that you do intend to connect to Swan. You will then be prompted for your password. @@ -88,7 +86,7 @@ Enter the number which corresponds to your preferred option. You will know you h if your prompt changes to the following: {{< highlight bash >}} -[<username>@login.crane ~]$ +[<username>@login1.swan ~]$ {{< /highlight >}} ### Next Steps: diff --git a/content/good_hcc_practices/._index.md.swp b/content/good_hcc_practices/._index.md.swp deleted file mode 100644 index 45769c1032ad5c242e41d59c17632b65ad1883bd..0000000000000000000000000000000000000000 Binary files a/content/good_hcc_practices/._index.md.swp and /dev/null differ diff --git a/content/good_hcc_practices/_index.md b/content/good_hcc_practices/_index.md index 715b9d513ce328d7c7c8eb884dd1f6bfcb5c7a03..b1b1bf15926f47318cd08d743e09c68dc6eba1f9 100644 --- a/content/good_hcc_practices/_index.md +++ b/content/good_hcc_practices/_index.md @@ -4,8 +4,8 @@ description = "Guidelines for good HCC practices" weight = "95" +++ -Crane and Swan, our two high-performance clusters, are shared among all our users. -Sometimes, some users' activities may negatively impact the clusters and the users. +Swan, our high-performance cluster, is shared among all our users. +Sometimes, some users' activities may negatively impact the cluster and the users. To avoid this, we provide the following guidelines for good HCC practices. 
## Login Node @@ -67,7 +67,7 @@ the respective SLURM options, your application will use only 1 core by default. * **Avoid submitting large number of short (less than half an hour of running time) SLURM jobs.** The scheduler spends more time and memory in processing those jobs, which may cause problems and reduce the scheduler's responsiveness for everyone. Instead, group the short tasks into jobs that will run longer. -* **The maximum running time on our clusters is 7 days.** If your job needs more time than that, please consider +* **The maximum running time on our cluster is 7 days.** If your job needs more time than that, please consider improving the code, splitting the job into smaller tasks, or using checkpointing tools such as [DMTCP]({{< relref "dmtcp_checkpointing" >}}). * Before submitting a job, it is recommended to make sure that **you are executing the application correctly, you are passing the right arguments, and you don't have typos**. You can do this using an [interactive session]({{< relref "creating_an_interactive_job" >}}). diff --git a/content/handling_data/data_storage/_index.md b/content/handling_data/data_storage/_index.md index 016fb55ae3301584bd8331ec00c0313a5fcb8b8b..4c50893e22af07a0a31c40c08c5265ac3a1beeb4 100644 --- a/content/handling_data/data_storage/_index.md +++ b/content/handling_data/data_storage/_index.md @@ -36,7 +36,7 @@ environmental variable (i.e. '`cd $COMMON`') The common directory operates similarly to work and is mounted with **read and write capability to worker nodes all HCC Clusters**. This -means that any files stored in common can be accessed from Crane and Swan, making this directory ideal for items that need to be +means that any files stored in common can be accessed from Swan, making this directory ideal for items that need to be accessed from multiple clusters such as reference databases and shared data files. diff --git a/content/handling_data/data_storage/data_for_unmc_users_only.md b/content/handling_data/data_storage/data_for_unmc_users_only.md deleted file mode 100644 index 708c9140488620c85c0733c79d71ed11bd3e78fc..0000000000000000000000000000000000000000 --- a/content/handling_data/data_storage/data_for_unmc_users_only.md +++ /dev/null @@ -1,47 +0,0 @@ -+++ -title = "Data for UNMC Users Only" -description= "Data storage options for UNMC users" -weight = 60 -+++ - -{{% panel theme="danger" header="Sensitive and Protected Data" %}} HCC currently has no storage that is suitable for HIPAA or other PID -data sets. Users are not permitted to store such data on HCC machines. -Crane have a special directory, only for UNMC users. Please -note that this filesystem is still not suitable for HIPAA or other PID -data sets. -{{% /panel %}} - ---- -### Transferring files to this machine from UNMC. - -You will need to email us -at <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> to -gain access to this machine. Once you do, you can sftp to 10.14.250.1 -and upload your files. Note that sftp is your only option. You may use -different sftp utilities depending on your platform you are logging in -from. Email us if you need help with this. Once you are logged in, you -should be at `/volumes/UNMC1ZFS/[group]/[username]`, or -`/home/[group]/[username]`. Both are the same location and you will be -allowed to write files there. - -For Windows, learn more about logging in and uploading files -[here](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users). - -Using your uploaded files on Crane. 
---------------------------------------------- - -Using your -uploaded files is easy. Just go to -`/shared/unmc1/[group]/[username]` and your files will be in the same -place. You may notice that the directory is not available at times. This -is because the unmc1 directory is automounted. This means, if you try to -go to the directory, it will show up. Just "`cd`" to -`/shared/unmc1/[group]/[username]` and all of the files will be -there. - -If you have space requirements outside what is currently provided, -please -email <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> and -we will gladly discuss alternatives. - - diff --git a/content/handling_data/data_storage/integrating_box_with_hcc.md b/content/handling_data/data_storage/integrating_box_with_hcc.md index d9692ee505dc29dc36c22d75358f5c334e19c53e..4b3e0416610690b0d954e26f316c889250e67778 100644 --- a/content/handling_data/data_storage/integrating_box_with_hcc.md +++ b/content/handling_data/data_storage/integrating_box_with_hcc.md @@ -56,7 +56,7 @@ lftp box 7. To upload or download files, use `get` and `put` commands. For example: {{% panel theme="info" header="Transferring files" %}} {{< highlight bash >}} -[demo2@login.crane ~]$ lftp box +[demo2@login.swan ~]$ lftp box lftp demo2@example.edu@ftp.box.com:/> put myfile.txt lftp demo2@example.edu@ftp.box.com:/> get my_other_file.txt {{< /highlight >}} @@ -65,14 +65,14 @@ lftp demo2@example.edu@ftp.box.com:/> get my_other_file.txt 8. To download directories, use the `mirror` command. To upload directories, use the `mirror` command with the `-R` option. For example, to download a directory named `my_box-dir` to your current directory: {{% panel theme="info" header="Download a directory from Box" %}} {{< highlight bash >}} -[demo2@login.crane ~]$ lftp box +[demo2@login.swan ~]$ lftp box lftp demo2@example.edu@ftp.box.com:/> mirror my_box_dir {{< /highlight >}} {{% /panel %}} To upload a directory named `my_hcc_dir` to Box, use `mirror` with the `-R` option: {{% panel theme="info" header="Upload a directory to Box" %}} {{< highlight bash >}} -[demo2@login.crane ~]$ lftp box +[demo2@login.swan ~]$ lftp box lftp demo2@example.edu@ftp.box.com:/> mirror -R my_hcc_dir {{< /highlight >}} {{% /panel %}} diff --git a/content/handling_data/data_storage/linux_file_permissions.md b/content/handling_data/data_storage/linux_file_permissions.md index 4739cbb2118ed0d8c939c87fbf026956cb1b9048..03c76131d674ecdb7a2db4b7af2739d776586828 100644 --- a/content/handling_data/data_storage/linux_file_permissions.md +++ b/content/handling_data/data_storage/linux_file_permissions.md @@ -11,7 +11,7 @@ weight = 20 ## Opening a Terminal Window ----------------------- -Use your local terminal to connect to a cluster, or open a new terminal window on [Crane](https://crane.unl.edu). +Use your local terminal to connect to a cluster, or open a new terminal window on [Swan](https://swan.unl.edu). Click [here](https://hcc.unl.edu/docs/Quickstarts/connecting/) if you need help connecting to a cluster with a local terminal. diff --git a/content/handling_data/data_storage/using_attic.md b/content/handling_data/data_storage/using_attic.md index 71e1a0870c8ffc59c9c6e1ca2120d0ce08e5557a..96db1c65aa476a6276dd6c268c029856ab179936 100644 --- a/content/handling_data/data_storage/using_attic.md +++ b/content/handling_data/data_storage/using_attic.md @@ -32,8 +32,8 @@ cost, please see the ### Transfer Files Using Globus Connect The easiest and fastest way to access Attic is via Globus. 
You can -transfer files between your computer, our clusters ($HOME, $WORK, and $COMMON on -Crane or Swan), and Attic. Here is a detailed tutorial on +transfer files between your computer, our cluster ($HOME, $WORK, and $COMMON on +Swan), and Attic. Here is a detailed tutorial on how to set up and use [Globus Connect]({{< relref "/handling_data/data_transfer/globus_connect" >}}). For Attic, use the Globus Endpoint **hcc\#attic**. Your Attic files are located at `~, `which is a shortcut diff --git a/content/handling_data/data_storage/using_the_common_file_system.md b/content/handling_data/data_storage/using_the_common_file_system.md index 8f0206d4f3f4c787e44f5a46eb6aac69cc6a6146..3acb166f6bbd220b2b815d7d25e848a2ed8f9d98 100644 --- a/content/handling_data/data_storage/using_the_common_file_system.md +++ b/content/handling_data/data_storage/using_the_common_file_system.md @@ -7,7 +7,7 @@ weight = 30 ### Quick overview: - Connected read/write to all HCC HPC cluster resources – you will see - the same files "in common" on any HCC cluster (i.e. Crane and Swan). + the same files "in common" on any HCC cluster (i.e. Swan). - 30 TB Per-group quota at no charge – larger quota available for $105/TB/year - No backups are made! Don't be silly! Precious data should still be diff --git a/content/handling_data/data_transfer/connect_to_cb3_irods.md b/content/handling_data/data_transfer/connect_to_cb3_irods.md index c8a61e040bc0ff01ac1e088b80792bda5b05b10f..d4d6b4d06d3c5a064afabe1c51271791a7c98700 100644 --- a/content/handling_data/data_transfer/connect_to_cb3_irods.md +++ b/content/handling_data/data_transfer/connect_to_cb3_irods.md @@ -45,9 +45,9 @@ data transfers should use CyberDuck instead. 6. After logging in, a new explorer window will appear and you will be in your personal directory. You can transfer files or directories by dragging and dropping them to or from your local machine into the window. {{< figure src="/images/30442927.png" class="img-border" height="450" >}} -### Using the iRODS CLI tools from Crane/Swan +### Using the iRODS CLI tools from Swan -The iRODS icommand tools are available on Crane and Swan to use for data transfer to/from the clusters. +The iRODS icommand tools are available on Swan to use for data transfer to/from the clusters. They first require creating a small json configuration file. Create a directory named `~/.irods` first by running {{< highlight bash >}} diff --git a/content/handling_data/data_transfer/cyberduck.md b/content/handling_data/data_transfer/cyberduck.md index ee035fdf6e9a4793b6f11ac945e5bc10c2f464eb..77e596a0d9f3af7f819b39aad7082294e913c6a2 100644 --- a/content/handling_data/data_transfer/cyberduck.md +++ b/content/handling_data/data_transfer/cyberduck.md @@ -40,7 +40,7 @@ To add an HCC machine, in the bookmarks pane click the "+" icon: {{< figure src="/images/7274500.png" height="450" >}} Ensure the type of connection is SFTP. Enter the hostname of the machine -you wish to connect to (crane.unl.edu, swan.unl.edu) in the **Server** +you wish to connect to (swan.unl.edu) in the **Server** field, and your HCC username in the **Username** field. The **Nickname** field is arbitrary, so enter whatever you prefer. 
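The connect_to_cb3_irods.md hunk above breaks off just before that page's own command listing, so the following is only a hedged sketch of what the client-side setup typically looks like with the standard iRODS icommands; the `irods_environment.json` filename and the specific commands (`iinit`, `ils`, `iput`, `iget`) follow the usual icommands conventions and are assumptions here, not text taken from the page.

{{< highlight bash >}}
# Create the client configuration directory (standard icommands location)
mkdir -p ~/.irods
# The site-specific JSON configuration described on the page goes in
# ~/.irods/irods_environment.json; its contents are not reproduced here.
# Authenticate once; the credentials are cached for later icommands
iinit
# List the current iRODS collection, then move a file each way
ils
iput results.tar.gz
iget results.tar.gz
{{< /highlight >}}

Only commands from the stock icommands suite are shown; any host, zone, or port values belong in the JSON file that the page describes.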
diff --git a/content/handling_data/data_transfer/globus_connect/_index.md b/content/handling_data/data_transfer/globus_connect/_index.md index 20ffd31a9c8f60325f8d20a2e18f022ff9d661f9..417888bdacb2709d72e5ea18a2f97bffd04a408b 100644 --- a/content/handling_data/data_transfer/globus_connect/_index.md +++ b/content/handling_data/data_transfer/globus_connect/_index.md @@ -8,7 +8,7 @@ weight = 5 a fast and robust file transfer service that allows users to quickly move large amounts of data between computer clusters and even to and from personal workstations. This service has been made available for -Crane, Swan, and Attic. HCC users are encouraged to use Globus +Swan, and Attic. HCC users are encouraged to use Globus Connect for their larger data transfers as an alternative to slower and more error-prone methods such as scp and winSCP. @@ -16,7 +16,7 @@ more error-prone methods such as scp and winSCP. ### Globus Connect Advantages -- Dedicated transfer servers on Crane, Swan, and Attic allow +- Dedicated transfer servers on Swan, and Attic allow large amounts of data to be transferred quickly between sites. - A user can install Globus Connect Personal on his or her workstation @@ -39,7 +39,7 @@ the <a href="https://www.globusid.org/create" class="external-link">Globus Conn Accounts are free and grant users access to any Globus collection for which they are authorized. A collection is simply a file system to or from which a user transfers files. All HCC users are authorized to -access their own /home, /work, and /common directories on Crane and Swan via the Globus collections (named: `hcc#crane` and `hcc#swan`). Those who have leased Attic storage allocation can +access their own /home, /work, and /common directories on Swan via the Globus collections (named: `hcc#swan`). Those who have leased Attic storage allocation can access their /attic directories via the Globus collection `hcc#attic`. To initialize or activate the collection, users will be required to enter their HCC username, password, and Duo credentials for authentication. diff --git a/content/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints.md b/content/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints.md index bbab2cfdba89f4924c6ae07527b543ad98d2f2ff..fbf6f222c9bbf377d39795b82bc49ed965ef0e44 100644 --- a/content/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints.md +++ b/content/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints.md @@ -4,7 +4,7 @@ description = "How to activate HCC collections on Globus" weight = 20 +++ -You will not be able to transfer files to or from an HCC collection using Globus Connect without first activating the collection. Collections are available for Crane (`hcc#crane`), Swan, (`hcc#swan`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these collections and begin making transfers. +You will not be able to transfer files to or from an HCC collection using Globus Connect without first activating the collection. Collections are available for Swan, (`hcc#swan`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these collections and begin making transfers. 1. [Sign in](https://app.globus.org) to your Globus account using your campus credentials or your Globus ID (if you have one). If you use CILogo to authenticate, you will be redirected to your local campus/facility authentication page (UNL in this example) to enter your campus credentials. 
Once you've authenticated, you will be redirected back to your Globus dashboard. {{< figure src="/images/globus_login.png" >}} @@ -12,7 +12,7 @@ Then click on 'Collections' in the left sidebar. {{< figure src="/images/globus_click_collections.png" >}} -2. Find the collection you want by entering '`hcc#crane`', '`hcc#swan`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the collection, click the green 'activate' icon. On the following page, click 'continue'. +2. Find the collection you want by entering '`hcc#swan`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the collection, click the green 'activate' icon. On the following page, click 'continue'. {{< figure src="/images/globus_click_activate_icon.png" >}} {{< figure src="/images/globus_click_activate_continue.png" >}} diff --git a/content/handling_data/data_transfer/globus_connect/file_sharing.md b/content/handling_data/data_transfer/globus_connect/file_sharing.md index 3939c12680111f8842ddf98395c6cbd0674cc531..a44c486069f6976b3f9dc80412218fa712a72b94 100644 --- a/content/handling_data/data_transfer/globus_connect/file_sharing.md +++ b/content/handling_data/data_transfer/globus_connect/file_sharing.md @@ -5,7 +5,7 @@ weight = 50 +++ If you would like another colleague or researcher to have access to your -data, you may create a shared collection on Crane, Swan, or Attic. You can personally manage access to this collection and +data, you may create a shared collection on Swan, or Attic. You can personally manage access to this collection and give access to anybody with a Globus account (whether or not they have an HCC account). *Please use this feature responsibly by sharing only what is necessary and granting access only to trusted @@ -19,7 +19,7 @@ creating shares in your `home` directory. 1. Sign in to your Globus account, click on the 'File Manager' tab and search for the collection that you will use to host your shared collection. For example, if you would like to share data in your - Crane `work` directory, search for the `hcc#crane` collection. Once + Swan `work` directory, search for the `hcc#swan` collection. Once you have found the collection, it will need to be activated if it has not been already (see [collection activation instructions here]({{< relref "activating_hcc_cluster_endpoints" >}})). @@ -28,13 +28,13 @@ creating shares in your `home` directory. {{< figure src="/images/globus_share_select.png" >}} {{< figure src="/images/globus_share_share.png" >}} -2. Click on 'Add a Guest Collection'. In the 'Path' box, enter the full path to the directory you +1. Click on 'Add a Guest Collection'. In the 'Path' box, enter the full path to the directory you would like to share. Only files under this directory will be shared to the users you grant access. Enter a name for the collection and provide a short description if you wish. Finally, click 'Create Share'. {{< figure src="/images/globus_create_share.png" >}} -3. To share the collection with someone, click on 'Add Permissions -- Share With' under the 'Permissions' tab for the shared collection you just created. Next enter the *relative path* of the +1. To share the collection with someone, click on 'Add Permissions -- Share With' under the 'Permissions' tab for the shared collection you just created. Next enter the *relative path* of the directory that this user should be able to access. 
For example, if the source path of your shared collection is `/work/<groupid>/<userid>` but you would like your diff --git a/content/handling_data/data_transfer/globus_connect/file_transfers_between_endpoints.md b/content/handling_data/data_transfer/globus_connect/file_transfers_between_endpoints.md index 8c6acae771f17227adde0b0e777309f05b8448e9..a32cf8954b0b7cf7cff06055b2277848b93f34dc 100644 --- a/content/handling_data/data_transfer/globus_connect/file_transfers_between_endpoints.md +++ b/content/handling_data/data_transfer/globus_connect/file_transfers_between_endpoints.md @@ -7,7 +7,7 @@ weight = 30 To transfer files between HCC clusters, you will first need to [activate]({{< relref "/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints" >}}) the two collections you would like to use (the available collections -are: `hcc#crane` `hcc#swan`, and `hcc#attic`). Once +are: `hcc#swan`, and `hcc#attic`). Once that has been completed, follow the steps below to begin transferring files. (Note: You can also transfer files between an HCC collection and any other Globus collection for which you have authorized access. That @@ -27,7 +27,7 @@ purposes we use two HCC collections.) first. {{< figure src="/images/globus_select_transfer.png">}} -2. In the two "Collection" text boxes, enter the names of the two collections you would like to use (for example, `hcc#attic` and `hcc#crane`). Alternatively, you can select from one of your recently used, bookmarked, owned, or shared collections that are pulled up when you click on the text box to enter your search. Enter the +1. In the two "Collection" text boxes, enter the names of the two collections you would like to use (for example, `hcc#attic` and `hcc#swan`). Alternatively, you can select from one of your recently used, bookmarked, owned, or shared collections that are pulled up when you click on the text box to enter your search. Enter the directory paths for both the source and destination (the 'from' and 'to' paths on the respective collections). Press 'Enter' to view files under these directories. Select the files or directories you would @@ -36,7 +36,7 @@ purposes we use two HCC collections.) transfer. {{< figure src="/images/globus_start_transfer.png" >}} -3. Globus will display a message that your transfer was submitted successfully, and you can click on "View details" in this message to see the status of the transfer. You will also receive an email when the transfer has completed (this is especially helpful in the case of large, long-duration transfers). Finally, to see your newly transferred file(s) in the destination folder, select the "refresh list" icon for that collection. +1. Globus will display a message that your transfer was submitted successfully, and you can click on "View details" in this message to see the status of the transfer. You will also receive an email when the transfer has completed (this is especially helpful in the case of large, long-duration transfers). Finally, to see your newly transferred file(s) in the destination folder, select the "refresh list" icon for that collection. 
{{< figure src="/images/globus_refresh_list.png" >}} diff --git a/content/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations.md b/content/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations.md index 16c12650e0601801fafae6b87e32f68c624b1916..0e739ab49c42731bef687eb35c04cb793ad5a0b9 100644 --- a/content/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations.md +++ b/content/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations.md @@ -28,7 +28,7 @@ collections. From your Globus account, select the 'File Manager' tab from the left sidebar and enter the name of your new collection in the 'Collection' text box. Press 'Enter' and then navigate to the appropriate directory. Select "Transfer of Sync to.." from the right sidebar (or select the "two panels" - icon from the top right corner) and Enter the second collection (for example: `hcc#crane`, `hcc#swan`, or `hcc#attic`), + icon from the top right corner) and Enter the second collection (for example: `hcc#swan`, or `hcc#attic`), type or navigate to the desired directory, and initiate the file transfer by clicking on the blue arrow button. {{< figure src="/images/globus_personal_transfer.png" >}} diff --git a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md index e3bc34b70ce60f655fe7a6efcf4351e5dd5728f6..d8ef4194dad63f267ca7c0e819f0208e065f5a36 100644 --- a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md +++ b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md @@ -27,18 +27,18 @@ to their Globus account. To do so, follow these steps: 3. Use the command `globus login` to start the authorization procedure. A web address will be displayed on screen. Copy and paste this URL into your browser. -{{< figure src="/images/21071052.png" >}} +{{< figure src="/images/globus_cli_login.png" >}} 4. If you are not already logged into Globus, do so now. 5. Label the connection if desired, then click Allow to grant Globus the permissions outlined on the screen. -{{< figure src="/images/21071053.png" >}} +{{< figure src="/images/globus_cli_auth.png" >}} 6. Copy the Authorization Code and paste it into the prompt in your terminal. -{{< figure src="/images/21071054.png" >}} -{{< figure src="/images/21071055.png" >}} +{{< figure src="/images/globus_cli_auth_code.png" >}} +{{< figure src="/images/globus_cli_auth_paste.png" >}} 7. At this point, you can verify you are logged in: {{< highlight bash >}}globus whoami{{< /highlight >}} @@ -61,17 +61,17 @@ For example: {{< highlight bash >}}globus endpoint search hcc{{< /highlight >}} will display any endpoint with "hcc" in it's Display Name. -{{< figure src="/images/21071056.png" >}} +{{< figure src="/images/globus_cli_search.png" >}} To use the endpoint, you can refer it it by it's UUID number. To activate an endpoint, use the command `globus endpoint activate --web`: -{{< figure src="/images/21073509.png" >}} +{{< figure src="/images/globus_cli_activate_url.png" >}} Copy the given URL and paste it into the address bar of your web browser. If you are not already logged into the Globus website, you will be prompted to do so. Once you are logged in, you need to click the `Activate Now` button to activate the endpoint. 
-{{< figure src="/images/21073503.png" >}} +{{< figure src="/images/globus_cli_activate_now.png" >}} Once an endpoint is activated, it will remain activate for 7 days. You can now transfer and manipulate files on the remote endpoint. @@ -79,14 +79,14 @@ can now transfer and manipulate files on the remote endpoint. {{% notice info %}} To make it easier to use, we recommend saving the UUID number as a bash variable to make the commands easier to use. For example, we will -continue to use the above endpoint (Crane) by assigning its UUID code -to the variable `crane` as follows: -{{< figure src="/images/21073499.png" >}} +continue to use the above endpoint (Swan) by assigning its UUID code +to the variable `swan` as follows: +{{< figure src="/images/globus_cli_env_var.png" >}} This command must be repeated upon each new login or terminal session unless you save these in your environmental variables. If you do not wish to do this step, you can proceed by placing the correct UUID in -place of whenever you see `$crane`. +place of whenever you see `$swan`. {{% /notice %}} --- @@ -98,17 +98,17 @@ globus commands follow the format `globus <command> remote endpoint with the command `globus ls`. To list the files in the home directory on the remote endpoint, we would use the following command: -{{< figure src="/images/21073500.png" >}} +{{< figure src="/images/globus_cli_ls.png" >}} To make a directory on the remote endpoint, we would use the `globus mkdir` command. For example, to make a folder in the users work -directory on Crane, we would use the following command: -{{< figure src="/images/21073501.png" >}} +directory on Swan, we would use the following command: +{{< figure src="/images/globus_cli_mkdir.png" >}} To rename files on the remote endpoint, we can use the `globus rename` command. To rename the test file we just created above, we would use the command: -{{< figure src="/images/21073502.png" >}} +{{< figure src="/images/globus_cli_rename.png" >}} --- ### Single Item Transfers @@ -116,25 +116,25 @@ command: All transfers must take place between Globus endpoints. Even if you are transferring from an endpoint that you are already connected to, that endpoint must be activated in Globus. Here, we are transferring between -Crane and Swan. We have activated the Crane endpoint and saved its -UUID to the variable `$swan` as we did for `$crane` above. +Attic and Swan. We have activated the Swan endpoint and saved its +UUID to the variable `$attic` as we did for `$swan` above. To transfer files, we use the command `globus transfer`. The format of this command is `globus transfer <endpoint1>:<file_path> <endpoint2>:<file_path>`. For example, here we are -transferring the file `testfile.txt` from the home directory on Crane -to the home directory on Crane: -{{< figure src="/images/21073505.png" >}} +transferring the file `testfile.txt` from the home directory on Swan +to the home directory on Swan: +{{< figure src="/images/globus_cli_transfer_file.png" >}} You can then check the status of a transfer, or delete it all together, using the given Task ID: -{{< figure src="/images/21073506.png" >}} +{{< figure src="/images/globus_cli_transfer_status.png" >}} To transfer entire directories, simply specify a directory in the file path as opposed to an individual file. 
Below, we are transferring the -`output` directory from the home directory on Crane to the home -directory on Crane: -{{< figure src="/images/21073507.png" >}} +`output` directory from the home directory on Swan to the home +directory on Swan: +{{< figure src="/images/globus_cli_transfer_dir.png" >}} For additional details and information on other features of the Globus CLI, visit [Command Line Interface (CLI) Examples](https://docs.globus.org/cli/examples/) in the Globus documentation. diff --git a/content/handling_data/data_transfer/high_speed_data_transfers.md b/content/handling_data/data_transfer/high_speed_data_transfers.md index a15c52e10ee32ee57a79bc2fd008e98ac11d5d81..296736cda99d7d1c35735845e782bfdd68eb7fb6 100644 --- a/content/handling_data/data_transfer/high_speed_data_transfers.md +++ b/content/handling_data/data_transfer/high_speed_data_transfers.md @@ -4,7 +4,7 @@ description = "How to transfer files directly from the transfer servers" weight = 40 +++ -Crane, Swan, and Attic each have a dedicated transfer server with +Swan, and Attic each have a dedicated transfer server with 10 Gb/s connectivity that allows for faster data transfers than the login nodes. With [Globus Connect]({{< relref "globus_connect" >}}), users @@ -17,12 +17,11 @@ using these dedicated servers for data transfers: Cluster | Transfer server ----------|---------------------- -Crane | `crane-xfer.unl.edu` Swan | `swan-xfer.unl.edu` Attic | `attic-xfer.unl.edu` {{% notice info %}} Because the transfer servers are login-disabled, third-party transfers -between `crane-xfer`, and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}). +between `swan-xfer`, and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}). {{% /notice %}} diff --git a/content/handling_data/data_transfer/scp.md b/content/handling_data/data_transfer/scp.md index 66742873f30c4bb47f4f8f00706cad5b0febbca6..acbf5d58222217b16c862af1ad94651b27bb1f7d 100644 --- a/content/handling_data/data_transfer/scp.md +++ b/content/handling_data/data_transfer/scp.md @@ -23,26 +23,26 @@ $ scp <username>@<host>:<path_to_files> <username>@<host>:<path_to_files> For the local location, you do not need to specify the username or host. **When transferring to and from your local computer, the `scp` command should be ran on your computer, NOT from HCC clusters.** -### Uploading a file to Crane +### Uploading a file to Swan -Here is an example of file transfer to and from the Crane cluster. +Here is an example of file transfer to and from the Swan cluster. To upload the file `data.csv` in your current directory to your `$WORK` directory -on the Crane cluster, you would use the command: +on the Swan cluster, you would use the command: {{< highlight bash >}} -$ scp ./data.csv <user_name>@crane.unl.edu:/work/<group_name>/<user_name> +$ scp ./data.csv <user_name>@swan.unl.edu:/work/<group_name>/<user_name> {{< /highlight >}} where `<user_name>` and `<group_name>` are replaced with your user name and your group name. 
-### Downloading a file from Crane +### Downloading a file from Swan To download the file `data.csv` from your `$WORK` directory -on the Crane cluster to your current directory, you would use the command: +on the Swan cluster to your current directory, you would use the command: {{< highlight bash >}} -$ scp <user_name>@crane.unl.edu:/work/<group_name>/<user_name>/data.csv ./ +$ scp <user_name>@swan.unl.edu:/work/<group_name>/<user_name>/data.csv ./ {{< /highlight >}} ### Potential incompatibility with recent versions of scp @@ -67,7 +67,7 @@ versions of scp.* Some example error messages are listed below. {{< highlight bash >}} # Recursive copy incompatibility example. # <source_dir> is missing from the target path resulting in error: -$ scp -r <source_dir> <user_name>@crane.unl.edu:/work/<group_name>/<user_name>/ +$ scp -r <source_dir> <user_name>@swan.unl.edu:/work/<group_name>/<user_name>/ scp: realpath /work/<group_name>/<user_name>/<source_dir>: No such file scp: upload "/work/<group_name>/<user_name>/<source_dir>": path canonicalization failed scp: failed to upload directory <source_dir> to /work/<group_name>/<user_name>/ @@ -77,7 +77,7 @@ scp: failed to upload directory <source_dir> to /work/<group_name>/<user_name>/ {{< highlight bash >}} # Shell environment expansion incompatibility example. # $WORK is treated as a literal string and not expanded by the remote shell: -$ scp <user_name>@crane.unl.edu:'$WORK/path/to/file' . +$ scp <user_name>@swan.unl.edu:'$WORK/path/to/file' . scp: $WORK/path/to/file: No such file or directory {{< /highlight >}} {{% /panel %}} @@ -88,12 +88,12 @@ scp invocation to use the SCP protocol. {{< highlight bash >}} # Creates <source_dir> and its contents entirely at the target path: -$ scp -O -r <source_dir> <user_name>@crane.unl.edu:/work/<group_name>/<user_name>/ -# Uses the remote shell on crane.unl.edu to expand $WORK to the string +$ scp -O -r <source_dir> <user_name>@swan.unl.edu:/work/<group_name>/<user_name>/ +# Uses the remote shell on swan.unl.edu to expand $WORK to the string # that is <user_name>'s full "/work/<group_name>/<user_name>" path. # Note the single quotes to keep the local shell from where scp is invoked # from attempting to expand the $WORK variable: -$ scp -O <user_name>@crane.unl.edu:'$WORK/path/to/file' . +$ scp -O <user_name>@swan.unl.edu:'$WORK/path/to/file' . {{< /highlight >}} Details of the change are available at the OpenSSH release 8.8 [Future diff --git a/content/handling_data/data_transfer/using_rclone_with_hcc.md b/content/handling_data/data_transfer/using_rclone_with_hcc.md index 6f642e1f1bb140433dccd0b706e3718a92dfb289..cc7cc8a85ac5b62389a8bfa6a072dde56a92b295 100644 --- a/content/handling_data/data_transfer/using_rclone_with_hcc.md +++ b/content/handling_data/data_transfer/using_rclone_with_hcc.md @@ -12,7 +12,7 @@ This tool can be used to transfer files between HCC clusters and outside cloud p 1. You must be able to access your [NU Office365](http://office.com/) account before beginning this process. Contact your local campus IT support if you need help with initial account setup. -2. Open a browser on your local machine and navigate to the [On-Demand portal]({{< relref "/open_ondemand" >}}) for the cluster of your choice. We use Crane for this example: [https://crane-ood.unl.edu](https://crane-ood.unl.edu). Select `Desktop` under `Interactive Apps` in the menu at the top of the page to get a virtual desktop on the cluster. +2. 
Open a browser on your local machine and navigate to the [On-Demand portal]({{< relref "/open_ondemand" >}}) for the cluster of your choice. We use Swan for this example: [https://swan-ood.unl.edu](https://swan-ood.unl.edu). Select `Desktop` under `Interactive Apps` in the menu at the top of the page to get a virtual desktop on the cluster. {{< figure src="/images/rclone_select_virtual_desktop.png" width="500" class="img-border">}} Scroll down to the bottom of the next page, and click on the blue `Launch` button. When the resource is ready, click on the blue `Launch Desktop` button that appears on the next page. {{< figure src="/images/rclone_launch_desktop.png" width="500" class="img-border">}} @@ -22,7 +22,7 @@ On the virtual desktop, click on the `Terminal Emulator` icon at the bottom of t 3. At the command prompt in the shell that opens, load the `rclone` module by entering the command below at the prompt: {{% panel theme="info" header="Load the Rclone module" %}} {{< highlight bash >}} -[demo03@c0809.crane ~]$ module load rclone +[demo03@c0809.swan ~]$ module load rclone {{< /highlight >}} {{% /panel %}} diff --git a/content/handling_data/data_transfer/using_rclone_with_nextcloud.md b/content/handling_data/data_transfer/using_rclone_with_nextcloud.md index 6e34e9074f409860ee5fd088699b12f57e465cab..b3e708b62e42125aadf230d71e59bb781ff17434 100644 --- a/content/handling_data/data_transfer/using_rclone_with_nextcloud.md +++ b/content/handling_data/data_transfer/using_rclone_with_nextcloud.md @@ -67,7 +67,7 @@ After logging into the cluster of your choice, load the `rclone` module by enter {{% panel theme="info" header="Load the Rclone module" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ module load rclone +[appstest@login.swan ~]$ module load rclone {{< /highlight >}} {{% /panel %}} @@ -75,7 +75,7 @@ To add a new remote in Rclone, run `rclone config`: {{% panel theme="info" header="Load the rclone config" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone config +[appstest@login.swan ~]$ rclone config {{< /highlight >}} {{% /panel %}} @@ -83,7 +83,7 @@ Choose the new remote option and enter a name (here "HCCNC" is used). {{% panel theme="info" header="Create new remote" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone config +[appstest@login.swan ~]$ rclone config 2021/11/11 22:38:42 NOTICE: Config file "/work/demo/appstest/.config/rclone/rclone.conf" not found - using defaults No remotes found - make a new one n) New remote @@ -217,7 +217,7 @@ c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q -[appstest@login.crane ~]$ +[appstest@login.swan ~]$ {{< /highlight >}} {{% /panel %}} @@ -225,7 +225,7 @@ To verify things are correct, use the `rclone lsf` command to list the contents {{% panel theme="info" header="Test Connection" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone lsf HCCNC: +[appstest@login.swan ~]$ rclone lsf HCCNC: Documents/ Nextcloud Manual.pdf Photos/ @@ -240,8 +240,8 @@ To upload or download files, use the `rclone copy` command. For example: {{% panel theme="info" header="Transferring files" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone copy HCCNC:/SomeFile.txt ./ -[appstest@login.crane ~]$ rclone copy ./SomeFile.txt HCCNC:/ +[appstest@login.swan ~]$ rclone copy HCCNC:/SomeFile.txt ./ +[appstest@login.swan ~]$ rclone copy ./SomeFile.txt HCCNC:/ {{< /highlight >}} {{% /panel %}} @@ -250,7 +250,7 @@ ify a destination folder. 
{{% panel theme="info" header="Download a directory from NextCloud" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone copy HCCNC:/my_hcc_dir ./my_hcc_dir +[appstest@login.swan ~]$ rclone copy HCCNC:/my_hcc_dir ./my_hcc_dir {{< /highlight >}} {{% /panel %}} @@ -258,7 +258,7 @@ To upload a directory named `my_hcc_dir` to NextCloud, use `rclone copy`. {{% panel theme="info" header="Upload a directory to NextCloud" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone copy ./my_hcc_dir HCCNC:/my_hcc_dir +[appstest@login.swan ~]$ rclone copy ./my_hcc_dir HCCNC:/my_hcc_dir {{< /highlight >}} {{% /panel %}} @@ -267,7 +267,7 @@ updated by name, checksum, or time. The example below would sync the files of th {{% panel theme="info" header="transfer.sh" %}} {{< highlight bash >}} -[appstest@login.crane ~]$ rclone sync ./my_hcc_dir HCCNC:/my_hcc_dir +[appstest@login.swan ~]$ rclone sync ./my_hcc_dir HCCNC:/my_hcc_dir {{< /highlight >}} {{% /panel %}} diff --git a/content/handling_data/data_transfer/winscp.md b/content/handling_data/data_transfer/winscp.md index 17dd48185bdcd3f2a230bb6d993e801d4af35c1d..969ec866e34db88c228c63eb26b39cd4c5329150 100644 --- a/content/handling_data/data_transfer/winscp.md +++ b/content/handling_data/data_transfer/winscp.md @@ -12,11 +12,10 @@ Usually it is convenient to upload and download files between your personal comp and the HCC supercomputers through a Graphic User Interface (GUI). Download and install the third party application **WinSCP** to connect the file systems between your personal computer and the HCC supercomputers. -Below is a step-by-step installation guide. Here we use the HCC cluster **Crane** -for demonstration. To use the **Swan** cluster, replace `crane.unl.edu` -with `swan.unl.edu`. +Below is a step-by-step installation guide. Here we use the HCC cluster **Swan** +for demonstration. -1. On the first screen, type `crane.unl.edu` for Host name, enter your +1. On the first screen, type `swan.unl.edu` for Host name, enter your HCC account username and password for User name and Password. Then click on **Login**. diff --git a/content/open_ondemand/connecting_to_hcc_ondemand.md b/content/open_ondemand/connecting_to_hcc_ondemand.md index 78345d82b035633ef895a767ff8017fb46f1817a..34120cc9f7d67d0ab08a99fd99ceda166f0ff809 100644 --- a/content/open_ondemand/connecting_to_hcc_ondemand.md +++ b/content/open_ondemand/connecting_to_hcc_ondemand.md @@ -6,7 +6,6 @@ weight=10 To access HCC’s instance of Open OnDemand, use one of the following links. -- For Crane, visit: [https://crane-ood.unl.edu](https://crane-ood.unl.edu) - For Swan, visit: [https://swan-ood.unl.edu](https://swan-ood.unl.edu) Log in with your HCC username, password, and Duo credentials. @@ -18,6 +17,6 @@ Once you have successfully logged in, you will be directed to your OnDemand dash {{< figure src="/images/OOD_Dashboard_1.png" width="700" class="img-border">}} -To return to the Dashboard at any point, click on "Crane OnDemand" in the upper left hand corner of the window. +To return to the Dashboard at any point, click on "Swan OnDemand" in the upper left hand corner of the window. 
Next: [Managing and Transferring Files with HCC OnDemand]({{< relref "managing_and_transferring_files" >}}) diff --git a/content/submitting_jobs/_index.md b/content/submitting_jobs/_index.md index eec6fba5eb966a273a668a2f0073eb828fa6fdd1..75edc93f35a065a4534fe9157e8f78ca8e6ced9e 100644 --- a/content/submitting_jobs/_index.md +++ b/content/submitting_jobs/_index.md @@ -4,9 +4,9 @@ description = "How to submit jobs to HCC resources" weight = "50" +++ -Crane and Swan are managed by +Swan is managed by the [SLURM](https://slurm.schedmd.com) resource manager. -In order to run processing on Crane, you +In order to run processing on Swan, you must create a SLURM script that will run your processing. After submitting the job, SLURM will schedule your processing on an available worker node. @@ -18,7 +18,8 @@ Before writing a submit file, you may need to - [Creating a SLURM Submit File](#creating-a-slurm-submit-file) - [Submitting the job](#submitting-the-job) - [Checking Job Status](#checking-job-status) - - [Checking Job Start](#checking-job-start) + - [Checking Job Start](#checking-job-start) + - [Removing the Job](#removing-the-job) - [Next Steps](#next-steps) @@ -78,8 +79,8 @@ sleep 60 - **mem** Specify the real memory required per node in MegaBytes. If you exceed this limit, your job will be stopped. Note that for you - should ask for less memory than each node actually has. For Crane, the - max is 500GB. + should ask for less memory than each node actually has. For Swan, the + max is 2000GB. - **job-name** The name of the job. Will be reported in the job listing. - **partition** @@ -143,12 +144,12 @@ example if you are part of a [partition]({{< relref "/submitting_jobs/partitions you can use the `-p` option to `squeue`: {{< highlight batch >}} -$ squeue -p esquared +$ squeue -p guest JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) - 73435 esquared MyRandom tingting R 10:35:20 1 ri19n10 - 73436 esquared MyRandom tingting R 10:35:20 1 ri19n12 - 73735 esquared SW2_driv hroehr R 10:14:11 1 ri20n07 - 73736 esquared SW2_driv hroehr R 10:14:11 1 ri20n07 + 73435 guest MyRandom demo01 R 10:35:20 1 ri19n10 + 73436 guest MyRandom demo01 R 10:35:20 1 ri19n12 + 73735 guest SW2_driv demo02 R 10:14:11 1 ri20n07 + 73736 guest SW2_driv demo02 R 10:14:11 1 ri20n07 {{< /highlight >}} #### Checking Job Start @@ -158,19 +159,19 @@ command `squeue --start`. The output of the command will show the expected start time of the jobs. 
{{< highlight batch >}} -$ squeue --start --user lypeng +$ squeue --start --user demo03 JOBID PARTITION NAME USER ST START_TIME NODES NODELIST(REASON) - 5822 batch Starace lypeng PD 2013-06-08T00:05:09 3 (Priority) - 5823 batch Starace lypeng PD 2013-06-08T00:07:39 3 (Priority) - 5824 batch Starace lypeng PD 2013-06-08T00:09:09 3 (Priority) - 5825 batch Starace lypeng PD 2013-06-08T00:12:09 3 (Priority) - 5826 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority) - 5827 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority) - 5828 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority) - 5829 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority) - 5830 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority) - 5831 batch Starace lypeng PD 2013-06-08T00:14:09 3 (Priority) - 5832 batch Starace lypeng PD N/A 3 (Priority) + 5822 batch python demo03 PD 2013-06-08T00:05:09 3 (Priority) + 5823 batch python demo03 PD 2013-06-08T00:07:39 3 (Priority) + 5824 batch python demo03 PD 2013-06-08T00:09:09 3 (Priority) + 5825 batch python demo03 PD 2013-06-08T00:12:09 3 (Priority) + 5826 batch python demo03 PD 2013-06-08T00:12:39 3 (Priority) + 5827 batch python demo03 PD 2013-06-08T00:12:39 3 (Priority) + 5828 batch python demo03 PD 2013-06-08T00:12:39 3 (Priority) + 5829 batch python demo03 PD 2013-06-08T00:13:09 3 (Priority) + 5830 batch python demo03 PD 2013-06-08T00:13:09 3 (Priority) + 5831 batch python demo03 PD 2013-06-08T00:14:09 3 (Priority) + 5832 batch python demo03 PD N/A 3 (Priority) {{< /highlight >}} The output shows the expected start time of the jobs, as well as the diff --git a/content/submitting_jobs/app_specific/submitting_r_jobs.md b/content/submitting_jobs/app_specific/submitting_r_jobs.md index db2c25ce349e452323e3d0c7eab8f60bc9eb245e..8f39a3003c6880489014d0042391d68b7cb89890 100644 --- a/content/submitting_jobs/app_specific/submitting_r_jobs.md +++ b/content/submitting_jobs/app_specific/submitting_r_jobs.md @@ -169,7 +169,7 @@ mclapply(rep(4, 5), rnorm, mc.cores=16) Submitting a multinode MPI R job to SLURM is very similar to [Submitting an MPI Job]({{< relref "submitting_an_mpi_job" >}}), since both are running multicore jobs on a multiple nodes. -Below is an example of running Rmpi on Crane on 2 nodes and 32 cores: +Below is an example of running Rmpi on Swan on 2 nodes and 32 cores: {{% panel theme="info" header="Rmpi.submit" %}} {{< highlight batch >}} @@ -188,7 +188,7 @@ mpirun -n 1 R CMD BATCH Rmpi.R {{< /highlight >}} {{% /panel %}} -When you run Rmpi job on Crane, please use the line `export +When you run Rmpi job on Swan, please use the line `export OMPI_MCA_mtl=^psm` in your submit script. Regardless of how may cores your job uses, the Rmpi package should always be run with `mpirun -n 1` because it spawns additional processes dynamically. diff --git a/content/submitting_jobs/hcc_acknowledgment_credit.md b/content/submitting_jobs/hcc_acknowledgment_credit.md index 0f09cabf73425de6e7d828aacf6fbf8468bb4c0d..0f2122dc4ad4b503cd8cf847289da45a3bf3b0d5 100644 --- a/content/submitting_jobs/hcc_acknowledgment_credit.md +++ b/content/submitting_jobs/hcc_acknowledgment_credit.md @@ -56,7 +56,7 @@ exhausted. **Why this ratio?** -All nodes in the Crane batch partition can meet this CPU to memory +All nodes in the Swan batch partition can meet this CPU to memory ratio. 
**Why have this ratio?** diff --git a/content/submitting_jobs/job_dependencies.md b/content/submitting_jobs/job_dependencies.md index 384d09c3a312cc55337b3e5d95830eda361776da..b735eedf351d3a1b2b77afee3f2bfecd82d3eec8 100644 --- a/content/submitting_jobs/job_dependencies.md +++ b/content/submitting_jobs/job_dependencies.md @@ -103,7 +103,7 @@ To start the workflow, submit Job A first: {{% panel theme="info" header="Submit Job A" %}} {{< highlight batch >}} -[demo01@login.crane demo01]$ sbatch JobA.submit +[demo01@login.swan demo01]$ sbatch JobA.submit Submitted batch job 666898 {{< /highlight >}} {{% /panel %}} @@ -113,9 +113,9 @@ dependency: {{% panel theme="info" header="Submit Jobs B and C" %}} {{< highlight batch >}} -[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobB.submit +[demo01@login.swan demo01]$ sbatch -d afterok:666898 JobB.submit Submitted batch job 666899 -[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobC.submit +[demo01@login.swan demo01]$ sbatch -d afterok:666898 JobC.submit Submitted batch job 666900 {{< /highlight >}} {{% /panel %}} @@ -124,7 +124,7 @@ Finally, submit Job D as depending on both jobs B and C: {{% panel theme="info" header="Submit Job D" %}} {{< highlight batch >}} -[demo01@login.crane demo01]$ sbatch -d afterok:666899:666900 JobD.submit +[demo01@login.swan demo01]$ sbatch -d afterok:666899:666900 JobD.submit Submitted batch job 666901 {{< /highlight >}} {{% /panel %}} @@ -135,7 +135,7 @@ of the dependency. {{% panel theme="info" header="Squeue Output" %}} {{< highlight batch >}} -[demo01@login.crane demo01]$ squeue -u demo01 +[demo01@login.swan demo01]$ squeue -u demo01 JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON) 666899 batch JobB demo01 PD 0:00 1 (Dependency) 666900 batch JobC demo01 PD 0:00 1 (Dependency) diff --git a/content/submitting_jobs/monitoring_jobs.md b/content/submitting_jobs/monitoring_jobs.md index db9f68fa70a5d5674e99c9ccde86e2c1dfda9356..59a74a94bec940d63341faf58a35de65bec0bc80 100644 --- a/content/submitting_jobs/monitoring_jobs.md +++ b/content/submitting_jobs/monitoring_jobs.md @@ -35,7 +35,7 @@ sacct Lists all jobs by the current user and displays information such as JobID, JobName, State, and ExitCode. -{{< figure src="/images/21070053.png" height="150" >}} +{{< figure src="/images/sacct_generic.png" height="150" >}} Coupling this command with the --format flag will allow you to see more than the default information about a job. Fields to display should be @@ -47,7 +47,7 @@ a job, this command can be used: sacct --format JobID,JobName,Elapsed,MaxRSS {{< /highlight >}} -{{< figure src="/images/21070054.png" height="150" >}} +{{< figure src="/images/sacct_format.png" height="150" >}} Additional arguments and format field information can be found in [the SLURM documentation](https://slurm.schedmd.com/sacct.html). @@ -87,17 +87,17 @@ where `<NODE_ID>` is replaced by the name of the node where the monitored job is running. This information can be found out by looking at the squeue output under the `NODELIST` column. -{{< figure src="/images/21070055.png" width="700" >}} +{{< figure src="/images/srun_node_id.png" width="700" >}} ### Using `top` to monitor running jobs Once the interactive job begins, you can run `top` to view the processes on the node you are on: -{{< figure src="/images/21070056.png" height="400" >}} +{{< figure src="/images/srun_top.png" height="400" >}} Output for `top` displays each running process on the node. 
From the above image, we can see the various MATLAB processes being run by user -cathrine98. To filter the list of processes, you can type `u` followed +hccdemo. To filter the list of processes, you can type `u` followed by the username of the user who owns the processes. To exit this screen, press `q`. @@ -156,7 +156,7 @@ at the end of your submit script. `mem_report` can also be run as part of an interactive job: {{< highlight bash >}} -[demo13@c0218.crane ~]$ mem_report +[demo13@c0218.swan ~]$ mem_report Current memory usage for job 25745709 is: 2.57 MBs Maximum memory usage for job 25745709 is: 3.27 MBs {{< /highlight >}} diff --git a/content/submitting_jobs/partitions/_index.md b/content/submitting_jobs/partitions/_index.md index fad3ac3f7cd9fc4b7edc5fd9312767e24770ace1..91e325ca32afd8f787ea5521c7beaf824c63ea11 100644 --- a/content/submitting_jobs/partitions/_index.md +++ b/content/submitting_jobs/partitions/_index.md @@ -1,18 +1,14 @@ +++ title = "Available Partitions" -description = "Listing of partitions on Crane and Swan." +description = "Listing of partitions on Swan." scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"] css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"] weight=70 +++ -Partitions are used on Crane and Swan to distinguish different +Partitions are used on Swan to distinguish different resources. You can view the partitions with the command `sinfo`. -### Crane: - -[Full list for Crane]({{< relref "crane_available_partitions" >}}) - ### Swan: [Full list for Swan]({{< relref "swan_available_partitions" >}}) @@ -38,7 +34,7 @@ priority so it will run as soon as possible. Overall limitations of maximum job wall time. CPUs, etc. are set for all jobs with the default setting (when thea "–qos=" section is omitted) -and "short" jobs (described as above) on Crane and Swan. +and "short" jobs (described as above) on Swan. The limitations are shown in the following form. | | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User | @@ -93,7 +89,7 @@ to owned resources, this method is recommended to maximize job throughput. ### Guest Partition The `guest` partition can be used by users and groups that do not own -dedicated resources on Crane or Swan. Jobs running in the `guest` partition +dedicated resources on Swan. Jobs running in the `guest` partition will run on the owned resources with Intel OPA interconnect. The jobs are preempted when the resources are needed by the resource owners and are restarted on another node. @@ -107,24 +103,3 @@ interconnect. They are suitable for serial or single node parallel jobs. The nodes in this partition are subjected to be drained and move to our Openstack cloud when more cloud resources are needed without notice in advance. - -### Use of Infiniband or OPA - -Crane nodes use either Infiniband or Intel Omni-Path interconnects in -the batch partition. Most users don't need to worry about which one to -choose. 
Jobs will automatically be scheduled for either of them by the -scheduler. However, if the user wants to use one of the interconnects -exclusively, the SLURM constraint keyword is available. Here are the -examples: - -{{% panel theme="info" header="SLURM Specification: Omni-Path" %}} -{{< highlight bash >}} -#SBATCH --constraint=opa -{{< /highlight >}} -{{% /panel %}} - -{{% panel theme="info" header="SLURM Specification: Infiniband" %}} -{{< highlight bash >}} -#SBATCH --constraint=ib -{{< /highlight >}} -{{% /panel %}} diff --git a/content/submitting_jobs/partitions/crane_available_partitions.md b/content/submitting_jobs/partitions/crane_available_partitions.md deleted file mode 100644 index a799bacaab8a1d4e971d9ec64d32085f7835b307..0000000000000000000000000000000000000000 --- a/content/submitting_jobs/partitions/crane_available_partitions.md +++ /dev/null @@ -1,10 +0,0 @@ -+++ -title = "Available Partitions for Crane" -description = "List of available partitions for crane.unl.edu." -scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"] -css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"] -+++ - -### Crane: - -{{< table url="http://crane-head.unl.edu:8192/slurm/partitions/json" >}} diff --git a/static/images/OOD_Active_jobs_1.png b/static/images/OOD_Active_jobs_1.png index 1575f5dcc1262551839dfbf80bd829795266b72f..b3ad78b5c9de77528165151a492c8f006251d9c2 100644 Binary files a/static/images/OOD_Active_jobs_1.png and b/static/images/OOD_Active_jobs_1.png differ diff --git a/static/images/OOD_Active_jobs_2.png b/static/images/OOD_Active_jobs_2.png index d17c9ef74cf33458cbf55ef9372238848d61722c..3501ff5c0b33ad19c3a7942f49e9e5586ba3d8c0 100644 Binary files a/static/images/OOD_Active_jobs_2.png and b/static/images/OOD_Active_jobs_2.png differ diff --git a/static/images/OOD_Dashboard_1.png b/static/images/OOD_Dashboard_1.png index a1a75889560843ab16f34fd728c8ea86a09577a1..f2cd746c653b3acb950d34919bd07d800ff72e87 100644 Binary files a/static/images/OOD_Dashboard_1.png and b/static/images/OOD_Dashboard_1.png differ diff --git a/static/images/OOD_Delete_desktop_1.png b/static/images/OOD_Delete_desktop_1.png index b144ba8f3cbee78a69216a583d04455de516bb06..9e28522621eaa3003c185aec7e9e26f0d8509194 100644 Binary files a/static/images/OOD_Delete_desktop_1.png and b/static/images/OOD_Delete_desktop_1.png differ diff --git a/static/images/OOD_Desktop_1.png b/static/images/OOD_Desktop_1.png index 847173ef2408f465734f5792f053b7e6e1aabab7..098286049a4c3c5dca13b34eba57ce8088a70aea 100644 Binary files a/static/images/OOD_Desktop_1.png and b/static/images/OOD_Desktop_1.png differ diff --git a/static/images/OOD_Files_menu_1.png b/static/images/OOD_Files_menu_1.png index 39fc6584d6b3c76198fbe0c40b90234823dd8549..95dd5d8d0b02325c40f7fa17abe09aad4452bde8 100644 Binary files a/static/images/OOD_Files_menu_1.png and b/static/images/OOD_Files_menu_1.png differ diff --git a/static/images/OOD_Interactive_apps_1.png b/static/images/OOD_Interactive_apps_1.png index 
diff --git a/static/images/OOD_Interactive_apps_1.png b/static/images/OOD_Interactive_apps_1.png
index cd35bd4cc175eab8550e43ebdcd68a2ff0d7bdb9..f2a75202af60dc47f007baea72bf301580e057df 100644
Binary files a/static/images/OOD_Interactive_apps_1.png and b/static/images/OOD_Interactive_apps_1.png differ
diff --git a/static/images/OOD_Interactive_apps_2.png b/static/images/OOD_Interactive_apps_2.png
index 5fbe17e584da8543dc9cc53132ee504e1a9ef1db..f4058dbeaab5fb83bd60a63d250cd4b9a348baba 100644
Binary files a/static/images/OOD_Interactive_apps_2.png and b/static/images/OOD_Interactive_apps_2.png differ
diff --git a/static/images/OOD_Job_composer_1.png b/static/images/OOD_Job_composer_1.png
index 03ea2cdb98ec2c6b62f925cb7bfbb6cfb3786d43..19bc068766893e776e96beccf6a13098aa2e9c8a 100644
Binary files a/static/images/OOD_Job_composer_1.png and b/static/images/OOD_Job_composer_1.png differ
diff --git a/static/images/OOD_Job_composer_2.png b/static/images/OOD_Job_composer_2.png
index becd70cb7c482291ddf890b14ad40e2ec43728af..2c44655a60c29b5f9c3c83824c39ae0d93164302 100644
Binary files a/static/images/OOD_Job_composer_2.png and b/static/images/OOD_Job_composer_2.png differ
diff --git a/static/images/OOD_Jupyter_1.png b/static/images/OOD_Jupyter_1.png
index e915d22dd4059c73f37802365c0f2dd736ba133a..7fb0762b85a3590fca816e9e0f90dc5456f79abf 100644
Binary files a/static/images/OOD_Jupyter_1.png and b/static/images/OOD_Jupyter_1.png differ
diff --git a/static/images/OOD_Shell_1.png b/static/images/OOD_Shell_1.png
index 57711681aadd6f1273adfd3a1fe1e868a3dc6578..3e2a99e8e8b6d85f8751b7dec3a5dd45e2fe95ef 100644
Binary files a/static/images/OOD_Shell_1.png and b/static/images/OOD_Shell_1.png differ
diff --git a/static/images/OOD_Shell_2.png b/static/images/OOD_Shell_2.png
index e16fe44888a1a8267b625166e2f700ddd2335da6..4c9af4f212574ee71c6da94f671f69175b97d0d8 100644
Binary files a/static/images/OOD_Shell_2.png and b/static/images/OOD_Shell_2.png differ
diff --git a/static/images/OOD_Templates_1.png b/static/images/OOD_Templates_1.png
index bb66b6fa9c707f4473d7bf231f29b9df4da2a341..7e7dbdddf52a355ed8f8408e74efa30758bc236a 100644
Binary files a/static/images/OOD_Templates_1.png and b/static/images/OOD_Templates_1.png differ
diff --git a/static/images/Putty-win10X11.png b/static/images/Putty-win10X11.png
index 54d80875df15f89530518f5e9b17b7d94cc2e86a..cf1d8220678271cd7d4dbb9932145ada27e896ea 100644
Binary files a/static/images/Putty-win10X11.png and b/static/images/Putty-win10X11.png differ
diff --git a/static/images/Putty-win10XEYES.png b/static/images/Putty-win10XEYES.png
index b85691d1268270daeaad997a5494827bf182d188..0e7ce945ac715ca59ccdb4bb0685cbdcc3f0235a 100644
Binary files a/static/images/Putty-win10XEYES.png and b/static/images/Putty-win10XEYES.png differ
diff --git a/static/images/duo_app_approved.png b/static/images/duo_app_approved.png
new file mode 100644
index 0000000000000000000000000000000000000000..3938cc004d21360cb2a2d0cb06ce029017a1cb0c
Binary files /dev/null and b/static/images/duo_app_approved.png differ
diff --git a/static/images/duo_app_request.png b/static/images/duo_app_request.png
new file mode 100644
index 0000000000000000000000000000000000000000..5e28812405c3a70d72d5a90ee3086d9002cbeb54
Binary files /dev/null and b/static/images/duo_app_request.png differ
diff --git a/static/images/duo_login_pass.png b/static/images/duo_login_pass.png
new file mode 100644
index 0000000000000000000000000000000000000000..3471e5d1b91d18a366e4a4211f0157e514b0d8e1
Binary files /dev/null and b/static/images/duo_login_pass.png differ
diff --git a/static/images/duo_login_successful.png b/static/images/duo_login_successful.png
new file mode 100644
index 0000000000000000000000000000000000000000..4fd3f432355678005712619befa898c4dd8d4083
Binary files /dev/null and b/static/images/duo_login_successful.png differ
diff --git a/static/images/globus_cli_activate_now.png b/static/images/globus_cli_activate_now.png
new file mode 100644
index 0000000000000000000000000000000000000000..4e46fd390daa9980f0949560e95cd9000339b50b
Binary files /dev/null and b/static/images/globus_cli_activate_now.png differ
diff --git a/static/images/globus_cli_activate_url.png b/static/images/globus_cli_activate_url.png
new file mode 100644
index 0000000000000000000000000000000000000000..50db148a1883c01f3204337a4b09bfbfc348fba9
Binary files /dev/null and b/static/images/globus_cli_activate_url.png differ
diff --git a/static/images/globus_cli_auth.png b/static/images/globus_cli_auth.png
new file mode 100644
index 0000000000000000000000000000000000000000..af63d02799ea5c5130f51ea9ab3fdaa8d7cac71d
Binary files /dev/null and b/static/images/globus_cli_auth.png differ
diff --git a/static/images/globus_cli_auth_code_gen.png b/static/images/globus_cli_auth_code_gen.png
new file mode 100644
index 0000000000000000000000000000000000000000..7cf728418ac93e166465531111a67c424eeb4239
Binary files /dev/null and b/static/images/globus_cli_auth_code_gen.png differ
diff --git a/static/images/globus_cli_auth_paste.png b/static/images/globus_cli_auth_paste.png
new file mode 100644
index 0000000000000000000000000000000000000000..c2537bb6abf402e34f75d90e7ff3883757ecbaed
Binary files /dev/null and b/static/images/globus_cli_auth_paste.png differ
diff --git a/static/images/globus_cli_env_var.png b/static/images/globus_cli_env_var.png
new file mode 100644
index 0000000000000000000000000000000000000000..7ff4c2596a00053012c12e0f10314c8eb2c9984d
Binary files /dev/null and b/static/images/globus_cli_env_var.png differ
diff --git a/static/images/globus_cli_login.png b/static/images/globus_cli_login.png
new file mode 100644
index 0000000000000000000000000000000000000000..564d604eb2ac3e178507f3bddcd589f1ec06c7fd
Binary files /dev/null and b/static/images/globus_cli_login.png differ
diff --git a/static/images/globus_cli_ls.png b/static/images/globus_cli_ls.png
new file mode 100644
index 0000000000000000000000000000000000000000..d86fcd1a3a597f007c81ef92fea1fd6c9dc32679
Binary files /dev/null and b/static/images/globus_cli_ls.png differ
diff --git a/static/images/globus_cli_mkdir.png b/static/images/globus_cli_mkdir.png
new file mode 100644
index 0000000000000000000000000000000000000000..c2ec501f9debb44733d8f2d774d463dcb85d903c
Binary files /dev/null and b/static/images/globus_cli_mkdir.png differ
diff --git a/static/images/globus_cli_rename.png b/static/images/globus_cli_rename.png
new file mode 100644
index 0000000000000000000000000000000000000000..b5feb8e2cab1fe12c9000910a4c3feb6893124d9
Binary files /dev/null and b/static/images/globus_cli_rename.png differ
diff --git a/static/images/globus_cli_search.png b/static/images/globus_cli_search.png
new file mode 100644
index 0000000000000000000000000000000000000000..cb33f066cfa3784df002063d735291e8c458bc90
Binary files /dev/null and b/static/images/globus_cli_search.png differ
diff --git a/static/images/globus_cli_transfer_dir.png b/static/images/globus_cli_transfer_dir.png
new file mode 100644
index 0000000000000000000000000000000000000000..f2b8952b26f156e31d904f211fb045b3f2147542
Binary files /dev/null and b/static/images/globus_cli_transfer_dir.png differ
diff --git a/static/images/globus_cli_transfer_file.png b/static/images/globus_cli_transfer_file.png
new file mode 100644
index 0000000000000000000000000000000000000000..f955ddeb81162e711bd048845f86ce0892f1e819
Binary files /dev/null and b/static/images/globus_cli_transfer_file.png differ
diff --git a/static/images/globus_cli_transfer_status.png b/static/images/globus_cli_transfer_status.png
new file mode 100644
index 0000000000000000000000000000000000000000..80336b4563d87140743287d32ab1e581c66dbde9
Binary files /dev/null and b/static/images/globus_cli_transfer_status.png differ
diff --git a/static/images/jupyterNew.png b/static/images/jupyterNew.png
index 9ccdf386c75ebcc96c380cb427d6e5d6ab342b35..e3fdd6723b8e97bd0a888ed9ef335e7499434fe2 100644
Binary files a/static/images/jupyterNew.png and b/static/images/jupyterNew.png differ
diff --git a/static/images/jupyter_sas_code.png b/static/images/jupyter_sas_code.png
new file mode 100644
index 0000000000000000000000000000000000000000..02d45be7145b515190af580ca3223c8d6fd37d9e
Binary files /dev/null and b/static/images/jupyter_sas_code.png differ
diff --git a/static/images/putty_duo.png b/static/images/putty_duo.png
new file mode 100644
index 0000000000000000000000000000000000000000..e8e76d16f1677c110f85ef89f4bc71d0ecae59cb
Binary files /dev/null and b/static/images/putty_duo.png differ
diff --git a/static/images/putty_initial.png b/static/images/putty_initial.png
new file mode 100644
index 0000000000000000000000000000000000000000..1003e36e96afa75b1ca0b711450e6eb7bf32df9d
Binary files /dev/null and b/static/images/putty_initial.png differ
diff --git a/static/images/putty_password.png b/static/images/putty_password.png
new file mode 100644
index 0000000000000000000000000000000000000000..c3ad4ac3c95bf716bada5dc2d0b5d0ca3380617b
Binary files /dev/null and b/static/images/putty_password.png differ
diff --git a/static/images/putty_share_connection1.png b/static/images/putty_share_connection1.png
index 38e30b9fdd69d3df61f32baba545322e90533a87..1ccb99760dae87208596d30146e6725e827c643a 100644
Binary files a/static/images/putty_share_connection1.png and b/static/images/putty_share_connection1.png differ
diff --git a/static/images/putty_share_connection2.png b/static/images/putty_share_connection2.png
index 199ed95fa613e884d355c329663e00600f96cfaa..ea0da8a89ec61bf9f0b162d29dbaccdd180a7d68 100644
Binary files a/static/images/putty_share_connection2.png and b/static/images/putty_share_connection2.png differ
diff --git a/static/images/putty_share_connection3.png b/static/images/putty_share_connection3.png
index e3547d35464972583ca36fbcabc97cfddd777a99..8a6058d36380d1c54454fa24a2eb444aa123707d 100644
Binary files a/static/images/putty_share_connection3.png and b/static/images/putty_share_connection3.png differ
diff --git a/static/images/putty_username.png b/static/images/putty_username.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed1ed48e2fb436d606fc53681399305f1195e4a8
Binary files /dev/null and b/static/images/putty_username.png differ
diff --git a/static/images/sacct_format.png b/static/images/sacct_format.png
new file mode 100644
index 0000000000000000000000000000000000000000..d7d6e775009013828f018ef00d2de0ff3db57437
Binary files /dev/null and b/static/images/sacct_format.png differ
diff --git a/static/images/sacct_generic.png b/static/images/sacct_generic.png
new file mode 100644
index 0000000000000000000000000000000000000000..41e7ec6701c6d3c0427f86c22fa35510b509ba6b
Binary files /dev/null and b/static/images/sacct_generic.png differ
diff --git a/static/images/sas_1.png b/static/images/sas_1.png
index 02a877ebeafab845715494572594b2900d60e056..d89136e9558a91b1ab92548af0d8768249f64084 100644
Binary files a/static/images/sas_1.png and b/static/images/sas_1.png differ
diff --git a/static/images/srun_node_id.png b/static/images/srun_node_id.png
new file mode 100644
index 0000000000000000000000000000000000000000..91b6636f5cb4f715e2b4fd668f4794b52da5decb
Binary files /dev/null and b/static/images/srun_node_id.png differ
diff --git a/static/images/srun_top.png b/static/images/srun_top.png
new file mode 100644
index 0000000000000000000000000000000000000000..90e924d943d4db535d74c1cd658cec79f824a27f
Binary files /dev/null and b/static/images/srun_top.png differ
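The interconnect hunk removed above shows only the bare `#SBATCH --constraint=opa` / `#SBATCH --constraint=ib` directives. For context, here is a minimal sketch of how such a constraint might fit into a complete SLURM submit script; the job name, resource requests, module name, and executable are illustrative assumptions rather than values taken from this changeset.

{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=ib_constraint_demo   # illustrative job name
#SBATCH --nodes=2                       # illustrative resource request
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00
#SBATCH --mem-per-cpu=1024
#SBATCH --constraint=ib                 # restrict the job to Infiniband nodes (opa for Omni-Path)
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

module load openmpi                     # hypothetical module name; load whatever MPI stack the code was built with
srun ./mpi_hello                        # hypothetical executable run across the allocated nodes
{{< /highlight >}}

The only line tied to the removed text is the `--constraint` directive; everything else is a generic batch-script skeleton and should be adjusted to the target cluster and application.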