diff --git a/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md b/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md index 29014762153930db9fb81ef5eaeef81f642dcbd3..50ede1cc6671e595d57c67fe90684ccd6a94a285 100644 --- a/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md +++ b/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md @@ -22,7 +22,7 @@ state-of-the-art supercomputing resources. **Logging In** ``` syntaxhighlighter-pre -ssh tusker.unl.edu -l demoXXXX +ssh crane.unl.edu -l demoXXXX ``` **[Cypwin Link](http://cygwin.com/install.html)** @@ -49,7 +49,7 @@ two folders, `serial\_f90` and `parallel\_f90`, in this folder ``` syntaxhighlighter-pre $ ls -$ scp -r ./demo_code <username>@tusker.unl.edu:/work/demo/<username> +$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username> <enter password> ``` @@ -59,7 +59,7 @@ Serial Job First, you need to login to the cluster ``` syntaxhighlighter-pre -$ ssh <username>@tusker.unl.edu +$ ssh <username>@crane.unl.edu <enter password> ``` @@ -133,14 +133,14 @@ code. It uses MPI for communication between the parallel processes. $ mpif90 fortran_mpi.f90 -o fortran_mpi.x ``` -Next, we will submit the MPI application to the Tusker cluster scheduler +Next, we will submit the MPI application to the cluster scheduler using the file `submit_tusker.mpi`. ``` syntaxhighlighter-pre $ qsub submit_tusker.mpi ``` -The Tusker cluster scheduler will pick machines (possibly several, +The cluster scheduler will pick machines (possibly several, depending on availability) to run the parallel MPI application. You can check the status of the job the same way you did with the Serial job: diff --git a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md index ba18fd08924a2d40eb7fbd5bbce34ce16145dfe2..28b25aa5027206c22b448fe4544288dd6ee91630 100644 --- a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md +++ b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md @@ -4,7 +4,7 @@ description = "Example of how to profile Ray using Allinea Performance Reports" +++ Simple example of using [Ray]({{< relref "/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray" >}}) -with Allinea PerformanceReports (`perf-report`) on Tusker is shown below: +with Allinea PerformanceReports (`perf-report`) is shown below: {{% panel theme="info" header="ray_perf_report.submit" %}} {{< highlight batch >}} diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md index 6028edfaac51fd8bfbc21ebb53b40fe01036618d..94fbd172aa458b6ced73f7927d1c42d5eba58c1a 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md @@ -64,5 +64,5 @@ The basic Clustal Omega output produces one alignment file in the specified outp ### Useful Information -In order to test the Clustal Omega performance on Tusker, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, 
`data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega on Tusker are shown on the table below: +In order to test the Clustal Omega performance, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega are shown on the table below: {{< readfile file="/static/html/clustal_omega.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md index 4ea2bb19101f678ffb45ce1202faef2c2c08919a..7f29d4a32a43422a08d435f31b3d51ebb95d84f0 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md @@ -59,5 +59,5 @@ Oases produces two additional output files: `transcripts.fa` and `contig-orderin ### Useful Information -In order to test the Oases (oases/0.2.8) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases on Tusker are shown in the table below: +In order to test the Oases (oases/0.2.8) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases are shown in the table below: {{< readfile file="/static/html/oases.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md index cea233648586ea25b4524524c90da741f65acf8f..795d16233e8487cd19b9834185433a55884eb13d 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md @@ -38,7 +38,7 @@ Ray supports both paired-end (`-p`) and single-end reads (`-s`). Moreover, Ray c Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`). Ray supports multiple file formats such as `fasta`, `fa`, `fasta.gz`, `fa.gz, `fasta.bz2`, `fa.bz2`, `fastq`, `fq`, `fastq.gz`, `fq.gz`, `fastq.bz2`, `fq.bz2`, `sff`, `csfasta`, `csfa`. -Simple SLURM script for running Ray on Tusker with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below: +Simple SLURM script for running Ray with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below: {{% panel header="`ray.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -76,5 +76,5 @@ One of the most important results are: ### Useful Information -In order to test the Ray performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Ray on Tusker are shown in the table below: +In order to test the Ray performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray are shown in the table below: {{< readfile file="/static/html/ray.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md index cd3b36d31d9914a6c0ac0c6a9dec53d91635d8dd..30b83f5686d6baff087cd1d0e544b2a717e2e5b2 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md @@ -94,7 +94,7 @@ q=input_reads.fq After creating the configuration file **configFile**, the next step is to run the assembler using this file. -Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUSs` and `50GB of RAM` on Tusker is shown below: +Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUSs` and `50GB of RAM` is shown below: {{% panel header="`soapdenovo2.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -128,7 +128,7 @@ output31.contig output31.edge.gz output31.links output31.p ### Useful Information -In order to test the SOAPdenovo2 (soapdenovo2/r240) performance on Tusker, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below: +In order to test the SOAPdenovo2 (soapdenovo2/r240) performance, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below: {{< readfile file="/static/html/soapdenovo2.html" >}} In general, SOAPdenovo2 is a memory intensive assembler that requires approximately 30-60 GB memory for assembling 50 million reads. However, SOAPdenovo2 is a fast assembler and it takes around an hour to assemble 50 million reads. diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md index 1118484f6b60035b91d3835df19bf7318da04b46..3147c63996f152d32baec4b1cb6c876a60df5472 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md @@ -52,7 +52,7 @@ Each step may be run as its own job, providing a workaround for the single job w ### Useful Information -In order to test the Trinity (trinity/r2014-04-13p1) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity on Tusker are shown in the table below: +In order to test the Trinity (trinity/r2014-04-13p1) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Trinity are shown in the table below: {{< readfile file="/static/html/trinity.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md index 2635bdbd1452bf3c569fc63e7ad57993ebf1a900..427424d769b0a9e7e609fd22a05d7088793b0053 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md @@ -18,5 +18,5 @@ Each step of Velvet (**velveth** and **velvetg**) may be run as its own job. The ### Useful Information -In order to test the Velvet (velvet/1.2) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet on Tusker are shown in the table below: +In order to test the Velvet (velvet/1.2) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet are shown in the table below: {{< readfile file="/static/html/velvet.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md index 4cd1db59fa9de90fe464200a613062925bdb32a3..0b18b6cd09e57f8328c01a97a3e83f2af063def9 100644 --- a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md +++ b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md @@ -22,7 +22,7 @@ $ scythe --help {{< /highlight >}} -Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` for Tusker is shown below: +Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` is shown below: {{% panel header="`scythe.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -52,5 +52,5 @@ Scythe returns fastq file of reads with removed adapter sequences. ### Useful Information -In order to test the Scythe (scythe/0.991) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe on Tusker are shown in the table below: +In order to test the Scythe (scythe/0.991) performance , we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Scythe are shown in the table below: {{< readfile file="/static/html/scythe.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md index eb6cc4e775d7c7bd59d3bd91144b2e1ce8afd910..6062f2cc9e3e57c9ec20564647b2b260f527143b 100644 --- a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md +++ b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md @@ -79,5 +79,5 @@ Sickle returns fastq file of reads with trimmed low quality bases from both 3' a ### Useful Information -In order to test the Sickle (sickle/1.210) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle on Tusker are shown in the table below: +In order to test the Sickle (sickle/1.210) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle are shown in the table below: {{< readfile file="/static/html/sickle.html" >}} diff --git a/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md b/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md index aec31a28dff7bd68ab4aa8a2d99c89935a62a6a8..2efd86d9809458b58df3a384884453df5273a0eb 100644 --- a/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md +++ b/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md @@ -3,7 +3,7 @@ title = "Running OLAM at HCC" description = "How to run the OLAM (Ocean Land Atmosphere Model) on HCC resources." +++ -### OLAM compilation on Tusker +### OLAM compilation ##### pgi/11 compilation with mpi and openmp enabled 1. Load modules: diff --git a/content/connecting/for_windows_users.md b/content/connecting/for_windows_users.md index 9d915ca4e06c8993892f237acd1368afab3f5627..1e913b5de61d36a8c58a580dc5ce7507821c5896 100644 --- a/content/connecting/for_windows_users.md +++ b/content/connecting/for_windows_users.md @@ -30,16 +30,16 @@ Users]({{< relref "/connecting/for_maclinux_users" >}}). -------------- For Windows 10 users, use the Command Prompt, accessed by entering `cmd` in the start menu, to access to the HCC supercomputers. In the Command Prompt, -type `ssh <username>@tusker.unl.edu` and the corresponding password -to get access to the HCC cluster **Tusker**. Note that <username> +type `ssh <username>@crane.unl.edu` and the corresponding password +to get access to the HCC cluster **Crane**. Note that <username> should be replaced by your HCC account username. If you do not have a HCC account, please contact a HCC specialist ({{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu)) or go to http://hcc.unl.edu/newusers. -To use the **Crane** cluster, replace tusker.unl.edu with crane.unl.edu. +To use the **Rhino** cluster, replace crane.unl.edu with rhino.unl.edu. 
{{< highlight bash >}} -C:\> ssh <username>@tusker.unl.edu +C:\> ssh <username>@crane.unl.edu C:\> <password> {{< /highlight >}} @@ -55,10 +55,10 @@ PuTTY: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html or [Direct Link](https://the.earth.li/~sgtatham/putty/latest/w32/putty.exe) -Here we use the HCC cluster **Tusker** for demonstration. To use the -**Crane** or cluster, replace `tusker.unl.edu` with `crane.unl.edu`. +Here we use the HCC cluster **Crane** for demonstration. To use the +**Rhino** or cluster, replace `crane.unl.edu` with `rhino.unl.edu`. -1. On the first screen, type `tusker.unl.edu` for Host Name, then click +1. On the first screen, type `crane.unl.edu` for Host Name, then click **Open**. {{< figure src="/images/3178523.png" height="450" >}} 2. On the second screen, click on **Yes**. @@ -118,28 +118,28 @@ For best results when transfering data to and from the clusters, refer to [Handl For Windows users, file transferring between your personal computer and the HCC supercomputers can be achieved through the command `scp`. -Here we use **Tusker** for example. **The following commands should be +Here we use **Crane** for example. **The following commands should be executed from your computer. ** **Uploading from local to remote** {{< highlight bash >}} -C:\> scp -r .\<folder name> <username>@tusker.unl.edu:/work/<group name>/<username> +C:\> scp -r .\<folder name> <username>@crane.unl.edu:/work/<group name>/<username> {{< /highlight >}} The above command line transfers a folder from the current directory (`.\`) of the your computer to the `$WORK` directory of the HCC -supercomputer, Tusker. Note that you need to replace `<group name>` +supercomputer, Crane. Note that you need to replace `<group name>` and `<username>` with your HCC group name and username. **Downloading from remote to local** {{< highlight bash >}} -C:\> scp -r <username>@tusker.unl.edu:/work/<group name>/<username>/<folder name> .\ +C:\> scp -r <username>@crane.unl.edu:/work/<group name>/<username>/<folder name> .\ {{< /highlight >}} The above command line transfers a folder from the `$WORK` directory of -the HCC supercomputer, Tusker, to the current directory (`.\`) of the +the HCC supercomputer, Crane, to the current directory (`.\`) of the your computer. @@ -151,11 +151,11 @@ Usually it is convenient to upload and download files between your personal comp and the HCC supercomputers through a Graphic User Interface (GUI). Download and install the third party application **WinSCP** to connect the file systems between your personal computer and the HCC supercomputers. -Below is a step-by-step installation guide. Here we use the HCC cluster **Tusker** -for demonstration. To use the **Crane** cluster, replace `tusker.unl.edu` -with `crane.unl.edu`. +Below is a step-by-step installation guide. Here we use the HCC cluster **Crane** +for demonstration. To use the **Rhino** cluster, replace `crane.unl.edu` +with `rhino.unl.edu`. -1. On the first screen, type `tusker.unl.edu` for Host name, enter your +1. On the first screen, type `crane.unl.edu` for Host name, enter your HCC account username and password for User name and Password. Then click on **Login**. diff --git a/content/handling_data/data_storage/_index.md b/content/handling_data/data_storage/_index.md index 33f75fddb02cc63d68ab4c867e9d4c778a83ffd0..4016b287790c588cf5d3a908a353853f3d13a7fd 100644 --- a/content/handling_data/data_storage/_index.md +++ b/content/handling_data/data_storage/_index.md @@ -37,7 +37,7 @@ environmental variable (i.e. 
'`cd $COMMON`')
The common directory operates similarly to work and is mounted with
**read and write capability to worker nodes all HCC Clusters**. This
-means that any files stored in common can be accessed from Crane and Tusker, making this directory ideal for items that need to be
+means that any files stored in common can be accessed from Crane and Rhino, making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared
data files.
diff --git a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md
index cbc6014cde84ce633a86efef469b2d36317f2d46..5586d889fb63349c9c0f9789652a274f6f5d8223 100644
--- a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md
+++ b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md
@@ -75,14 +75,14 @@ can now transfer and manipulate files on the remote endpoint.
{{% notice info %}}
To make it easier to use, we recommend saving the UUID number as a bash
variable to make the commands easier to use. For example, we will
-continue to use the above endpoint (Tusker) by assigning its UUID code
-to the variable \`tusker\` as follows:
+continue to use the above endpoint (Crane) by assigning its UUID code
+to the variable \`crane\` as follows:
{{< figure src="/images/21073499.png" >}}
This command must be repeated upon each new login or terminal session
unless you save these in your environmental variables. If you do not
wish to do this step, you can proceed by placing the correct UUID in
-place of whenever you see \`$tusker\`.
+place of whenever you see \`$crane\`.
{{% /notice %}}
---
@@ -98,7 +98,7 @@ command:
To make a directory on the remote endpoint, we would use the \`globus
mkdir\` command. For example, to make a folder in the users work
-directory on Tusker, we would use the following command:
+directory on Crane, we would use the following command:
{{< figure src="/images/21073501.png" >}}
To rename files on the remote endpoint, we can use the \`globus rename\`
@@ -112,14 +112,14 @@ command:
All transfers must take place between Globus endpoints. Even if you are
transferring from an endpoint that you are already connected to, that
endpoint must be activated in Globus. Here, we are transferring between
-Crane and Tusker. We have activated the Crane endpoint and saved its
-UUID to the variable $crane as we did for $tusker above.
+Crane and Rhino. We have activated the Rhino endpoint and saved its
+UUID to the variable $rhino as we did for $crane above.
To transfer files, we use the command \`globus transfer\`. The format of
this command is \`globus transfer <endpoint1>:<file\_path>
<endpoint2>:<file\_path>\`. For example, here we are transferring the
file \`testfile.txt\` from the home directory on Crane
-to the home directory on Tusker:
+to the home directory on Rhino:
{{< figure src="/images/21073505.png" >}}
You can then check the status of a transfer, or delete it all together,
@@ -129,7 +129,7 @@ using the given Task ID:
To transfer entire directories, simply specify a directory in the file
path as opposed to an individual file. Below, we are transferring the
\`output\` directory from the home directory on Crane to the home
-directory on Tusker:
+directory on Rhino:
{{< figure src="/images/21073507.png" >}}
For additional details and information on other features of the Globus
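
The Globus CLI steps referenced in the hunks above appear only as screenshots. A minimal sketch of that workflow is shown below for reference; it is not part of the patch, the endpoint UUIDs, group names, and paths are placeholders, and it assumes a current `globus` CLI, whose exact commands and flags may differ from the version captured in the figures.

```bash
# Illustrative only -- not part of the patch above. UUIDs are placeholders;
# look up the real ones with `globus endpoint search <endpoint name>`.
crane="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # Crane endpoint UUID (placeholder)
rhino="11111111-2222-3333-4444-555555555555"   # Rhino endpoint UUID (placeholder)

# List and create directories on a remote endpoint.
globus ls "$crane:/work/<group name>/<username>"
globus mkdir "$crane:/work/<group name>/<username>/new_folder"

# Transfer a single file from Crane to Rhino (both endpoints must be activated).
globus transfer "$crane:/home/<username>/testfile.txt" \
                "$rhino:/home/<username>/testfile.txt"

# Transfer a whole directory by giving directory paths and --recursive.
globus transfer --recursive "$crane:/home/<username>/output" \
                            "$rhino:/home/<username>/output"

# Check on, or cancel, a transfer using the Task ID printed by `globus transfer`.
globus task show <task_id>
globus task cancel <task_id>
```

As the notice in the hunk points out, variables such as `$crane` and `$rhino` last only for the current shell session unless they are added to your shell startup files.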