Commit 9b56b049 authored by Carrie A Brown

Merge branch '24-Remove-Tucker' into 'master'

Removed Tucker

Closes #24

See merge request !172
parents cd9a7985 662dbff7
Showing 43 additions and 43 deletions
@@ -22,7 +22,7 @@ state-of-the-art supercomputing resources.
 **Logging In**
 ``` syntaxhighlighter-pre
-ssh tusker.unl.edu -l demoXXXX
+ssh crane.unl.edu -l demoXXXX
 ```
 **[Cygwin Link](http://cygwin.com/install.html)**
@@ -49,7 +49,7 @@ two folders, `serial\_f90` and `parallel\_f90`, in this folder
 ``` syntaxhighlighter-pre
 $ ls
-$ scp -r ./demo_code <username>@tusker.unl.edu:/work/demo/<username>
+$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username>
 <enter password>
 ```
@@ -59,7 +59,7 @@ Serial Job
 First, you need to login to the cluster
 ``` syntaxhighlighter-pre
-$ ssh <username>@tusker.unl.edu
+$ ssh <username>@crane.unl.edu
 <enter password>
 ```
@@ -133,14 +133,14 @@ code. It uses MPI for communication between the parallel processes.
 $ mpif90 fortran_mpi.f90 -o fortran_mpi.x
 ```
-Next, we will submit the MPI application to the Tusker cluster scheduler
+Next, we will submit the MPI application to the cluster scheduler
 using the file `submit_tusker.mpi`.
 ``` syntaxhighlighter-pre
 $ qsub submit_tusker.mpi
 ```
-The Tusker cluster scheduler will pick machines (possibly several,
+The cluster scheduler will pick machines (possibly several,
 depending on availability) to run the parallel MPI application. You can
 check the status of the job the same way you did with the Serial job:
...
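The contents of `submit_tusker.mpi` are not part of this diff. As a purely hypothetical sketch of what a qsub-style submit file for this MPI demo could look like (the directives, resource values, and launcher line are assumptions, not taken from the repository):

```bash
#!/bin/sh
#PBS -N fortran_mpi           # job name (hypothetical)
#PBS -l nodes=2:ppn=4         # example request: 2 nodes, 4 processes per node
#PBS -l walltime=00:30:00     # example wall-clock limit

cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
mpirun ./fortran_mpi.x        # launch the MPI executable built with mpif90
```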
@@ -4,7 +4,7 @@ description = "Example of how to profile Ray using Allinea Performance Reports"
 +++
 Simple example of using [Ray]({{< relref "/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray" >}})
-with Allinea PerformanceReports (`perf-report`) on Tusker is shown below:
+with Allinea PerformanceReports (`perf-report`) is shown below:
 {{% panel theme="info" header="ray_perf_report.submit" %}}
 {{< highlight batch >}}
...
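The body of `ray_perf_report.submit` is collapsed in this diff. The usual pattern with Allinea Performance Reports is to wrap the normal MPI launch line with `perf-report`; in this sketch the module names and Ray arguments are assumptions, not the repository's script:

```bash
module load allinea ray/2.3          # module names are assumptions

# perf-report wraps the ordinary launch command and writes .txt/.html summaries
perf-report mpiexec Ray -k 31 -p input_1.fastq input_2.fastq -o ray_output
```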
@@ -64,5 +64,5 @@ The basic Clustal Omega output produces one alignment file in the specified outp
 ### Useful Information
-In order to test the Clustal Omega performance on Tusker, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega on Tusker are shown on the table below:
+In order to test the Clustal Omega performance, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega are shown on the table below:
 {{< readfile file="/static/html/clustal_omega.html" >}}
@@ -59,5 +59,5 @@ Oases produces two additional output files: `transcripts.fa` and `contig-orderin
 ### Useful Information
-In order to test the Oases (oases/0.2.8) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases on Tusker are shown in the table below:
+In order to test the Oases (oases/0.2.8) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases are shown in the table below:
 {{< readfile file="/static/html/oases.html" >}}
@@ -38,7 +38,7 @@ Ray supports both paired-end (`-p`) and single-end reads (`-s`). Moreover, Ray c
 Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`). Ray supports multiple file formats such as `fasta`, `fa`, `fasta.gz`, `fa.gz`, `fasta.bz2`, `fa.bz2`, `fastq`, `fq`, `fastq.gz`, `fq.gz`, `fastq.bz2`, `fq.bz2`, `sff`, `csfasta`, `csfa`.
-Simple SLURM script for running Ray on Tusker with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below:
+Simple SLURM script for running Ray with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below:
 {{% panel header="`ray.submit`"%}}
 {{< highlight bash >}}
 #!/bin/sh
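Only the first line of `ray.submit` appears in this hunk. A rough sketch of a SLURM script matching the description above (8 CPUs, 4 GB per CPU, k-mer 31); the module name, input file names, and remaining directives are assumptions:

```bash
#!/bin/sh
#SBATCH --job-name=Ray
#SBATCH --ntasks=8                 # 8 CPUs, as described above
#SBATCH --mem-per-cpu=4G           # 4 GB RAM per CPU
#SBATCH --time=24:00:00            # example wall-clock limit
#SBATCH --error=ray.%J.err
#SBATCH --output=ray.%J.out

module load ray/2.3                # module name is an assumption

# -p takes paired-end reads, -s single-end reads, -k the (odd) k-mer size
mpiexec Ray -k 31 -p paired_1.fastq paired_2.fastq -s single.fastq -o ray_output
```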
@@ -76,5 +76,5 @@ One of the most important results are:
 ### Useful Information
-In order to test the Ray performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray on Tusker are shown in the table below:
+In order to test the Ray performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray are shown in the table below:
 {{< readfile file="/static/html/ray.html" >}}
@@ -94,7 +94,7 @@ q=input_reads.fq
 After creating the configuration file **configFile**, the next step is to run the assembler using this file.
-Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUs` and `50GB of RAM` on Tusker is shown below:
+Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUs` and `50GB of RAM` is shown below:
 {{% panel header="`soapdenovo2.submit`"%}}
 {{< highlight bash >}}
 #!/bin/sh
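Only the first line of `soapdenovo2.submit` is visible here. A rough sketch consistent with the description above (8 CPUs, 50 GB RAM, k-mer 31) and with the `output31.*` file names mentioned in the next hunk; the module name and remaining directives are assumptions:

```bash
#!/bin/sh
#SBATCH --job-name=SOAPdenovo2
#SBATCH --ntasks-per-node=8        # 8 CPUs, as described above
#SBATCH --mem=50G                  # 50 GB of RAM
#SBATCH --time=24:00:00            # example wall-clock limit

module load soapdenovo2/r240       # module name is an assumption

# -s points at the configFile built above; -K sets the k-mer size, -p the thread count
SOAPdenovo-63mer all -s configFile -K 31 -p 8 -o output31
```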
@@ -128,7 +128,7 @@ output31.contig output31.edge.gz output31.links output31.p
 ### Useful Information
-In order to test the SOAPdenovo2 (soapdenovo2/r240) performance on Tusker, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below:
+In order to test the SOAPdenovo2 (soapdenovo2/r240) performance, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below:
 {{< readfile file="/static/html/soapdenovo2.html" >}}
 In general, SOAPdenovo2 is a memory intensive assembler that requires approximately 30-60 GB memory for assembling 50 million reads. However, SOAPdenovo2 is a fast assembler and it takes around an hour to assemble 50 million reads.
@@ -52,7 +52,7 @@ Each step may be run as its own job, providing a workaround for the single job w
 ### Useful Information
-In order to test the Trinity (trinity/r2014-04-13p1) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity on Tusker are shown in the table below:
+In order to test the Trinity (trinity/r2014-04-13p1) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity are shown in the table below:
 {{< readfile file="/static/html/trinity.html" >}}
...
@@ -18,5 +18,5 @@ Each step of Velvet (**velveth** and **velvetg**) may be run as its own job. The
 ### Useful Information
-In order to test the Velvet (velvet/1.2) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet on Tusker are shown in the table below:
+In order to test the Velvet (velvet/1.2) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet are shown in the table below:
 {{< readfile file="/static/html/velvet.html" >}}
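As the hunk header notes, Velvet's two steps (`velveth`, then `velvetg`) can each run as a separate job. A minimal sketch using the paired-end file names listed above; the k-mer value and options are illustrative assumptions:

```bash
# step 1: velveth builds the k-mer index (k = 31 here) from the paired-end reads
velveth velvet_output 31 -fastq -shortPaired -separate small_1.fastq small_2.fastq

# step 2: velvetg performs the assembly on the directory velveth produced
velvetg velvet_output -min_contig_lgth 200
```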
@@ -22,7 +22,7 @@ $ scythe --help
 {{< /highlight >}}
-Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` for Tusker is shown below:
+Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` is shown below:
 {{% panel header="`scythe.submit`"%}}
 {{< highlight bash >}}
 #!/bin/sh
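Only the first line of `scythe.submit` appears in this hunk. A rough sketch using the adapter and input file names mentioned above; the SLURM directives, module name, and output file name are assumptions:

```bash
#!/bin/sh
#SBATCH --job-name=Scythe
#SBATCH --ntasks=1                 # a single task is enough for this sketch
#SBATCH --mem=8G                   # example memory request
#SBATCH --time=04:00:00            # example wall-clock limit

module load scythe/0.991           # module name is an assumption

# -a gives the adapter fasta, -o the trimmed output; the input fastq comes last
scythe -a illumina_adapters.fa -o trimmed_reads.fastq input_reads.fastq
```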
@@ -52,5 +52,5 @@ Scythe returns fastq file of reads with removed adapter sequences.
 ### Useful Information
-In order to test the Scythe (scythe/0.991) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe on Tusker are shown in the table below:
+In order to test the Scythe (scythe/0.991) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe are shown in the table below:
 {{< readfile file="/static/html/scythe.html" >}}
@@ -79,5 +79,5 @@ Sickle returns fastq file of reads with trimmed low quality bases from both 3' a
 ### Useful Information
-In order to test the Sickle (sickle/1.210) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle on Tusker are shown in the table below:
+In order to test the Sickle (sickle/1.210) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle are shown in the table below:
 {{< readfile file="/static/html/sickle.html" >}}
@@ -3,7 +3,7 @@ title = "Running OLAM at HCC"
 description = "How to run the OLAM (Ocean Land Atmosphere Model) on HCC resources."
 +++
-### OLAM compilation on Tusker
+### OLAM compilation
 ##### pgi/11 compilation with mpi and openmp enabled
 1. Load modules:
...
@@ -30,16 +30,16 @@ Users]({{< relref "/connecting/for_maclinux_users" >}}).
 --------------
 For Windows 10 users, use the Command Prompt, accessed by entering `cmd` in the start menu, to access the
 HCC supercomputers. In the Command Prompt,
-type `ssh <username>@tusker.unl.edu` and the corresponding password
-to get access to the HCC cluster **Tusker**. Note that &lt;username&gt;
+type `ssh <username>@crane.unl.edu` and the corresponding password
+to get access to the HCC cluster **Crane**. Note that &lt;username&gt;
 should be replaced by your HCC account username. If you do not have a
 HCC account, please contact a HCC specialist
 ({{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu))
 or go to http://hcc.unl.edu/newusers.
-To use the **Crane** cluster, replace tusker.unl.edu with crane.unl.edu.
+To use the **Rhino** cluster, replace crane.unl.edu with rhino.unl.edu.
 {{< highlight bash >}}
-C:\> ssh <username>@tusker.unl.edu
+C:\> ssh <username>@crane.unl.edu
 C:\> <password>
 {{< /highlight >}}
@@ -55,10 +55,10 @@ PuTTY: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
 or [Direct Link](https://the.earth.li/~sgtatham/putty/latest/w32/putty.exe)
-Here we use the HCC cluster **Tusker** for demonstration. To use the
-**Crane** or cluster, replace `tusker.unl.edu` with `crane.unl.edu`.
-1. On the first screen, type `tusker.unl.edu` for Host Name, then click
+Here we use the HCC cluster **Crane** for demonstration. To use the
+**Rhino** or cluster, replace `crane.unl.edu` with `rhino.unl.edu`.
+1. On the first screen, type `crane.unl.edu` for Host Name, then click
 **Open**.
 {{< figure src="/images/3178523.png" >}}
 2. On the second screen, click on **Yes**.
@@ -118,28 +118,28 @@ For best results when transfering data to and from the clusters, refer to [Handl
 For Windows users, file transferring between your personal computer
 and the HCC supercomputers can be achieved through the command `scp`.
-Here we use **Tusker** for example. **The following commands should be
+Here we use **Crane** for example. **The following commands should be
 executed from your computer. **
 **Uploading from local to remote**
 {{< highlight bash >}}
-C:\> scp -r .\<folder name> <username>@tusker.unl.edu:/work/<group name>/<username>
+C:\> scp -r .\<folder name> <username>@crane.unl.edu:/work/<group name>/<username>
 {{< /highlight >}}
 The above command line transfers a folder from the current directory
 (`.\`) of your computer to the `$WORK` directory of the HCC
-supercomputer, Tusker. Note that you need to replace `<group name>`
+supercomputer, Crane. Note that you need to replace `<group name>`
 and `<username>` with your HCC group name and username.
 **Downloading from remote to local**
 {{< highlight bash >}}
-C:\> scp -r <username>@tusker.unl.edu:/work/<group name>/<username>/<folder name> .\
+C:\> scp -r <username>@crane.unl.edu:/work/<group name>/<username>/<folder name> .\
 {{< /highlight >}}
 The above command line transfers a folder from the `$WORK` directory of
-the HCC supercomputer, Tusker, to the current directory (`.\`) of
+the HCC supercomputer, Crane, to the current directory (`.\`) of
 your computer.
@@ -151,11 +151,11 @@ Usually it is convenient to upload and download files between your personal comp
 and the HCC supercomputers through a Graphic User Interface (GUI).
 Download and install the third party application **WinSCP**
 to connect the file systems between your personal computer and the HCC supercomputers.
-Below is a step-by-step installation guide. Here we use the HCC cluster **Tusker**
-for demonstration. To use the **Crane** cluster, replace `tusker.unl.edu`
-with `crane.unl.edu`.
-1. On the first screen, type `tusker.unl.edu` for Host name, enter your
+Below is a step-by-step installation guide. Here we use the HCC cluster **Crane**
+for demonstration. To use the **Rhino** cluster, replace `crane.unl.edu`
+with `rhino.unl.edu`.
+1. On the first screen, type `crane.unl.edu` for Host name, enter your
 HCC account username and password for User name and Password. Then
 click on **Login**.
...
@@ -37,7 +37,7 @@ environmental variable (i.e. '`cd $COMMON`')
 The common directory operates similarly to work and is mounted with
 **read and write capability to worker nodes on all HCC Clusters**. This
-means that any files stored in common can be accessed from Crane and Tusker, making this directory ideal for items that need to be
+means that any files stored in common can be accessed from Crane and Rhino, making this directory ideal for items that need to be
 accessed from multiple clusters such as reference databases and shared
 data files.
...
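A small usage sketch of the shared common directory described above (the `$COMMON` variable is set by the surrounding docs' environment; the copied paths and folder names are illustrative):

```bash
# change into the shared common directory, which is visible from all HCC clusters
cd $COMMON

# stage a reference database there so jobs on any cluster can read it
mkdir -p reference_databases
cp -r /work/<group name>/<username>/my_reference_db reference_databases/
```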
@@ -75,14 +75,14 @@ can now transfer and manipulate files on the remote endpoint.
 {{% notice info %}}
 To make it easier to use, we recommend saving the UUID number as a bash
 variable to make the commands easier to use. For example, we will
-continue to use the above endpoint (Tusker) by assigning its UUID code
-to the variable \`tusker\` as follows:
+continue to use the above endpoint (Crane) by assigning its UUID code
+to the variable \`crane\` as follows:
 {{< figure src="/images/21073499.png" >}}
 This command must be repeated upon each new login or terminal session
 unless you save these in your environmental variables. If you do not
 wish to do this step, you can proceed by placing the correct UUID in
-place of whenever you see \`$tusker\`.
+place of whenever you see \`$crane\`.
 {{% /notice %}}
 ---
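The assignment itself only appears as an image in these docs. A sketch of the idea (the UUID value is a placeholder, not a real endpoint ID):

```bash
# save the endpoint's UUID in a shell variable so later commands stay short
crane="aa1b2c3d-1234-5678-9abc-def012345678"   # placeholder UUID

# the variable can then stand in for the endpoint ID, e.g. to list its home directory
globus ls $crane:/~/
```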
@@ -98,7 +98,7 @@ command:
 To make a directory on the remote endpoint, we would use the \`globus
 mkdir\` command. For example, to make a folder in the user's work
-directory on Tusker, we would use the following command:
+directory on Crane, we would use the following command:
 {{< figure src="/images/21073501.png" >}}
 To rename files on the remote endpoint, we can use the \`globus rename\`
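The `globus mkdir` command referenced above is only shown as an image; a sketch of the form it takes (the variable and path are illustrative):

```bash
# create a folder inside the user's work directory on the remote endpoint
globus mkdir $crane:/work/<group name>/<username>/new_folder
```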
@@ -112,14 +112,14 @@ command:
 All transfers must take place between Globus endpoints. Even if you are
 transferring from an endpoint that you are already connected to, that
 endpoint must be activated in Globus. Here, we are transferring between
-Crane and Tusker. We have activated the Crane endpoint and saved its
-UUID to the variable $crane as we did for $tusker above.
+Crane and Rhino. We have activated the Rhino endpoint and saved its
+UUID to the variable $rhino as we did for $crane above.
 To transfer files, we use the command \`globus transfer\`. The format of
 this command is \`globus transfer &lt;endpoint1&gt;:&lt;file\_path&gt;
 &lt;endpoint2&gt;:&lt;file\_path&gt;\`. For example, here we are
 transferring the file \`testfile.txt\` from the home directory on Crane
-to the home directory on Tusker:
+to the home directory on Rhino:
 {{< figure src="/images/21073505.png" >}}
 You can then check the status of a transfer, or delete it altogether,
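The transfer commands themselves appear only as images in the docs. A sketch of the `globus transfer` form described above, reusing the saved endpoint variables (`$crane`, and a similarly saved `$rhino` for the second endpoint; UUIDs, paths, and labels are placeholders); `--recursive` handles the whole-directory case covered in the next hunk, and the `globus task` subcommands check on or cancel a transfer by its Task ID:

```bash
# single file: <endpoint1>:<file_path> <endpoint2>:<file_path>
globus transfer $crane:/~/testfile.txt $rhino:/~/testfile.txt --label "single file transfer"

# whole directory: same form, plus --recursive
globus transfer --recursive $crane:/~/output $rhino:/~/output --label "directory transfer"

# check the status of, or cancel, a transfer using the Task ID it printed
globus task show <task-id>
globus task cancel <task-id>
```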
@@ -129,7 +129,7 @@ using the given Task ID:
 To transfer entire directories, simply specify a directory in the file
 path as opposed to an individual file. Below, we are transferring the
 \`output\` directory from the home directory on Crane to the home
-directory on Tusker:
+directory on Rhino:
 {{< figure src="/images/21073507.png" >}}
 For additional details and information on other features of the Globus
...