Commit 0d715206 authored by Caughlin Bohn, committed by Carrie A Brown

Updated Tusker things to Crane or Removed Tusker

parent 632ba850
......@@ -30,13 +30,6 @@ Which Cluster to Use?
are new to using HCC resources, Crane is the recommended cluster to use
initially.  Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
If your job requires more than 36 cores per node or you need more than
512GB of memory, consider using Tusker instead.
**Tusker**: Similar to Crane, Tusker is another cluster shared by all
campus users. It has 4 CPU/ 64 cores and 256GB RAM per node. Two nodes
have 1024GB RAM for very large memory jobs. So for jobs requiring more
than 36 cores per node or large memory, Tusker would be a better option.
User Login
----------
......@@ -44,12 +37,11 @@ User Login
For Windows users, please refer to this link [For Windows Users]({{< relref "for_windows_users" >}}).
For Mac or Linux users, please refer to this link [For Mac/Linux Users]({{< relref "for_maclinux_users">}}).
**Logging into Crane or Tusker**
**Logging into Crane**
{{< highlight bash >}}
ssh <username>@crane.unl.edu
or
ssh <username>@tusker.unl.edu
{{< /highlight >}}
Duo Security
......@@ -60,10 +52,6 @@ resources. Registration and usage of Duo security can be found in this
section: [Setting up and using Duo]({{< relref "setting_up_and_using_duo">}})
**Important Notes**
- The Crane and Tusker clusters are separate. But, they are
similar enough that submission scripts on whichever one will work on
another, and vice versa.  
 
- The worker nodes cannot write to the `/home` directories. You must
use your `/work` directory for processing in your job. You may
......@@ -77,8 +65,6 @@ Resources
- ##### Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
- ##### Tusker - consists of 106 AMD Interlagos-based nodes (6784 cores) interconnected with Mellanox QDR Infiniband.
- ##### Red - This cluster is the resource for UNL's US CMS Tier-2 site.
- [CMS](http://www.uscms.org/)
......@@ -95,7 +81,6 @@ Resource Capabilities
| Cluster | Overview | Processors | RAM | Connection | Storage
| ------- | ---------| ---------- | --- | ---------- | ------
| **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*2 Nodes with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
......
......@@ -6,11 +6,13 @@ This document details the equipment resident in the Holland Computing Center (HC
HCC has two primary locations directly interconnected by a pair of 10 Gbps fiber optic links (20 Gbps total). The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. One Brocade MLXe router and two Dell Z9264F-ON core switches in each location provide both high WAN bandwidth and Software Defined Networking (SDN) capability. The Schorr machine room connects to campus and Internet2/ESnet at 100 Gbps while the PKI machine room connects at 10 Gbps. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
HCC's resources at UNL include two distinct offerings: Sandhills and Red. Sandhills is a linux cluster dedicated to general campus usage with 5,472 compute cores interconnected by low-latency InfiniBand networking. 175 TB of Lustre storage is complemented by 50 TB of NFS storage and 3 TB of local scratch per node.
HCC's resources at UNL include two distinct offerings: Sandhills and Red. Sandhills is a linux cluster dedicated to general campus usage with 5,472 compute cores interconnected by low-latency InfiniBand networking. 175 TB of Lustre storage is complemented by 50 TB of NFS storage and 3 TB of local scratch per node. Tusker offers 3,712 cores interconnected with Mellanox QDR InfiniBand along with 523TB of Lustre storage. Each compute node is a Dell R815 server with at least 256 GB RAM and 4 Opteron 6272 (2.1 GHz) processors.
The largest machine on the Lincoln campus is Red, with 9,536 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 6.6 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Tusker, Crane, Anvil, Attic, and Common storage. Tusker offers 3,712 cores interconnected with Mellanox QDR InfiniBand along with 523TB of Lustre storage. Each compute node is a Dell R815 server with at least 256 GB RAM and 4 Opteron 6272 (2.1 GHz) processors. Tusker and Sandhills are currently being retired and will be moved to the Walter Scott Engineering Center located in Lincoln consolidated into one, new Tusker cluster.
Tusker and Sandhills are currently decommissioned. These resources will be combined into a new cluster called Rhino, which will be available at a future date.
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
Crane debuted at 474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and 100Gb Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 21 GPU nodes with 57 NVIDIA GPUs in total, which enable state-of-the-art research, from drug discovery to deep learning.
......@@ -36,7 +38,7 @@ These resources are detailed further below.
* 175TB shared scratch storage (Lustre) -> /work
* 3TB local scratch
# 1.2 Red
## 1.2 Red
* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
......@@ -57,14 +59,12 @@ These resources are detailed further below.
* 2x Dell S4810 switches
* 2x Dell N3048 switches
# 1.3 Silo (backup mirror for Attic)
## 1.3 Silo (backup mirror for Attic)
* 1 Mercury RM216 2U Rackmount Server 2 Xeon E5-2630 (12-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
# 2. HCC at PKI Resources
## 2.1 Tusker
## 1.4 Tusker
* 58 PowerEdge R815 systems
* 54x with 256 GB RAM, 2x with 512 GB RAM, 2x with 1024 GB RAM
......@@ -74,7 +74,9 @@ These resources are detailed further below.
* 3x Dell Powerconnect 6248 switches
* 523TB Lustre storage over InfiniBand
# 2.2 Crane
# 2. HCC at PKI Resources
## 2.1 Crane
* 452 Relion 2840e systems from Penguin
* 452x with 64 GB RAM
......@@ -110,12 +112,12 @@ These resources are detailed further below.
* 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz)
* 2 Nvidia P100 GPUs
# 2.3 Attic
## 2.2 Attic
* 1 Mercury RM216 2U Rackmount Server 2-socket Xeon E5-2630 (6-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
# 2.4 Anvil
## 2.3 Anvil
* 76 PowerEdge R630 systems
* 76x with 256 GB RAM
......@@ -133,7 +135,7 @@ These resources are detailed further below.
* 10 GbE networking
* 6x Dell S4048-ON switches
# 2.5 Shared Common Storage
## 2.4 Shared Common Storage
* Storage service providing 1.9PB usable capacity
* 6 SuperMicro 1028U-TNRTP+ systems
......
......@@ -37,7 +37,7 @@ environmental variable (i.e. '`cd $COMMON`')
The common directory operates similarly to work and is mounted with
**read and write capability to worker nodes on all HCC Clusters**. This
means that any files stored in common can be accessed from Crane and Tusker, making this directory ideal for items that need to be
means that any files stored in common can be accessed from Crane, making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared
data files.
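As a quick illustration (a sketch; the directory and file names below are hypothetical), the common directory can be reached from any cluster through the `$COMMON` environment variable:
{{< highlight bash >}}
cd $COMMON
ls                                         # e.g. shared reference databases and data files
cp reference_databases/db.fasta $WORK/my_project/   # hypothetical paths
{{< /highlight >}}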
......
......@@ -27,7 +27,7 @@ allowed to write files there.
For Windows, learn more about logging in and uploading files
[here](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users).
Using your uploaded files on Tusker or Crane.
Using your uploaded files on Crane.
---------------------------------------------
Using your
......
......@@ -7,7 +7,7 @@ description = "Globus Connect overview"
a fast and robust file transfer service that allows users to quickly
move large amounts of data between computer clusters and even to and
from personal workstations.  This service has been made available for
Tusker, Crane, and Attic. HCC users are encouraged to use Globus
Crane and Attic. HCC users are encouraged to use Globus
Connect for their larger data transfers as an alternative to slower and
more error-prone methods such as scp and WinSCP.
......@@ -15,7 +15,7 @@ more error-prone methods such as scp and winSCP. 
### Globus Connect Advantages
- Dedicated transfer servers on Tusker, Crane, and Attic allow
- Dedicated transfer servers on Crane and Attic allow
large amounts of data to be transferred quickly between sites.
- A user can install Globus Connect Personal on his or her workstation
......@@ -38,7 +38,7 @@ the <a href="https://www.globus.org/SignUp" class="external-link">Globus Connec
 Accounts are free and grant users access to any Globus endpoint for
which they are authorized.  An endpoint is simply a file system to or
from which a user transfers files.  All HCC users are authorized to
access their own /home, /work, and /common directories on Tusker and Crane via the Globus endpoints (named: `hcc#tusker` and `hcc#crane`).  Those who have purchased Attic storage space can
access their own /home, /work, and /common directories on Crane via the Globus endpoint (named `hcc#crane`).  Those who have purchased Attic storage space can
access their /attic directories via the Globus endpoint hcc\#attic. To
initialize or activate the endpoint, users will be required to enter
their HCC username, password, and Duo credentials for authentication.
......
......@@ -4,13 +4,13 @@ description = "How to activate HCC endpoints on Globus"
weight = 20
+++
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint.  Endpoints are available for Tusker (`hcc#tusker`), Crane (`hcc#crane`), and Attic (`hcc#attic`).  Follow the instructions below to activate any of these endpoints and begin making transfers.
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint.  Endpoints are available for Crane (`hcc#crane`) and Attic (`hcc#attic`).  Follow the instructions below to activate either of these endpoints and begin making transfers.
1. [Sign in](https://www.globus.org/SignIn) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
2. Find the endpoint you want by entering '`hcc#tusker`', '`hcc#crane`', or '`hcc#attic`' in the search box and hit 'enter'.  Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
2. Find the endpoint you want by entering '`hcc#crane`' or '`hcc#attic`' in the search box and hitting 'enter'.  Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
......
......@@ -5,7 +5,7 @@ weight = 50
+++
If you would like another colleague or researcher to have access to your
data, you may create a shared endpoint on Tusker, Crane, or Attic. You can personally manage access to this endpoint and
data, you may create a shared endpoint on Crane or Attic. You can personally manage access to this endpoint and
give access to anybody with a Globus account (whether or not
they have an HCC account).  *Please use this feature responsibly by
sharing only what is necessary and granting access only to trusted
......@@ -20,7 +20,7 @@ writable shared endpoints in your `work` directory (or `/shared`).
1. Sign in to your Globus account, click on the 'Endpoints' tab
and search for the endpoint that you will use to host your shared
endpoint.  For example, if you would like to share data in your
Tusker `work` directory, search for the `hcc#tusker` endpoint.  Once
Crane `work` directory, search for the `hcc#crane` endpoint.  Once
you have found the endpoint, it will need to be activated if it has
not been already (see [endpoint activation instructions
here]({{< relref "activating_hcc_cluster_endpoints" >}})).
......
......@@ -7,7 +7,7 @@ weight = 30
To transfer files between HCC clusters, you will first need to
[activate]({{< relref "activating_hcc_cluster_endpoints" >}}) the
two endpoints you would like to use (the available endpoints
are: `hcc#tusker,` `hcc#crane, and `hcc#attic)`.  Once
are: `hcc#crane` and `hcc#attic`).  Once
that has been completed, follow the steps below to begin transferring
files.  (Note: You can also transfer files between an HCC endpoint and
any other Globus endpoint for which you have authorized access.  That
......@@ -30,7 +30,7 @@ purposes we use two HCC endpoints.)
2. Enter the names of the two endpoints you would like to use, or
select from the drop-down menus (for
example, `hcc#tusker` and `hcc#crane`).  Enter the
example, `hcc#attic` and `hcc#crane`).  Enter the
directory paths for both the source and destination (the 'from' and
'to' paths on the respective endpoints). Press 'Enter' to view files
under these directories.  Select the files or directories you would
......
......@@ -28,7 +28,7 @@ endpoints.
 From your Globus account, select the 'File Manager' tab
from the left sidebar and enter the name of your new endpoint in the 'Collection' text box. Press 'Enter' and then
navigate to the appropriate directory. Select "Transfer or Sync to..." from the right sidebar (or select the "two panels"
icon from the top right corner) and Enter the second endpoint (for example: `hcc#crane`, `hcc#tusker`, or `hcc#attic`),
icon from the top right corner) and enter the second endpoint (for example: `hcc#crane` or `hcc#attic`),
type or navigate to the desired directory, and initiate the file transfer by clicking on the blue
arrow button.
{{< figure src="/images/PersonalTransfer.png" >}}
......
......@@ -4,7 +4,7 @@ description = "How to transfer files directly from the transfer servers"
weight = 10
+++
Crane, Tusker, and Attic each have a dedicated transfer server with
Crane and Attic each have a dedicated transfer server with
10 Gb/s connectivity that allows
for faster data transfers than the login nodes.  With [Globus
Connect]({{< relref "globus_connect" >}}), users
......@@ -18,11 +18,10 @@ using these dedicated servers for data transfers:
Cluster | Transfer server
----------|----------------------
Crane | `crane-xfer.unl.edu`
Tusker | `tusker-xfer.unl.edu`
Attic | `attic-xfer.unl.edu`
{{% notice info %}}
Because the transfer servers are login-disabled, third-party transfers
between `crane-xfer`, `tusker-xfer,` and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}).
between `crane-xfer` and `attic-xfer` must be done via [Globus Connect]({{< relref "globus_connect" >}}).
{{% /notice %}}
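If command-line tools are preferred for smaller transfers, pointing them at the transfer server rather than the login node generally gives better throughput. A minimal sketch, assuming scp/rsync access to the transfer server is permitted and using placeholder paths:
{{< highlight bash >}}
# Push a local directory to your work space via the transfer server (paths are placeholders)
rsync -av my_dataset/ <username>@crane-xfer.unl.edu:/work/<group>/<username>/my_dataset/
{{< /highlight >}}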
......@@ -33,7 +33,7 @@ cost, please see the
The easiest and fastest way to access Attic is via Globus. You can
transfer files between your computer, our clusters ($HOME, $WORK, and $COMMON on
Crane and Tusker), and Attic. Here is a detailed tutorial on
Crane), and Attic. Here is a detailed tutorial on
how to set up and use [Globus Connect]({{< relref "globus_connect" >}}). For
Attic, use the Globus Endpoint **hcc\#attic**.  Your Attic files are
located at `~`, which is a shortcut
......
......@@ -26,11 +26,9 @@ Using Allinea Performance Reports on HCC
----------------------------------------
The Holland Computing Center owns **512 Allinea Performance Reports
licenses** that can be used to evaluate applications executed on Tusker
and Crane.
licenses** that can be used to evaluate applications executed on the clusters.
In order to use Allinea Performance Reports on HCC, the appropriate
module needs to be loaded first. To load the module on Tusker or Crane,
use
module needs to be loaded first. To load the module on Crane, use
{{< highlight bash >}}
module load allinea/5.0
......@@ -57,7 +55,7 @@ application `hello_world`:
{{% panel theme="info" header="perf-report example" %}}
{{< highlight bash >}}
[<username>@login.tusker ~]$ perf-report ./hello-world
[<username>@login.crane ~]$ perf-report ./hello-world
{{< /highlight >}}
{{% /panel %}}
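The same command can also be wrapped in a SLURM submission script for batch use. A minimal sketch (the job name, time, and memory values are arbitrary placeholders):
{{% panel theme="info" header="perf-report submit script (sketch)" %}}
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=PerfReport
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --mem-per-cpu=1024
#SBATCH --error=PerfReport.%J.err
#SBATCH --output=PerfReport.%J.out

module load allinea/5.0
perf-report ./hello-world
{{< /highlight >}}
{{% /panel %}}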
......@@ -71,7 +69,7 @@ to read from a file, you must use the `--input` option to the
{{% panel theme="info" header="perf-report stdin redirection" %}}
{{< highlight bash >}}
[<username>@login.tusker ~]$ perf-report --input=my_input.txt ./hello-world
[<username>@login.crane ~]$ perf-report --input=my_input.txt ./hello-world
{{< /highlight >}}
{{% /panel %}}
......@@ -81,7 +79,7 @@ More **perf-report** options can be seen by using:
{{% panel theme="info" header="perf-report options" %}}
{{< highlight bash >}}
[<username>@login.tusker ~]$ perf-report --help
[<username>@login.crane ~]$ perf-report --help
{{< /highlight >}}
{{% /panel %}}
......
......@@ -12,5 +12,5 @@ The following pages, [Create Local BLAST Database]({{<relref "create_local_blast
### Useful Information
In order to test the BLAST (blast/2.2) performance on Tusker, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below:
In order to test the BLAST (blast/2.2) performance on Crane, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown in the table below:
{{< readfile file="/static/html/blast.html" >}}
......@@ -12,7 +12,7 @@ $ makeblastdb -in input_reads.fasta -dbtype [nucl|prot] -out input_reads_db
where **input_reads.fasta** is the input file containing all sequences that need to be made into a database, and **dbtype** can be either `nucl` or `prot` depending on the type of the input file.
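A concrete invocation for a nucleotide input file might look like the following sketch (the file names are the placeholders used above):
{{< highlight bash >}}
makeblastdb -in input_reads.fasta -dbtype nucl -out input_reads_db
{{< /highlight >}}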
Simple example of how **makeblastdb** can be run on Tusker using SLURM script and nucleotide database is shown below:
A simple example of how **makeblastdb** can be run on Crane using a SLURM script and a nucleotide database is shown below:
{{% panel header="`blast_db.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......
......@@ -28,7 +28,7 @@ $ blastn -help
These BLAST alignment commands are multi-threaded, and therefore using the BLAST option **-num_threads <number_of_CPUs>** is recommended.
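For example, a nucleotide search against a local database using 8 threads might look like the following sketch (file and database names are placeholders):
{{< highlight bash >}}
blastn -query input_reads.fasta -db input_reads_db -num_threads 8 -out blast_output.txt
{{< /highlight >}}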
HCC hosts multiple BLAST databases and indices on both Tusker and Crane. In order to use these resources, the ["biodata" module] ({{<relref "/guides/running_applications/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases:
HCC hosts multiple BLAST databases and indices on Crane. In order to use these resources, the ["biodata" module]({{<relref "/guides/running_applications/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases:
- **16SMicrobial**
- **env_nt**
......
......@@ -21,7 +21,7 @@ $ blat
{{< /highlight >}}
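In its simplest form, BLAT takes a database, a query, and an output file as positional arguments. A minimal interactive sketch (file names are placeholders):
{{< highlight bash >}}
blat db.fa input_reads.fasta blat_output.psl
{{< /highlight >}}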
Running BLAT on Tusker with query file `input_reads.fasta` and database `db.fa` is shown below:
Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below:
{{% panel header="`blat_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......
......@@ -25,7 +25,7 @@ manual] (http://bowtie-bio.sourceforge.net/manual.shtml).
Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using the following flags: **-q** (fastq files), **-f** (fasta files), **-r** (raw one-sequence per line), or **-c** (sequences given on command line).
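As a point of reference, a single-end run with fastq input and 8 threads might look like the following sketch (the index prefix and file names are placeholders; `-S` requests SAM output on standard output):
{{< highlight bash >}}
bowtie -q -p 8 -S index_prefix input_reads.fastq > bowtie_alignments.sam
{{< /highlight >}}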
An example of how to run Bowtie alignment on Tusker with single-end fastq file and `8 CPUs` is shown below:
An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below:
{{% panel header="`bowtie_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......
......@@ -31,7 +31,7 @@ $ bowtie2 -x index_prefix [-q|--qseq|-f|-r|-c] [-1 input_reads_pair_1.[fasta|fas
where **index_prefix** is the generated index using the **bowtie2-build** command, and **options** are optional parameters that can be found in the [Bowtie2 manual](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml). Bowtie2 supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using one of the following flags: **-q** (fastq files), **--qseq** (Illumina's qseq format), **-f** (fasta files), **-r** (raw one sequence per line), or **-c** (sequences given on command line).
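For comparison, an interactive paired-end local alignment with fasta input and 8 threads might look like this sketch (the index prefix and file names are placeholders):
{{< highlight bash >}}
bowtie2 --local -p 8 -f -x index_prefix -1 input_reads_pair_1.fasta -2 input_reads_pair_2.fasta -S bowtie2_alignments.sam
{{< /highlight >}}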
An example of how to run Bowtie2 local alignment on Tusker with paired-end fasta files and `8 CPUs` is shown below:
An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......
......@@ -22,7 +22,7 @@ $ bwa mem index_prefix [input_reads.fastq|input_reads_pair_1.fastq input_reads_p
where **index_prefix** is the index for the reference genome generated from **bwa index**, and **input_reads.fastq**, **input_reads_pair_1.fastq**, **input_reads_pair_2.fastq** are the input files of sequencing data that can be single-end or paired-end respectively. Additional **options** for **bwa mem** can be found in the BWA manual.
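Since **bwa mem** writes SAM to standard output, a paired-end run with 8 threads might look like the following sketch (the index prefix and file names are placeholders; `-t` sets the thread count):
{{< highlight bash >}}
bwa mem -t 8 index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq > bwa_alignments.sam
{{< /highlight >}}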
Simple SLURM script for running **bwa mem** on Tusker with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below:
Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below:
{{% panel header="`bwa_mem.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......
......@@ -30,7 +30,7 @@ $ clustalo -h
{{< /highlight >}}
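A minimal interactive invocation with 8 threads might look like the following sketch (file names are placeholders):
{{< highlight bash >}}
clustalo -i input_reads.fasta -o aligned_reads.fasta --threads=8 -v
{{< /highlight >}}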
Running Clustal Omega on Tusker with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
Running Clustal Omega on Crane with input file `input_reads.fasta`, `8 threads`, and `10GB memory` is shown below:
{{% panel header="`clustal_omega.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
......