Commit 927e8a63 authored by Caughlin Bohn

Updating things to be Swan and not Crane

parent 4da93468
1 merge request: !362 Updating things to be Swan and not Crane
Showing changes with 53 additions and 63 deletions
@@ -6,21 +6,21 @@ weight=30
This page describes a complete example of submitting an HTCondor job.
-1. SSH to Crane
+1. SSH to Swan
{{% panel theme="info" header="ssh command" %}}
-[apple@localhost]ssh apple@crane.unl.edu
+[apple@localhost]ssh apple@swan.unl.edu
{{% /panel %}}
{{% panel theme="info" header="output" %}}
-[apple@login.crane~]$
+[apple@login.swan~]$
{{% /panel %}}
2. Write a simple python program in a file "hello.py" that we wish to
run using HTCondor
{{% panel theme="info" header="edit a python code named 'hello.py'" %}}
-[apple@login.crane ~]$ vim hello.py
+[apple@login.swan ~]$ vim hello.py
{{% /panel %}}
Then in the edit window, please input the code below:
@@ -64,13 +64,13 @@ This page describes a complete example of submitting an HTCondor job.
above )
{{% panel theme="info" header="create output directory" %}}
-[apple@login.crane ~]$ mkdir OUTPUT
+[apple@login.swan ~]$ mkdir OUTPUT
{{% /panel %}}
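The submit file used in the next step is not shown in full here; a minimal sketch of what `hello.submit` might contain (the vanilla universe and the `OUTPUT` file names are illustrative assumptions, not the exact tutorial values):
{{% panel theme="info" header="hello.submit (hypothetical sketch)" %}}
{{< highlight bash >}}
# minimal HTCondor submit description; values are illustrative
universe   = vanilla
executable = hello.py
output     = OUTPUT/hello.out
error      = OUTPUT/hello.err
log        = OUTPUT/hello.log
queue
{{< /highlight >}}
{{% /panel %}}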
5. Submit your job
{{% panel theme="info" header="condor_submit" %}}
-[apple@login.crane ~]$ condor_submit hello.submit
+[apple@login.swan ~]$ condor_submit hello.submit
{{% /panel %}}
{{% panel theme="info" header="Output of submit" %}}
@@ -83,11 +83,11 @@ This page describes a complete example of submitting an HTCondor job.
6. Check the job status with `condor_q`
{{% panel theme="info" header="condor_q" %}}
-[apple@login.crane ~]$ condor_q
+[apple@login.swan ~]$ condor_q
{{% /panel %}}
{{% panel theme="info" header="Output of `condor_q`" %}}
--- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?...
+-- Schedd: login.swan.hcc.unl.edu : <129.93.227.113:9619?...
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
720587.0 logan 12/15 10:48 33+14:41:17 H 0 0.0 continuous.cron 20
720588.0 logan 12/15 10:48 200+02:40:08 H 0 0.0 checkprogress.cron
......
@@ -4,7 +4,7 @@ description = "How to submit an OSG job with HTCondor"
weight=20
+++
-{{% notice info%}}Jobs can be submitted to the OSG from Crane, so
+{{% notice info%}}Jobs can be submitted to the OSG from Swan, so
there is no need to logon to a different submit host or get a grid
certificate!
{{% /notice %}}
@@ -16,7 +16,7 @@ project provides software to schedule individual applications,
workflows, and for sites to manage resources. It is designed to enable
High Throughput Computing (HTC) on large collections of distributed
resources for users and serves as the job scheduler used on the OSG.
-Jobs are submitted from the Crane login node to the
+Jobs are submitted from the Swan login node to the
OSG using an HTCondor submission script. For those who are used to
submitting jobs with SLURM, there are a few key differences to be aware
of:
@@ -133,7 +133,7 @@ the submitted job:
1. How to submit a job to OSG - assuming that you saved your HTCondor
script as a file named applejob.txt
-{{< highlight bash >}}[apple@login.crane ~] $ condor_submit applejob{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_submit applejob{{< /highlight >}}
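The `applejob` file itself is not reproduced here; a minimal hypothetical OSG submit description could look like the sketch below (the executable and arguments borrow the `sjrun.py INPUT/INP` names from the `condor_q` output further down, everything else is a placeholder):
{{% panel theme="info" header="applejob (hypothetical sketch)" %}}
{{< highlight bash >}}
# minimal OSG-style HTCondor submit description; names are placeholders
universe   = vanilla
executable = sjrun.py
arguments  = INPUT/INP
output     = applejob.out
error      = applejob.err
log        = applejob.log
queue
{{< /highlight >}}
{{% /panel %}}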
You will see the following output after submitting the job
{{% panel theme="info" header="Example of condor_submit" %}}
@@ -149,14 +149,14 @@ the submitted job:
ones that are owned by the named user*
-{{< highlight bash >}}[apple@login.crane ~] $ condor_q apple{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_q apple{{< /highlight >}}
The code section below shows typical output. Note that the
ST column represents the status of the job (H: held, I: idle
or waiting).
{{% panel theme="info" header="Example of condor_q" %}}
--- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?...
+-- Schedd: login.swan.hcc.unl.edu : <129.93.227.113:9619?...
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
1013034.4 apple 3/26 16:34 0+00:21:00 H 0 0.0 sjrun.py INPUT/INP
1013038.0 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP
@@ -173,19 +173,19 @@ the submitted job:
from the held status so that it can be rescheduled by HTCondor.
*Release one job:*
-{{< highlight bash >}}[apple@login.crane ~] $ condor_release 1013034.4{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_release 1013034.4{{< /highlight >}}
*Release all jobs of a user apple:*
-{{< highlight bash >}}[apple@login.crane ~] $ condor_release apple{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_release apple{{< /highlight >}}
4. How to delete a submitted job - if you want to delete a submitted
job, you may use the shell commands listed below
*Delete one job:*
-{{< highlight bash >}}[apple@login.crane ~] $ condor_rm 1013034.4{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_rm 1013034.4{{< /highlight >}}
*Delete all jobs of a user apple:*
-{{< highlight bash >}}[apple@login.crane ~] $ condor_rm apple{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan ~] $ condor_rm apple{{< /highlight >}}
5. How to get help from an HTCondor command
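Most HTCondor commands print usage information with a help flag; for example (assuming the standard `-help` option):
{{< highlight bash >}}[apple@login.swan ~] $ condor_submit -help{{< /highlight >}}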
......
@@ -11,14 +11,14 @@ set of modules provided on OSG can differ from those on the HCC
clusters. To switch to the OSG modules environment on an HCC machine:
{{< highlight bash >}}
-[apple@login.crane~]$ source osg_oasis_init
+[apple@login.swan~]$ source osg_oasis_init
{{< /highlight >}}
Use the `module avail` command to see what software and libraries are
available:
{{< highlight bash >}}
-[apple@login.crane~]$ module avail
+[apple@login.swan~]$ module avail
------------------- /cvmfs/oasis.opensciencegrid.org/osg/modules/modulefiles/Core --------------------
abyss/2.0.2 gnome_libs/1.0 pegasus/4.7.1
@@ -36,7 +36,7 @@ available:
Loading modules is done with the `module load` command:
{{< highlight bash >}}
-[apple@login.crane~]$ module load python/2.7
+[apple@login.swan~]$ module load python/2.7
{{< /highlight >}}
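For jobs, the `module load` typically happens inside the job's executable wrapper script rather than on the submit host; a hypothetical wrapper sketch (the OASIS init path and module version are assumptions):
{{< highlight bash >}}
#!/bin/bash
# hypothetical job wrapper: set up the OSG modules environment on the
# worker node, then run the payload (init path and versions are assumptions)
source /cvmfs/oasis.opensciencegrid.org/osg/modules/lmod/current/init/bash
module load python/2.7
python my_script.py "$@"
{{< /highlight >}}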
There are two things required in order to use modules in your HTCondor
@@ -99,7 +99,7 @@ loading the `R` and `libgfortran` modules.
Make the script executable:
-{{< highlight bash >}}[apple@login.crane~]$ chmod a+x R-script.sh{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan~]$ chmod a+x R-script.sh{{< /highlight >}}
Finally, create the HTCondor submit script, `R.submit`:
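The full `R.submit` file is not shown here; a plausible sketch for the 100 Monte Carlo jobs discussed below (the `queue 100` count and `mcpi.out.$(Process)` names follow the later `grep` example, everything else is illustrative):
{{% panel theme="info" header="R.submit (hypothetical sketch)" %}}
{{< highlight bash >}}
# run 100 independent estimates of Pi; one output file per process
universe   = vanilla
executable = R-script.sh
arguments  = $(Process)
output     = mcpi.out.$(Process)
error      = mcpi.err.$(Process)
log        = mcpi.log
queue 100
{{< /highlight >}}
{{% /panel %}}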
@@ -124,16 +124,16 @@ transferred with the job.
Submit the jobs with the `condor_submit` command:
-{{< highlight bash >}}[apple@login.crane~]$ condor_submit R.submit{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan~]$ condor_submit R.submit{{< /highlight >}}
Check on the status of your jobs with `condor_q`:
-{{< highlight bash >}}[apple@login.crane~]$ condor_q{{< /highlight >}}
+{{< highlight bash >}}[apple@login.swan~]$ condor_q{{< /highlight >}}
When your jobs have completed, find the average estimate for Pi from all
100 jobs:
{{< highlight bash >}}
-[apple@login.crane~]$ grep "[1]" mcpi.out.* | awk '{sum += $2} END { print "Average =", sum/NR}'
+[apple@login.swan~]$ grep "[1]" mcpi.out.* | awk '{sum += $2} END { print "Average =", sum/NR}'
Average = 3.13821
{{< /highlight >}}
@@ -35,16 +35,9 @@ Which Cluster to Use?
are new to using HCC resources, Swan is the recommended cluster to use
initially. Swan has 2 Intel Icelake CPUs (56 cores) per node, with 256GB RAM per
node.
-**Crane**: Crane is the largest HCC resource. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
**Important Notes**
-- The Crane and Swan clusters are separate. But, they are
-similar enough that submission scripts on whichever one will work on
-another, and vice versa (excluding GPU resources and some combinations of
-RAM/core requests).
- The worker nodes cannot write to the `/home` directories. You must
use your `/work` directory for processing in your job. You may
access your work directory by using the command:
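A likely form of that command, assuming HCC's standard `$WORK` environment variable:
{{< highlight bash >}}
$ cd $WORK
{{< /highlight >}}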
@@ -57,8 +50,6 @@ Resources
- ##### Swan - HCC's newest Intel-based cluster, with 56 cores and 256GB RAM per node.
-- ##### Crane - Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
- ##### Anvil - HCC's cloud computing cluster based on Openstack
@@ -70,7 +61,6 @@ Resource Capabilities
| Cluster | Overview | Processors | RAM\* | Connection | Storage
| ------- | ---------| ---------- | --- | ---------- | ------
-| **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 62.5GB<br><br>79 nodes @ 250GB<br><br>37 nodes @ 500GB<br><br>4 nodes @ 1500GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
| **Swan** | 168 node LINUX cluster | 168 Intel Xeon Gold 6348 CPU, 2 CPU/56 cores per node | 168 nodes @ 256GB <br><br> 2 nodes @ 2000GB | HDR100 Infiniband | 3.5TB local scratch per node <br><br> ~5200TB shared Lustre storage |
| **Red** | 344 node LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
......
@@ -20,7 +20,7 @@ the following instructions to work.**
- [Tutorial Video](#tutorial-video)
Every HCC user has a password that is the same on all HCC machines
-(Crane, Swan, Anvil). This password needs to satisfy the HCC
+(Swan, Anvil). This password needs to satisfy the HCC
password requirements.
### HCC password requirements
......
@@ -69,14 +69,14 @@ U2F use.
Example login using Duo Push
----------------------------
-This demonstrates an example login to Crane using the Duo Push method.
+This demonstrates an example login to Swan using the Duo Push method.
Using another method (SMS, phone call, etc.) proceeds in the same way.
(Click on any image for a larger version.)
First, a user connects via SSH using their normal HCC username/password,
exactly as before.
{{< figure src="/images/5832713.png" width="600" >}}
{{< figure src="/images/duo_login_pass.png" width="600" >}}
{{% notice warning%}}**Account lockout**
@@ -94,11 +94,11 @@ this example, the choices are Duo Push notification, SMS message, or
phone call. Choosing option 1 for Duo Push, a request to verify the
login will be sent to the user's smartphone.
{{< figure src="/images/5832716.png" height="350" >}}
{{< figure src="/images/duo_app_request.png" height="350" >}}
Simply tap `Approve` to verify the login.
{{< figure src="/images/5832717.png" height="350" >}}
{{< figure src="/images/duo_app_approved.png" height="350" >}}
{{% notice warning%}}**If you receive a verification request you didn't initiate, deny the
request and contact HCC immediately via email at
@@ -108,7 +108,7 @@ request and contact HCC immediately via email at
In the terminal, the login will now complete and the user will be logged in
as usual.
-{{< figure src="/images/5832714.png" height="350" >}}
+{{< figure src="/images/duo_login_successful.png" height="350" >}}
Duo Authentication Methods
......
@@ -55,7 +55,7 @@ application `hello_world`:
{{% panel theme="info" header="perf-report example" %}}
{{< highlight bash >}}
-[<username>@login.crane ~]$ perf-report ./hello-world
+[<username>@login.swan ~]$ perf-report ./hello-world
{{< /highlight >}}
{{% /panel %}}
@@ -69,7 +69,7 @@ to read from a file, you must use the `--input` option to the
{{% panel theme="info" header="perf-report stdin redirection" %}}
{{< highlight bash >}}
-[<username>@login.crane ~]$ perf-report --input=my_input.txt ./hello-world
+[<username>@login.swan ~]$ perf-report --input=my_input.txt ./hello-world
{{< /highlight >}}
{{% /panel %}}
@@ -79,7 +79,7 @@ More **perf-report** options can be seen by using:
{{% panel theme="info" header="perf-report options" %}}
{{< highlight bash >}}
-[<username>@login.crane ~]$ perf-report --help
+[<username>@login.swan ~]$ perf-report --help
{{< /highlight >}}
{{% /panel %}}
......
@@ -30,10 +30,10 @@ Click the *Add* button on the new window.
{{< figure src="/images/16516459.png" width="400" >}}
-To setup a connection to Crane, fill in the fields as follows:
+To set up a connection to Swan, fill in the fields as follows:
```
-Connection Name: Crane
-Host Name: <username>@crane.unl.edu
+Connection Name: Swan
+Host Name: <username>@swan.unl.edu
Remote Installation Directory: /util/opt/allinea/22.0
```
@@ -48,7 +48,7 @@ Connections* to return back to the main Allinea window.
### Test the Reverse Connect feature
-To test the connection, choose *Crane* from the *Remote Launch* menu.
+To test the connection, choose *Swan* from the *Remote Launch* menu.
{{< figure src="/images/16516457.png" width="300" >}}
@@ -59,7 +59,7 @@ A *Connect to Remote Host* dialog will appear and prompt for a password.
The login procedure is the same as for PuTTY or any other SSH program.
Enter your HCC password followed by the Duo login.
If the login was successful, you should see
-*Connected to: \<username\>@crane.unl.edu* in the lower right corner of
+*Connected to: \<username\>@swan.unl.edu* in the lower right corner of
the Allinea window.
The next step is to run a sample interactive job and test the Reverse
......
@@ -12,5 +12,5 @@ The following pages, [Create Local BLAST Database]({{<relref "create_local_blast
### Useful Information
-In order to test the BLAST (blast/2.2) performance on Crane, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown on the table below:
+In order to test the BLAST (blast/2.2) performance on Swan, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown in the table below:
{{< readfile file="/static/html/blast.html" >}}
@@ -12,7 +12,7 @@ $ makeblastdb -in input_reads.fasta -dbtype [nucl|prot] -out input_reads_db
where **input_reads.fasta** is the input file containing all sequences that need to be made into a database, and **dbtype** can be either `nucl` or `prot` depending on the type of the input file.
-Simple example of how **makeblastdb** can be run on Crane using SLURM script and nucleotide database is shown below:
+A simple example of how **makeblastdb** can be run on Swan using a SLURM script and a nucleotide database is shown below:
{{% panel header="`blast_db.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
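The body of `blast_db.submit` is not shown in full; a complete sketch under assumed resource values (job name, time, memory, and module version are illustrative):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=blast_db          # illustrative resource values throughout
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=blast_db.%J.err
#SBATCH --output=blast_db.%J.out

module load blast/2.2                # version is an assumption
makeblastdb -in input_reads.fasta -dbtype nucl -out input_reads_db
{{< /highlight >}}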
@@ -28,7 +28,7 @@ $ blastn -help
These BLAST alignment commands are multi-threaded, and therefore using the BLAST option **-num_threads <number_of_CPUs>** is recommended.
-HCC hosts multiple BLAST databases and indices on Crane. In order to use these resources, the ["biodata" module]({{<relref "/applications/app_specific/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases:
+HCC hosts multiple BLAST databases and indices on Swan. In order to use these resources, the ["biodata" module]({{<relref "/applications/app_specific/bioinformatics_tools/biodata_module">}}) needs to be loaded first. The **$BLAST** variable contains the following currently available databases:
- **16SMicrobial**
- **nr**
......
@@ -21,7 +21,7 @@ $ blat
{{< /highlight >}}
-Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below:
+Running BLAT on Swan with query file `input_reads.fasta` and database `db.fa` is shown below:
{{% panel header="`blat_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
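A plausible completion of `blat_alignment.submit` (resource values and module name are assumptions; the command follows BLAT's `database query output.psl` argument order):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=blat_alignment    # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=blat_alignment.%J.err
#SBATCH --output=blat_alignment.%J.out

module load blat                     # module name/version is an assumption
blat db.fa input_reads.fasta output_alignments.psl
{{< /highlight >}}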
@@ -25,7 +25,7 @@ manual](http://bowtie-bio.sourceforge.net/manual.shtml).
Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using the following flags: **-q** (fastq files), **-f** (fasta files), **-r** (raw one-sequence per line), or **-c** (sequences given on command line).
-An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below:
+An example of how to run Bowtie alignment on Swan with single-end fastq file and `8 CPUs` is shown below:
{{% panel header="`bowtie_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
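A plausible completion of `bowtie_alignment.submit` for the single-end fastq case with 8 CPUs (resource values, module name, and the output naming are assumptions):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=bowtie_alignment  # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=bowtie_alignment.%J.err
#SBATCH --output=bowtie_alignment.%J.out

module load bowtie                   # module name/version is an assumption
# -q: fastq input, -p 8: eight threads, -S: SAM output
bowtie -q -p 8 -S index_prefix input_reads.fastq bowtie_alignments.sam
{{< /highlight >}}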
@@ -31,7 +31,7 @@ $ bowtie2 -x index_prefix [-q|--qseq|-f|-r|-c] [-1 input_reads_pair_1.[fasta|fas
where **index_prefix** is the generated index using the **bowtie2-build** command, and **options** are optional parameters that can be found in the [Bowtie2 manual](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml). Bowtie2 supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using one of the following flags: **-q** (fastq files), **--qseq** (Illumina's qseq format), **-f** (fasta files), **-r** (raw one sequence per line), or **-c** (sequences given on command line).
-An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below:
+An example of how to run Bowtie2 local alignment on Swan with paired-end fasta files and `8 CPUs` is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
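A plausible completion of `bowtie2_alignment.submit` for paired-end fasta input, local alignment, and 8 CPUs (resource values and module name are assumptions; the command form follows the usage shown above):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=bowtie2_alignment # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=bowtie2_alignment.%J.err
#SBATCH --output=bowtie2_alignment.%J.out

module load bowtie2                  # module name/version is an assumption
bowtie2 -x index_prefix -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.fasta -S bowtie2_alignments.sam --local -p 8
{{< /highlight >}}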
@@ -22,7 +22,7 @@ $ bwa mem index_prefix [input_reads.fastq|input_reads_pair_1.fastq input_reads_p
where **index_prefix** is the index for the reference genome generated from **bwa index**, and **input_reads.fastq**, **input_reads_pair_1.fastq**, **input_reads_pair_2.fastq** are the input files of sequencing data that can be single-end or paired-end respectively. Additional **options** for **bwa mem** can be found in the BWA manual.
-Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below:
+A simple SLURM script for running **bwa mem** on Swan with paired-end fastq input data, `index_prefix` as the reference genome index, a SAM output file and `8 CPUs` is shown below:
{{% panel header="`bwa_mem.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
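# plausible continuation of this submit script; resource values are
# illustrative assumptions, and the command follows the bwa mem usage above
#SBATCH --job-name=bwa_mem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=bwa_mem.%J.err
#SBATCH --output=bwa_mem.%J.out

module load bwa/0.7                  # version taken from the note below
bwa mem -t 8 index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq > bwa_mem_alignments.sam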
@@ -137,5 +137,5 @@ $ bwa bwt2sa input_reads.bwt output_reads.sa
### Useful Information
-In order to test the scalability of BWA (bwa/0.7) on Crane, we used two paired-end input fastq files, `large_1.fastq` and `large_2.fastq`, and one single-end input fasta file, `large.fasta`. Some statistics about the input files and the time and memory resources used by **bwa mem** are shown on the table below:
+In order to test the scalability of BWA (bwa/0.7) on Swan, we used two paired-end input fastq files, `large_1.fastq` and `large_2.fastq`, and one single-end input fasta file, `large.fasta`. Some statistics about the input files and the time and memory resources used by **bwa mem** are shown in the table below:
{{< readfile file="/static/html/bwa.html" >}}
@@ -30,7 +30,7 @@ $ clustalo -h
{{< /highlight >}}
-Running Clustal Omega on Crane with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
+Running Clustal Omega on Swan with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
{{% panel header="`clustal_omega.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
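A plausible completion of `clustal_omega.submit` for 8 threads and 10GB of memory (module name and the remaining resource values are assumptions):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=clustal_omega     # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=clustal_omega.%J.err
#SBATCH --output=clustal_omega.%J.out

module load clustal-omega            # module name/version is an assumption
clustalo -i input_reads.fasta -o output_alignment.fasta --threads=8 -v
{{< /highlight >}}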
@@ -27,7 +27,7 @@ $ tophat2 -h
Prior to running TopHat/TopHat2, an index of the reference genome should be built using Bowtie/Bowtie2. Moreover, TopHat2 requires both the index file and the reference file to be in the same directory. If the reference file is not available, TopHat2 reconstructs it in its initial step using the index file.
-An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below:
+An example of how to run TopHat2 on Swan with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below:
{{% panel header="`tophat2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
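A plausible completion of `tophat2_alignment.submit` for paired-end fastq input and 8 CPUs (resource values and module name are assumptions; the matching Bowtie index files must already be in place):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=tophat2_alignment # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=tophat2_alignment.%J.err
#SBATCH --output=tophat2_alignment.%J.out

module load tophat                   # module name/version is an assumption
tophat2 -p 8 -o tophat2_output index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq
{{< /highlight >}}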
@@ -7,7 +7,7 @@ weight = "52"
+++
-HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Crane and Swan.
+HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short read aligned indices etc. on Swan.
In order to use these resources, the "**biodata**" module needs to be loaded first.
For how to load module, please check [Module Commands]({{< relref "/applications/modules/_index.md" >}}).
@@ -40,7 +40,7 @@ $ ls $BLAST
{{< /highlight >}}
-An example of how to run Bowtie2 local alignment on Crane utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below:
+An example of how to run Bowtie2 local alignment on Swan utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
@@ -61,7 +61,7 @@ bowtie2 -x $BOWTIE2_HORSE -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.f
{{% /panel %}}
-An example of BLAST run against the non-redundant nucleotide database available on Crane is provided below:
+An example of a BLAST run against the non-redundant nucleotide database available on Swan is provided below:
{{% panel header="`blastn_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
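A plausible completion of `blastn_alignment.submit` (resource values are assumptions; the `$BLAST/nt` path assumes the database layout provided by the biodata module):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=blastn_alignment  # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=blastn_alignment.%J.err
#SBATCH --output=blastn_alignment.%J.out

module load biodata
module load blast/2.2                # version is an assumption
blastn -query input_reads.fasta -db $BLAST/nt -out blastn_output.alignments -num_threads 8
{{< /highlight >}}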
@@ -16,7 +16,7 @@ $ bamtools convert -format [bed|fasta|fastq|json|pileup|sam|yaml] -in input_alig
where the option **-format** specifies the type of the output file, **input_alignments.bam** is the input BAM file, and **-out** defines the name and the type of the converted file.
-Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
+Running BamTools **convert** on Swan with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
{{% panel header="`bamtools_convert.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
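A plausible completion of `bamtools_convert.submit` (resource values and module name are assumptions; the command follows the usage shown above):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=bamtools_convert  # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=bamtools_convert.%J.err
#SBATCH --output=bamtools_convert.%J.out

module load bamtools                 # module name/version is an assumption
bamtools convert -format fastq -in input_alignments.bam -out output_reads.fastq
{{< /highlight >}}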
@@ -14,7 +14,7 @@ $ samtools view input_alignments.[bam|sam] [options] -o output_alignments.[sam|b
where **input_alignments.[bam|sam]** is the input file with the alignments in BAM/SAM format, and **output_alignments.[sam|bam]** file is the converted file into SAM or BAM format respectively.
-Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below:
+Running **samtools view** on Swan with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below:
{{% panel header="`samtools_view.submit`"%}}
{{< highlight bash >}}
#!/bin/bash
......
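A plausible completion of `samtools_view.submit` for 8 CPUs (resource values and module name are assumptions; the flags follow the prose above):
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=samtools_view     # illustrative resource values
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --error=samtools_view.%J.err
#SBATCH --output=samtools_view.%J.out

module load samtools                 # module name/version is an assumption
# -S: SAM input with header, -b: BAM output, -@ 8: eight threads
samtools view -@ 8 -bS input_alignments.sam -o output_alignments.bam
{{< /highlight >}}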