Commit 1e0c8dcb authored by Caughlin Bohn

Merge branch 'master' into 'runningSAS'

Fixed Conflicts
parents 56e6b2b1 140be4d5
1 merge request: !252 Running SAS on HCC
Showing 26 additions and 26 deletions
@@ -9,7 +9,7 @@ with Allinea Performance Reports (`perf-report`) on Crane is shown below:
{{% panel theme="info" header="blastn_perf_report.submit" %}}
{{< highlight batch >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks=16
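For context, the full submit script this hunk belongs to would look roughly like the sketch below; everything beyond the directives visible in the hunk (walltime, memory, error/output files, module names, and the `blastn` arguments) is an illustrative assumption, not text from the docs.

{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=12:00:00          # assumed walltime
#SBATCH --mem-per-cpu=2gb        # assumed memory request
#SBATCH --error=BlastN.%J.err
#SBATCH --output=BlastN.%J.out

# module names are assumptions; check `module avail` on Crane
module load allinea
module load blast

# perf-report wraps the command and writes a performance summary report
perf-report blastn -query input_reads.fasta -db input_reads_db -out blastn_output.txt -num_threads $SLURM_NTASKS
{{< /highlight >}}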
@@ -9,7 +9,7 @@ below:
{{% panel theme="info" header="lammps_perf_report.submit" %}}
{{< highlight batch >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=LAMMPS
#SBATCH --ntasks=64
#SBATCH --time=12:00:00
@@ -8,7 +8,7 @@ with Allinea PerformanceReports (`perf-report`) is shown below:
{{% panel theme="info" header="ray_perf_report.submit" %}}
{{< highlight batch >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Ray
#SBATCH --ntasks-per-node=16
#SBATCH --time=10:00:00
@@ -15,7 +15,7 @@ where **input_reads.fasta** is the input file containing all sequences that need
Simple example of how **makeblastdb** can be run on Crane using SLURM script and nucleotide database is shown below:
{{% panel header="`blast_db.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Blast_DB
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
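Below the directives shown in this hunk, the script body would typically load a BLAST module and call **makeblastdb**; the module name and file names here are assumptions for illustration only.

{{< highlight bash >}}
module load blast   # assumed module name; check `module avail blast` on Crane

# build a nucleotide (-dbtype nucl) BLAST database from the input FASTA file
makeblastdb -in input_reads.fasta -dbtype nucl -out input_reads_db
{{< /highlight >}}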
@@ -52,7 +52,7 @@ Basic SLURM example of nucleotide BLAST run against the non-redundant **nt** BL
{{% panel header="`blastn_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
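After the directives, the body of this script would resemble the following sketch; the module name, database location, and output file name are assumed, not taken from the docs.

{{< highlight bash >}}
module load blast   # assumed module name
# nucleotide BLAST against the nt database using 8 threads; on Crane the
# database may be reached through an environment variable such as $BLAST
blastn -query input_reads.fasta -db nt -out blastn_output.txt -num_threads 8
{{< /highlight >}}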
@@ -92,7 +92,7 @@ Basic SLURM example of protein BLAST run against the non-redundant **nr **BLAS
{{% panel header="`blastx_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -24,7 +24,7 @@ $ blat
Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below:
{{% panel header="`blat_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Blat
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
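The script body for this job would be a single BLAT call along the lines of the sketch below (module name and output file name are assumptions):

{{< highlight bash >}}
module load blat   # assumed module name
# BLAT usage: blat <database> <query> <output.psl>
blat db.fa input_reads.fasta blat_output.psl
{{< /highlight >}}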
@@ -28,7 +28,7 @@ Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`
An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below:
{{% panel header="`bowtie_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Bowtie
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
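A matching script body for this single-end run might look like the following sketch (module name and file names are assumed):

{{< highlight bash >}}
module load bowtie   # assumed module name
# single-end fastq input (-q), 8 threads (-p), SAM output (-S)
bowtie -q -p 8 -S index_prefix input_reads.fastq bowtie_output.sam
{{< /highlight >}}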
@@ -34,7 +34,7 @@ where **index_prefix** is the generated index using the **bowtie2-build** co
An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Bowtie2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
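The body of this script would mirror the Bowtie2 call shown later in this commit for the pre-built horse index, for example (module name and file names assumed):

{{< highlight bash >}}
module load bowtie2   # assumed module name
# paired-end FASTA input (-f), local alignment, 8 threads, SAM output
bowtie2 --local -x index_prefix -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.fasta -S bowtie2_output.sam -p 8
{{< /highlight >}}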
@@ -25,7 +25,7 @@ where **index_prefix** is the index for the reference genome generated from **bw
Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below:
{{% panel header="`bwa_mem.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Bwa_Mem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
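The corresponding script body would be a single **bwa mem** call along these lines (module name and file names are assumptions):

{{< highlight bash >}}
module load bwa   # assumed module name
# paired-end alignment with 8 threads; bwa mem writes SAM to standard output
bwa mem -t 8 index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq > bwa_mem_output.sam
{{< /highlight >}}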
@@ -33,7 +33,7 @@ $ clustalo -h
Running Clustal Omega on Crane with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
{{% panel header="`clustal_omega.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Clustal_Omega
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
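A matching script body might be (module name and output file name assumed):

{{< highlight bash >}}
module load clustal-omega   # assumed module name
# multiple sequence alignment of input_reads.fasta using 8 threads
clustalo -i input_reads.fasta -o clustal_omega_output.fasta --threads=8 -v
{{< /highlight >}}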
@@ -30,7 +30,7 @@ Prior running TopHat/TopHat2, an index from the reference genome should be built
An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below:
{{% panel header="`tophat2_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Tophat2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
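The script body for this job would be a single TopHat2 call of roughly this form; the module name, output directory, and the note about companion modules are illustrative assumptions.

{{< highlight bash >}}
module load tophat   # assumed module name; TopHat2 also relies on Bowtie2 and SAMtools
# paired-end spliced alignment with 8 threads
tophat2 -p 8 -o tophat2_output/ index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq
{{< /highlight >}}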
@@ -43,7 +43,7 @@ $ ls $BLAST
An example of how to run Bowtie2 local alignment on Crane utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Bowtie2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -64,7 +64,7 @@ bowtie2 -x $BOWTIE2_HORSE -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.f
An example of BLAST run against the non-redundant nucleotide database available on Crane is provided below:
{{% panel header="`blastn_alignment.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -19,7 +19,7 @@ where the option **-format** specifies the type of the output file, **input_a
Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
{{% panel header="`bamtools_convert.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=BamTools_Convert
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
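Following these directives, the script body would be a single BamTools call such as (module name assumed):

{{< highlight bash >}}
module load bamtools   # assumed module name
# convert BAM alignments to FASTQ reads
bamtools convert -format fastq -in input_alignments.bam -out output_reads.fastq
{{< /highlight >}}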
@@ -17,7 +17,7 @@ where **input_alignments.[bam|sam]** is the input file with the alignments in BA
Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below:
{{% panel header="`samtools_view.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=SAMtools_View
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
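The script body would then run **samtools view** along these lines (module name and threading flag usage are assumptions):

{{< highlight bash >}}
module load samtools   # assumed module name
# SAM input with header (-S) converted to BAM (-b), extra threads via -@
samtools view -bS -@ 8 input_alignments.sam > output_alignments.bam
{{< /highlight >}}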
@@ -21,7 +21,7 @@ $ fastq-dump [options] input_reads.sra
An example of running **fastq-dump** on Crane to convert SRA file containing paired-end reads is:
{{% panel header="`sratoolkit.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=SRAtoolkit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
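The body of this script would be a single **fastq-dump** call; `--split-files` is the standard way to separate the two mates of paired-end reads (module name assumed):

{{< highlight bash >}}
module load SRAtoolkit   # assumed module name
# write mate 1 and mate 2 of each paired-end spot to separate fastq files
fastq-dump --split-files input_reads.sra
{{< /highlight >}}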
@@ -28,7 +28,7 @@ Oases has a lot of parameters that can be found in its [manual](https://www.ebi.
A simple SLURM script to run Oases on the Velvet output stored in `output_directory/` with minimum transcript length of `200` is shown below:
{{% panel header="`oases.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Velvet_Oases
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
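The script body would then call Oases on the existing Velvet output, for example (module name assumed; `-min_trans_lgth` is Oases' minimum transcript length option):

{{< highlight bash >}}
module load oases   # assumed module name; Oases runs on a prior Velvet assembly
# report only transcripts of at least 200 bp
oases output_directory/ -min_trans_lgth 200
{{< /highlight >}}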
@@ -41,7 +41,7 @@ Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`
Simple SLURM script for running Ray with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below:
{{% panel header="`ray.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Ray
#SBATCH --ntasks=8
#SBATCH --time=168:00:00
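Since Ray is MPI-based, the script body would launch it across the requested tasks, roughly as sketched below; the module name, launcher, and file names are assumptions.

{{< highlight bash >}}
module load ray   # assumed module name
# k-mer 31, one paired-end library (-p), one single-end library (-s), 8 MPI ranks
mpiexec -n 8 Ray -k 31 -p input_reads_pair_1.fastq input_reads_pair_2.fastq -s input_reads_single.fastq -o ray_output/
{{< /highlight >}}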
@@ -97,7 +97,7 @@ After creating the configuration file **configFile**, the next step is to run th
Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUSs` and `50GB of RAM` is shown below:
{{% panel header="`soapdenovo2.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=SOAPdenovo2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
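The script body would run the assembler against **configFile**, roughly as follows; the binary and module names are assumptions (SOAPdenovo2 ships k-mer-range-specific executables).

{{< highlight bash >}}
module load soapdenovo2   # assumed module name
# 'all' runs every assembly stage; -K k-mer size, -p CPUs, -s config file, -o output prefix
SOAPdenovo-63mer all -s configFile -K 31 -p 8 -o soapdenovo2_output
{{< /highlight >}}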
@@ -10,7 +10,7 @@ weight = "10"
The first step of running Trinity is to run Trinity with the option **--no_run_inchworm**:
{{% panel header="`trinity_step1.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Trinity_Step1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -29,7 +29,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The second step of running Trinity is to run Trinity with the option **--no_run_chrysalis**:
{{% panel header="`trinity_step2.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Trinity_Step2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -48,7 +48,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The third step of running Trinity is to run Trinity with the option **--no_distributed_trinity_exec**:
{{% panel header="`trinity_step3.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Trinity_Step3
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -67,7 +67,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The fourth step of running Trinity is to run Trinity without any additional option:
{{% panel header="`trinity_step4.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Trinity_Step4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
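Each step reuses the Trinity command line visible in the surrounding hunks, dropping the step-limiting option in this final run; a full invocation might look like the sketch below (the `--CPU` and `--output` values are assumptions):

{{< highlight bash >}}
module load trinity   # assumed module name
# final assembly step: same command as the earlier steps, with no --no_run_* option
Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right input_reads_pair_2.fastq --CPU 8 --output trinity_output/
{{< /highlight >}}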
@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -30,7 +30,7 @@ velveth output_directory/ 43 -fastq -longPaired -separate input_reads_pair_1.fas
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
-#!/bin/sh
+#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
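The body of this second-step script would be a single **velvetg** call on the directory produced by **velveth**, for example (module name and minimum contig length are assumptions):

{{< highlight bash >}}
module load velvet   # assumed module name
# build contigs from the hash tables that velveth wrote into output_directory/
velvetg output_directory/ -min_contig_lgth 200
{{< /highlight >}}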