Commit fde2f4da authored and committed by Caughlin Bohn

/bin/sh to /bin/bash

parent 7f01dc05
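The change is not cosmetic: on many systems `/bin/sh` is a strictly POSIX shell (e.g. dash) or bash running in POSIX-compatibility mode, so bash-only constructs in a submit script can fail silently or with confusing errors. A minimal sketch of the difference (the filenames are illustrative, not from the repository):

```shell
#!/bin/bash
# Arrays and the [[ ]] test are bash features; under a POSIX /bin/sh
# the array assignment below is a syntax error. Declaring #!/bin/bash
# makes the submit scripts' behavior predictable across systems.
samples=(input_reads_pair_1.fastq input_reads_pair_2.fastq)  # bash array
if [[ ${#samples[@]} -eq 2 ]]; then                          # bash test syntax
    echo "paired-end input detected"
fi
```

Running the same file with `sh script.sh` on a dash-based system aborts at the array assignment, which is why pinning the interpreter in the shebang is the safer choice for these SLURM scripts.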
......@@ -9,7 +9,7 @@ with Allinea Performance Reports (`perf-report`) on Crane is shown below:
{{% panel theme="info" header="blastn_perf_report.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks=16
......
......@@ -9,7 +9,7 @@ below:
{{% panel theme="info" header="lammps_perf_report.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=LAMMPS
#SBATCH --ntasks=64
#SBATCH --time=12:00:00
......
......@@ -8,7 +8,7 @@ with Allinea PerformanceReports (`perf-report`) is shown below:
{{% panel theme="info" header="ray_perf_report.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Ray
#SBATCH --ntasks-per-node=16
#SBATCH --time=10:00:00
......
......@@ -15,7 +15,7 @@ where **input_reads.fasta** is the input file containing all sequences that need
Simple example of how **makeblastdb** can be run on Crane using SLURM script and nucleotide database is shown below:
{{% panel header="`blast_db.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Blast_DB
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -52,7 +52,7 @@ Basic SLURM example of nucleotide BLAST run against the non-redundant **nt** BL
{{% panel header="`blastn_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -92,7 +92,7 @@ Basic SLURM example of protein BLAST run against the non-redundant **nr **BLAS
{{% panel header="`blastx_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -24,7 +24,7 @@ $ blat
Running BLAT on Crane with query file `input_reads.fasta` and database `db.fa` is shown below:
{{% panel header="`blat_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Blat
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -28,7 +28,7 @@ Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`
An example of how to run Bowtie alignment on Crane with single-end fastq file and `8 CPUs` is shown below:
{{% panel header="`bowtie_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Bowtie
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -34,7 +34,7 @@ where **index_prefix** is the generated index using the **bowtie2-build** co
An example of how to run Bowtie2 local alignment on Crane with paired-end fasta files and `8 CPUs` is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Bowtie2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -25,7 +25,7 @@ where **index_prefix** is the index for the reference genome generated from **bw
Simple SLURM script for running **bwa mem** on Crane with paired-end fastq input data, `index_prefix` as reference genome index, SAM output file and `8 CPUs` is shown below:
{{% panel header="`bwa_mem.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Bwa_Mem
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -33,7 +33,7 @@ $ clustalo -h
Running Clustal Omega on Crane with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
{{% panel header="`clustal_omega.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Clustal_Omega
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -30,7 +30,7 @@ Prior running TopHat/TopHat2, an index from the reference genome should be built
An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below:
{{% panel header="`tophat2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Tophat2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -43,7 +43,7 @@ $ ls $BLAST
An example of how to run Bowtie2 local alignment on Crane utilizing the default Horse, *Equus caballus* index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below:
{{% panel header="`bowtie2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Bowtie2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -64,7 +64,7 @@ bowtie2 -x $BOWTIE2_HORSE -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.f
An example of BLAST run against the non-redundant nucleotide database available on Crane is provided below:
{{% panel header="`blastn_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastN
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -19,7 +19,7 @@ where the option **-format** specifies the type of the output file, **input_a
Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
{{% panel header="`bamtools_convert.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BamTools_Convert
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -17,7 +17,7 @@ where **input_alignments.[bam|sam]** is the input file with the alignments in BA
Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below:
{{% panel header="`samtools_view.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=SAMtools_View
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -21,7 +21,7 @@ $ fastq-dump [options] input_reads.sra
An example of running **fastq-dump** on Crane to convert SRA file containing paired-end reads is:
{{% panel header="`sratoolkit.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=SRAtoolkit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -28,7 +28,7 @@ Oases has a lot of parameters that can be found in its [manual](https://www.ebi.
A simple SLURM script to run Oases on the Velvet output stored in `output_directory/` with minimum transcript length of `200` is shown below:
{{% panel header="`oases.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Oases
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -41,7 +41,7 @@ Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`
Simple SLURM script for running Ray with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below:
{{% panel header="`ray.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Ray
#SBATCH --ntasks=8
#SBATCH --time=168:00:00
......
......@@ -97,7 +97,7 @@ After creating the configuration file **configFile**, the next step is to run th
Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUSs` and `50GB of RAM` is shown below:
{{% panel header="`soapdenovo2.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=SOAPdenovo2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -10,7 +10,7 @@ weight = "10"
The first step of running Trinity is to run Trinity with the option **--no_run_inchworm**:
{{% panel header="`trinity_step1.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Trinity_Step1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -29,7 +29,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The second step of running Trinity is to run Trinity with the option **--no_run_chrysalis**:
{{% panel header="`trinity_step2.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Trinity_Step2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -48,7 +48,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The third step of running Trinity is to run Trinity with the option **--no_distributed_trinity_exec**:
{{% panel header="`trinity_step3.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Trinity_Step3
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -67,7 +67,7 @@ Trinity --seqType fq --max_memory 100G --left input_reads_pair_1.fastq --right i
The fourth step of running Trinity is to run Trinity without any additional option:
{{% panel header="`trinity_step4.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Trinity_Step4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -30,7 +30,7 @@ velveth output_directory/ 43 -fastq -longPaired -separate input_reads_pair_1.fas
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -30,7 +30,7 @@ velveth output_directory/ 51 -fasta -short input_reads.fasta -fasta -shortPaired
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......@@ -30,7 +30,7 @@ velveth output_directory/ 31 -fasta -short input_reads.fasta
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -25,7 +25,7 @@ $ cutadapt --help
Simple Cutadapt script that trims the adapter sequences **AGGCACACAGGG** and **TGAGACACGCA** from the 3' end and **AACCGGTT** from the 5' end of single-end fasta input file is shown below:
{{% panel header="`cutadapt.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Cutadapt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -27,7 +27,7 @@ The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qu
Simple PRINSEQ SLURM script for single-end fasta data and fasta output format is shown below:
{{% panel header="`prinseq_single_end.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=PRINSEQ
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......@@ -59,7 +59,7 @@ The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qu
Simple PRINSEQ SLURM script for paired-end fastq data and fastq output format is shown below:
{{% panel header="`prinseq_paired_end.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=PRINSEQ
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -25,7 +25,7 @@ $ scythe --help
Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` is shown below:
{{% panel header="`scythe.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Scythe
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -28,7 +28,7 @@ where **input_reads.fastq** is the input file of sequencing data in fastq form
Simple SLURM Sickle script for Illumina single-end reads input file `input_reads.fastq` is shown below:
{{% panel header="`sickle_single.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Sickle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......@@ -56,7 +56,7 @@ where **input_reads_pair_1.fastq** and **input_reads_pair_2.fastq** are the in
Simple SLURM Sickle script for Sanger paired-end reads input files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq` is shown below:
{{% panel header="`sickle_paired.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Sickle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -25,7 +25,7 @@ $ tagcleaner.pl --help
Simple TagCleaner script for removing known 3' and 5' tag sequences (`NNNCCAAACACACCCAACACA` and `TGTGTTGGGTGTGTTTGGNNN` respectively) is shown below:
{{% panel header="`tagcleaner.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=TagCleaner
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -35,7 +35,7 @@ Sample QIIME submit script to run **pick_open_reference_otus.py** is:
{{% panel header="`qiime.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=QIIME
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -22,7 +22,7 @@ $ cufflinks -h
An example of how to run Cufflinks on Crane with alignment file in SAM format, output directory `cufflinks_output` and 8 CPUs is shown below:
{{% panel header="`cufflinks.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Cufflinks
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -21,7 +21,7 @@ An example of how to run basic CAP3 SLURM script on Crane is shown
below:
{{% panel header="`cap3.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=CAP3
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
......@@ -35,7 +35,7 @@ CD-HIT is multi-threaded program, and therefore, using multiple threads is recom
Simple SLURM CD-HIT script for Crane with 8 CPUs is given in addition:
{{% panel header="`cd-hit.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=CD-HIT
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
......@@ -66,7 +66,7 @@ on crane is shown below:
{{% panel theme="info" header="dmtcp_blastx.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks=8
......@@ -98,7 +98,7 @@ following submit file:
{{% panel theme="info" header="dmtcp_restart_blastx.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks=8
......
......@@ -123,7 +123,7 @@ line.
{{% panel header="`submit_f.serial`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=Fortran
......@@ -137,7 +137,7 @@ module load compiler/gcc/4.9
{{% panel header="`submit_c.serial`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=C
......
......@@ -212,7 +212,7 @@ main program name.
{{% panel header="`submit_f.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
......@@ -226,7 +226,7 @@ mpirun ./demo_f_mpi.x
{{% panel header="`submit_c.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
......
......@@ -96,7 +96,7 @@ Content of Gaussian SLURM submission file `run-g09-general.slurm`:
{{% panel theme="info" header="run-g09-general.slurm" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09
#SBATCH --nodes=1 --ntasks-per-node=4
#SBATCH --mem-per-cpu=2000
......@@ -164,7 +164,7 @@ Submit your initial **g09** job with the following SLURM submission file:
{{% panel theme="info" header="Submit with dmtcp" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09-dmtcp
#SBATCH --nodes=1 --ntasks-per-node=16
#SBATCH --mem-per-cpu=4000
......@@ -214,7 +214,7 @@ resume your interrupted job:
{{% panel theme="info" header="Resume with dmtcp" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09-restart
#SBATCH --nodes=1 --ntasks-per-node=16
#SBATCH --mem-per-cpu=4000
......
......@@ -206,7 +206,7 @@ USE_HDF5=1
{{% panel theme="info" header="Sample submit script for PGI compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
......@@ -223,7 +223,7 @@ mpirun /path/to/olam-4.2c-mpi
{{% panel theme="info" header="Sample submit script for Intel compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
......
......@@ -78,7 +78,7 @@ Using Singularity in a SLURM job is similar to how you would use any other softw
{{% panel theme="info" header="Example Singularity SLURM script" %}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
......@@ -201,7 +201,7 @@ For example,
{{% panel theme="info" header="Example SLURM script" %}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
......
......@@ -34,7 +34,7 @@ The SLURM submit files for each step are below.
{{%expand "JobA.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobA
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
......@@ -49,7 +49,7 @@ sleep 120
{{%expand "JobB.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobB
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
......@@ -66,7 +66,7 @@ sleep 120
{{%expand "JobC.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobC
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
......@@ -83,7 +83,7 @@ sleep 120
{{%expand "JobC.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobD
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
......