Commit 08a1c06e authored by Carrie A Brown's avatar Carrie A Brown

Merge branch 'scriptstobash' into 'master'

/bin/sh to /bin/bash

Closes #37

See merge request !232
parents 33457313 fde2f4da
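The rationale behind the change can be sketched as follows (module names are illustrative, not from the MR): on systems where `/bin/sh` is a strict POSIX shell such as dash, bash-only constructs used in submit scripts are syntax errors, so scripts relying on them need `#!/bin/bash`.

```shell
# Sketch: two bash-only constructs that fail under a POSIX /bin/sh
# (e.g. dash), which is why the shebang change matters.
modules=(velvet cutadapt sickle)   # arrays are a bashism
count=${#modules[@]}

if [[ $count -eq 3 ]]; then        # [[ ]] is a bashism
  first=${modules[0]}
fi
echo "first of $count modules: $first"
```

Running the same script with `sh script.sh` on a dash-based system aborts at the array assignment, while `bash script.sh` (or the `#!/bin/bash` shebang) runs it as intended.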
@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -30,7 +30,7 @@ velveth output_directory/ 51 -fasta -short input_reads.fasta -fasta -shortPaired
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and the files generated by **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
@@ -10,7 +10,7 @@ weight = "10"
The first step of running Velvet is to run **velveth**:
{{% panel header="`velveth.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velveth
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
@@ -30,7 +30,7 @@ velveth output_directory/ 31 -fasta -short input_reads.fasta
After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and the files generated by **velveth**:
{{% panel header="`velvetg.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Velvet_Velvetg
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
@@ -25,7 +25,7 @@ $ cutadapt --help
A simple Cutadapt script that trims the adapter sequences **AGGCACACAGGG** and **TGAGACACGCA** from the 3' end and **AACCGGTT** from the 5' end of a single-end fasta input file is shown below:
{{% panel header="`cutadapt.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Cutadapt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -27,7 +27,7 @@ The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qu
A simple PRINSEQ SLURM script for single-end fasta data and fasta output format is shown below:
{{% panel header="`prinseq_single_end.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=PRINSEQ
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
@@ -59,7 +59,7 @@ The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qu
A simple PRINSEQ SLURM script for paired-end fastq data and fastq output format is shown below:
{{% panel header="`prinseq_paired_end.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=PRINSEQ
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -25,7 +25,7 @@ $ scythe --help
A simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` is shown below:
{{% panel header="`scythe.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Scythe
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -28,7 +28,7 @@ where **input_reads.fastq** is the input file of sequencing data in fastq form
A simple SLURM Sickle script for the Illumina single-end reads input file `input_reads.fastq` is shown below:
{{% panel header="`sickle_single.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Sickle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
@@ -56,7 +56,7 @@ where **input_reads_pair_1.fastq** and **input_reads_pair_2.fastq** are the in
A simple SLURM Sickle script for the Sanger paired-end reads input files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq` is shown below:
{{% panel header="`sickle_paired.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Sickle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -25,7 +25,7 @@ $ tagcleaner.pl --help
A simple TagCleaner script for removing the known 3' and 5' tag sequences (`NNNCCAAACACACCCAACACA` and `TGTGTTGGGTGTGTTTGGNNN`, respectively) is shown below:
{{% panel header="`tagcleaner.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=TagCleaner
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -35,7 +35,7 @@ Sample QIIME submit script to run **pick_open_reference_otus.py** is:
{{% panel header="`qiime.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=QIIME
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
@@ -22,7 +22,7 @@ $ cufflinks -h
An example of how to run Cufflinks on Crane with an alignment file in SAM format, the output directory `cufflinks_output`, and 8 CPUs is shown below:
{{% panel header="`cufflinks.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=Cufflinks
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
@@ -21,7 +21,7 @@ An example of how to run basic CAP3 SLURM script on Crane is shown
below:
{{% panel header="`cap3.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=CAP3
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
......
@@ -35,7 +35,7 @@ CD-HIT is multi-threaded program, and therefore, using multiple threads is recom
A simple SLURM CD-HIT script for Crane using 8 CPUs is shown below:
{{% panel header="`cd-hit.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=CD-HIT
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
......
@@ -66,7 +66,7 @@ on crane is shown below:
{{% panel theme="info" header="dmtcp_blastx.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks=8
@@ -98,7 +98,7 @@ following submit file:
{{% panel theme="info" header="dmtcp_restart_blastx.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=BlastX
#SBATCH --nodes=1
#SBATCH --ntasks=8
......
@@ -123,7 +123,7 @@ line.
{{% panel header="`submit_f.serial`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=Fortran
@@ -137,7 +137,7 @@ module load compiler/gcc/4.9
{{% panel header="`submit_c.serial`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=C
......
@@ -212,7 +212,7 @@ main program name.
{{% panel header="`submit_f.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
@@ -226,7 +226,7 @@ mpirun ./demo_f_mpi.x
{{% panel header="`submit_c.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
......
@@ -96,7 +96,7 @@ Content of Gaussian SLURM submission file `run-g09-general.slurm`:
{{% panel theme="info" header="run-g09-general.slurm" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09
#SBATCH --nodes=1 --ntasks-per-node=4
#SBATCH --mem-per-cpu=2000
@@ -164,7 +164,7 @@ Submit your initial **g09** job with the following SLURM submission file:
{{% panel theme="info" header="Submit with dmtcp" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09-dmtcp
#SBATCH --nodes=1 --ntasks-per-node=16
#SBATCH --mem-per-cpu=4000
@@ -214,7 +214,7 @@ resume your interrupted job:
{{% panel theme="info" header="Resume with dmtcp" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH -J g09-restart
#SBATCH --nodes=1 --ntasks-per-node=16
#SBATCH --mem-per-cpu=4000
......
@@ -206,7 +206,7 @@ USE_HDF5=1
{{% panel theme="info" header="Sample submit script for PGI compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
@@ -223,7 +223,7 @@ mpirun /path/to/olam-4.2c-mpi
{{% panel theme="info" header="Sample submit script for Intel compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
......
@@ -78,7 +78,7 @@ Using Singularity in a SLURM job is similar to how you would use any other softw
{{% panel theme="info" header="Example Singularity SLURM script" %}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
@@ -201,7 +201,7 @@ For example,
{{% panel theme="info" header="Example SLURM script" %}}
{{< highlight bash >}}
#!/bin/sh
#!/bin/bash
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
......
@@ -34,7 +34,7 @@ The SLURM submit files for each step are below.
{{%expand "JobA.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobA
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
@@ -49,7 +49,7 @@ sleep 120
{{%expand "JobB.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobB
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
@@ -66,7 +66,7 @@ sleep 120
{{%expand "JobC.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobC
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
@@ -83,7 +83,7 @@ sleep 120
{{%expand "JobD.submit" %}}
{{< highlight batch >}}
#!/bin/sh
#!/bin/bash
#SBATCH --job-name=JobD
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
......
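The JobA–JobD submit files above form a dependency chain; one way to wire it up at submit time is sketched below, assuming (as the page's layout suggests) that JobB and JobC each depend on JobA, and JobD depends on both. The `sbatch` function here is a stub standing in for SLURM, which is not available outside the cluster, and always prints the same ID; the real `sbatch` prints `Submitted batch job <id>` with a unique ID, which `awk` extracts for `--dependency=afterok:`.

```shell
# Stub for illustration only -- on the cluster, delete this so the
# real sbatch is used (it returns a unique ID per submission).
sbatch() { echo "Submitted batch job 12345"; }

# Submit a job and capture its numeric job ID from sbatch's output.
submit_id() { sbatch "$@" | awk '{print $4}'; }

jobA=$(submit_id JobA.submit)
# JobB and JobC both wait for JobA to finish successfully:
jobB=$(submit_id --dependency=afterok:$jobA JobB.submit)
jobC=$(submit_id --dependency=afterok:$jobA JobC.submit)
# JobD waits for both JobB and JobC:
jobD=$(submit_id --dependency=afterok:$jobB:$jobC JobD.submit)
echo "submitted: $jobA $jobB $jobC $jobD"
```

With `afterok`, a dependent job only starts once the listed jobs have completed with exit code 0; if a prerequisite fails, the dependent job is held.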