From e07bd1beb879a258ea2880498f83136ad4473cc6 Mon Sep 17 00:00:00 2001
From: npavlovikj <npavlovikj2@unl.edu>
Date: Mon, 10 Dec 2018 21:46:58 -0600
Subject: [PATCH] Fix header in bio pages and update yubikey info

---
 .../alignment_tools/blast/_index.md           |  8 +--
 .../blast/create_local_blast_database.md      |  6 +--
 .../blast/running_blast_alignment.md          | 13 ++---
 .../alignment_tools/blat.md                   | 10 ++--
 .../alignment_tools/bowtie.md                 | 11 ++--
 .../alignment_tools/bowtie2.md                | 11 ++--
 .../alignment_tools/bwa/_index.md             |  7 +--
 .../bwa/running_bwa_commands.md               | 50 ++++++++---------
 .../alignment_tools/clustal_omega.md          | 18 ++++---
 .../alignment_tools/tophat_tophat2.md         | 10 ++--
 .../biodata_module/_index.md                  | 18 ++++---
 .../bamtools/_index.md                        |  9 ++--
 .../bamtools/running_bamtools_commands.md     | 54 ++++++++++---------
 .../samtools/_index.md                        |  6 ++-
 .../samtools/running_samtools_commands.md     | 34 ++++++------
 .../data_manipulation_tools/sratoolkit.md     | 10 ++--
 .../de_novo_assembly_tools/oases.md           | 15 +++---
 .../de_novo_assembly_tools/ray.md             | 19 ++++---
 .../de_novo_assembly_tools/soapdenovo2.md     | 16 +++---
 .../de_novo_assembly_tools/trinity/_index.md  | 11 ++--
 .../running_trinity_in_multiple_steps.md      | 13 +++--
 .../de_novo_assembly_tools/velvet/_index.md   |  7 +--
 .../running_velvet_with_paired_end_data.md    | 12 +++--
 ...vet_with_single_end_and_paired_end_data.md | 13 +++--
 .../running_velvet_with_single_end_data.md    | 12 +++--
 .../downloading_sra_data_from_ncbi.md         | 21 ++++----
 .../pre_processing_tools/cutadapt.md          | 20 +++----
 .../pre_processing_tools/prinseq.md           | 21 ++++----
 .../pre_processing_tools/scythe.md            | 21 ++++----
 .../pre_processing_tools/sickle.md            | 35 ++++++------
 .../pre_processing_tools/tagcleaner.md        | 17 +++---
 .../bioinformatics_tools/qiime.md             | 11 ++--
 .../cufflinks.md                              | 15 +++---
 .../cap3.md                                   | 19 ++++---
 .../cd_hit.md                                 | 22 ++++----
 .../quickstarts/setting_up_and_using_duo.md   |  9 ++--
 36 files changed, 336 insertions(+), 268 deletions(-)

diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md
index 1a67b199..a709aaef 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/_index.md
@@ -7,10 +7,10 @@ weight = "52"
 
 [BLAST](https://blast.ncbi.nlm.nih.gov/Blast.cgi) is a local alignment tool that finds similarity between sequences. This tool compares nucleotide or protein sequences to sequence databases and calculates the significance of matches. Sometimes these input sequences are large, and using command-line BLAST is required.
 
-The following pages, [Create Local BLAST Database](create_local_blast_database) and [Running BLAST Alignment](running_blast_alignment) describe how to run some of the most common BLAST executables as a single job using the SLURM scheduler on HCC.
+The following pages, [Create Local BLAST Database](create_local_blast_database) and [Running BLAST Alignment](running_blast_alignment), describe how to run some of the most common BLAST executables as a single job using the SLURM scheduler on HCC.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the BLAST (blast/2.2) performance on Tusker, we aligned three nucleotide query datasets, `small.fasta`, `medium.fasta` and `large.fasta`, against the non-redundant nucleotide **nt.fasta** database from NCBI. Some statistics about the query datasets and the time and memory resources used for the alignment are shown in the table below:
-{{< readfile file="/static/html/blast.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/blast.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md
index a90c0b53..13f38d3b 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/create_local_blast_database.md
@@ -11,7 +11,7 @@ $ makeblastdb -in input_reads.fasta -dbtype [nucl|prot] -out input_reads_db
 {{< /highlight >}}
 where **input_reads.fasta** is the input file containing all sequences that need to be made into a database, and **dbtype** can be either `nucl` or `prot` depending on the type of the input file.
 
-\\
+
 A simple example of how **makeblastdb** can be run on Tusker using a SLURM script and a nucleotide database is shown below:
 {{% panel header="`blast_db.submit`"%}}
 {{< highlight bash >}}
@@ -30,8 +30,8 @@ makeblastdb -in input_reads.fasta -dbtype nucl -out input_reads_db
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
+
 More parameters used with **makeblastdb** can be seen by typing:
 {{< highlight bash >}}
 $ makeblastdb -help
-{{< /highlight >}}
\ No newline at end of file
+{{< /highlight >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md
index fb2cf5e2..6423e3f0 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment.md
@@ -13,6 +13,7 @@ Basic BLAST has the following commands:
 - **tblastn**: search translated nucleotide database using a protein query
 - **tblastx**: search translated nucleotide database using a translated nucleotide query
 
+
 The basic usage of **blastn** is:
 {{< highlight bash >}}
 $ blastn -query input_reads.fasta -db input_reads_db -out blastn_output.alignments [options]
@@ -26,7 +27,7 @@ $ blastn -help
 
 These BLAST alignment commands are multi-threaded, and therefore using the BLAST option **-num_threads <number_of_CPUs>** is recommended.
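+
+For example, following the usage above, a hypothetical run on 8 CPUs might look like:
+{{< highlight bash >}}
+$ blastn -query input_reads.fasta -db input_reads_db -out blastn_output.alignments -num_threads 8
+{{< /highlight >}}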
 
-\\
+
 HCC hosts multiple BLAST databases and indices on both Tusker and Crane. In order to use these resources, the ["biodata" module](../../../biodata_module) needs to be loaded first. The **$BLAST** variable contains the following currently available databases:
 
 - **16SMicrobial**
@@ -51,7 +52,7 @@ HCC hosts multiple BLAST databases and indices on both Tusker and Crane. In orde
 
 If you want to create and use a BLAST database that is not mentioned above, check [Create Local BLAST Database](create_local_blast_database).
 
-\\
+
 A basic SLURM example of a nucleotide BLAST run against the non-redundant **nt** BLAST database with `8 CPUs` is provided below. When running BLAST alignment, it is recommended to first copy the query and database files to the **/scratch/** directory of the worker node. The BLAST output is also saved in this directory (**/scratch/blastn_output.alignments**). After BLAST finishes, the output file is copied from the worker node to your current work directory.
 {{% notice info %}}
 **Please note that the worker nodes cannot write to the */home/* directories and therefore you need to run your job from your */work/* directory.**
@@ -81,16 +82,16 @@ cp /scratch/blastn_output.alignments $WORK/<project_folder>
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
+
 One important BLAST parameter is the **e-value threshold**, which limits the returned hits to those with an e-value lower than the given value. To show the hits with **e-value** lower than 1e-10, modify the given script as follows:
 {{< highlight bash >}}
 $ blastn -query input_reads.fasta -db input_reads_db -out blastn_output.alignments -num_threads $SLURM_NTASKS_PER_NODE -evalue 1e-10
 {{< /highlight >}}
 
-\\
+
 The default BLAST output is in pairwise format. However, BLAST’s parameter **-outfmt** supports output in [different formats](https://www.ncbi.nlm.nih.gov/books/NBK279684/) that are easier for parsing.
 
-\\
+
 A basic SLURM example of a protein BLAST run against the non-redundant **nr** BLAST database with tabular output format and `8 CPUs` is shown below. As before, the query and database files are copied to the **/scratch/** directory. The BLAST output is also saved in this directory (**/scratch/blastx_output.alignments**). After BLAST finishes, the output file is copied from the worker node to your current work directory.
 {{% notice info %}}
 **Please note that the worker nodes cannot write to the */home/* directories and therefore you need to run your job from your */work/* directory.**
@@ -118,4 +119,4 @@ blastx -query /scratch/input_reads.fasta -db /scratch/nr -outfmt 6 -out /scratch
 
 cp /scratch/blastx_output.alignments $WORK/<project_folder>
 {{< /highlight >}}
-{{% /panel %}}
\ No newline at end of file
+{{% /panel %}}
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md
index 4f18d22a..19d29a5a 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/blat.md
@@ -7,18 +7,20 @@ weight = "10"
 
 BLAT is a pairwise alignment tool similar to BLAST. It is more accurate and about 500 times faster than existing tools for mRNA/DNA alignments, and about 50 times faster for protein/protein alignments. BLAT accepts short and long query and database sequences as input files.
 
+
 The basic usage of BLAT is:
 {{< highlight bash >}}
 $ blat database query output_alignment.txt [options]
 {{< /highlight >}}
 where **database** is the name of the database used for the alignment, **query** is the name of the input file of sequence data in `fasta/nib/2bit` format, and **output_alignment.txt** is the output alignment file.
 
+
 Additional parameters for BLAT alignment can be found in the [manual](http://genome.ucsc.edu/FAQ/FAQblat), or by using:
 {{< highlight bash >}}
 $ blat
 {{< /highlight >}}
 
-\\
+
 Running BLAT on Tusker with query file `input_reads.fasta` and database `db.fa` is shown below:
 {{% panel header="`blat_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -39,8 +41,8 @@ blat db.fa input_reads.fasta output_alignment.txt
 
 Although BLAT is a single-threaded program (`#SBATCH --nodes=1`, `#SBATCH --ntasks-per-node=1`), it is still much faster than the other alignment tools.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BLAT Output</span>
+
+### BLAT Output
 
 BLAT output is a list containing the following information:
 
@@ -48,4 +50,4 @@ BLAT output is a list containing the following information:
 - the region of query sequence that matches the database sequence
 - the size of the query sequence
 - the level of identity as a percentage of the alignment
-- the chromosome and position that the query sequence maps to
\ No newline at end of file
+- the chromosome and position that the query sequence maps to
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md
index abcef635..8fe470b1 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie.md
@@ -4,8 +4,10 @@ description =  "How to run Bowtie on HCC resources"
 weight = "10"
 +++
 
+
 [Bowtie](http://bowtie-bio.sourceforge.net/index.shtml) is an ultrafast and memory-efficient tool for aligning large sets of sequencing reads to a reference genome. Bowtie indexes the genome with a Burrows-Wheeler index to keep its memory footprint small. Bowtie also supports usage of multiple processors to achieve greater alignment speed.
 
+
 The first and basic step of running Bowtie is to build and format an index from the reference genome. The basic usage of this command, **bowtie-build**, is:
 {{< highlight bash >}}
 $ bowtie-build input_reference.fasta index_prefix
@@ -19,9 +21,10 @@ $ bowtie [-q|-f|-r|-c] index_prefix [-1 input_reads_pair_1.[fasta|fastq] -2 inpu
 where **index_prefix** is the generated index using the **bowtie-build** command, and **options** are optional parameters that can be found in the [Bowtie
 manual](http://bowtie-bio.sourceforge.net/manual.shtml).
 
+
 Bowtie supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using the following flags: **-q** (fastq files), **-f** (fasta files), **-r** (raw one-sequence per line), or **-c** (sequences given on command line).
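+
+For instance, mirroring the single-end example below, a hypothetical paired-end fastq run could be invoked as:
+{{< highlight bash >}}
+$ bowtie -q index_prefix -1 input_reads_pair_1.fastq -2 input_reads_pair_2.fastq -p 8 > bowtie_alignments.sam
+{{< /highlight >}}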
 
-\\
+
 An example of how to run Bowtie alignment on Tusker with single-end fastq file and `8 CPUs` is shown below:
 {{% panel header="`bowtie_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -40,7 +43,7 @@ bowtie -q index_prefix input_reads.fastq -p $SLURM_NTASKS_PER_NODE > bowtie_alig
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Bowtie Output</span>
 
-Bowtie output is an alignment file in SAM format, where one line is one alignment. Each line is a collection of 8 fields separated by tabs. The fields are: name of the aligned reads, reference strand aligned to, name of reference sequence where the alignment occurs, 0-based offset into the forward reference strand where leftmost character of the alignment occurs, read sequence, read qualities, the number of other instances where the same sequence is aligned against the same reference characters, and comma-separated list of mismatch descriptors.
\ No newline at end of file
+### Bowtie Output
+
+Bowtie output is an alignment file in SAM format, where one line is one alignment. Each line is a collection of 8 tab-separated fields:
+
+- name of the aligned read
+- reference strand aligned to
+- name of the reference sequence where the alignment occurs
+- 0-based offset into the forward reference strand where the leftmost character of the alignment occurs
+- read sequence
+- read qualities
+- the number of other instances where the same sequence is aligned against the same reference characters
+- comma-separated list of mismatch descriptors
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md
index b92784fc..afc54f50 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bowtie2.md
@@ -4,6 +4,7 @@ description =  "How to run Bowtie2 on HCC resources"
 weight = "10"
 +++
 
+
+[Bowtie2](http://bowtie-bio.sourceforge.net/bowtie2/index.shtml) is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. Although Bowtie and Bowtie2 are both fast read aligners, there are a few main differences between them:
 
 - Bowtie2 supports gapped alignment with affine gap penalties, without restrictions on the number of gaps and gap lengths.
@@ -14,6 +15,7 @@ weight = "10"
 - Bowtie2 does not align colorspace reads.
 - Bowtie and Bowtie2 indices are not compatible.
 
+
 As with Bowtie, the first and basic step of running Bowtie2 is to build a Bowtie2 index from a reference genome sequence. The basic usage of the
 command **bowtie2-build** is:
 {{< highlight bash >}}
@@ -21,13 +23,14 @@ $ bowtie2-build -f input_reference.fasta index_prefix
 {{< /highlight >}}
 where **input_reference.fasta** is an input file of sequence reads in fasta format, and **index_prefix** is the prefix of the generated index files. Besides the option **-f** that is used when the reference input file is a fasta file, the option **-c** can be used when the reference sequences are given on the command line.
 
+
 The command **bowtie2** takes a Bowtie2 index and a set of sequencing read files and outputs a set of alignments in SAM format. The general **bowtie2** usage is:
 {{< highlight bash >}}
 $ bowtie2 -x index_prefix [-q|--qseq|-f|-r|-c] [-1 input_reads_pair_1.[fasta|fastq] -2 input_reads_pair_2.[fasta|fastq] | -U input_reads.[fasta|fastq]] -S bowtie2_alignments.sam [options]
 {{< /highlight >}}
 where **index_prefix** is the generated index using the **bowtie2-build** command, and **options** are optional parameters that can be found in the [Bowtie2 manual](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml). Bowtie2 supports both single-end (`input_reads.[fasta|fastq]`) and paired-end (`input_reads_pair_1.[fasta|fastq]`, `input_reads_pair_2.[fasta|fastq]`) files in fasta or fastq format. The format of the input files also needs to be specified by using one of the following flags: **-q** (fastq files), **--qseq** (Illumina's qseq format), **-f** (fasta files), **-r** (raw one sequence per line), or **-c** (sequences given on command line).
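+
+As a sketch, a single-end run with a hypothetical fastq file and 8 CPUs might look like:
+{{< highlight bash >}}
+$ bowtie2 -x index_prefix -q -U input_reads.fastq -S bowtie2_alignments.sam -p 8
+{{< /highlight >}}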
 
-\\
+
 An example of how to run Bowtie2 local alignment on Tusker with paired-end fasta files and `8 CPUs` is shown below:
 {{% panel header="`bowtie2_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -46,7 +49,7 @@ bowtie2 -x index_prefix -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.fas
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Bowtie2 Output</span>
 
-Bowtie2 outputs alignments in SAM format that can further be manipulated with different tools, like SAMtools and GATK. Each line from the file describes an alignment and is a collection of at least 12 fields separated by tabs. Detailed information about Bowtie2 output fields can be found in the Bowtie2 manual.
\ No newline at end of file
+### Bowtie2 Output
+
+Bowtie2 outputs alignments in SAM format that can be further manipulated with different tools, such as SAMtools and GATK. Each line of the file describes an alignment and is a collection of at least 12 tab-separated fields. Detailed information about the Bowtie2 output fields can be found in the Bowtie2 manual.
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/_index.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/_index.md
index 53e855a3..5d1e3510 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/_index.md
@@ -3,7 +3,7 @@ title = "BWA"
 description = "How to use BWA on HCC machines"
 weight = "52"
 +++
- 
+
 
 BWA (Burrows-Wheeler Aligner) is a software package for mapping relatively short nucleotide sequences against a long reference sequence. BWA is slower than Bowtie, but allows indels in the alignment.
 
@@ -11,7 +11,7 @@ The basic usage of BWA is:
 {{< highlight bash >}}
 $ bwa COMMAND [options]
 {{< /highlight >}}
-where **COMMAND** is one of the available BWA commands:
+where **COMMAND** is one of the available BWA commands:
 
 - **index**: index sequences in the FASTA format
 - **mem**: BWA-MEM algorithm
@@ -35,4 +35,5 @@ $  bwa COMMAND
 {{< /highlight >}}
 or check the [BWA manual](http://bio-bwa.sourceforge.net/bwa.shtml).
 
-The page [Running BWA Commands](running_bwa_commands) shows how to run BWA on HCC.
\ No newline at end of file
+
+The page [Running BWA Commands](running_bwa_commands) shows how to run BWA on HCC.
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md
index f7f274ad..6a8e4839 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/bwa/running_bwa_commands.md
@@ -4,7 +4,7 @@ description =  "How to run BWA commands on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Index:</span>
+## BWA Index
 
 The first step of using BWA is to make an index of the reference genome in fasta format. The basic usage of the **bwa index** is:
 {{< highlight bash >}}
@@ -12,8 +12,8 @@ $ bwa index [-a bwtsw|is] input_reference.fasta index_prefix
 {{< /highlight >}}
 where **input_reference.fasta** is an input file of the reference genome in fasta format, and **index_prefix** is the prefix of the generated index files. The option **-a** is required and can have two values: **bwtsw** (does not work for short genomes) and **is** (does not work for long genomes). Therefore, this value is chosen according to the length of the genome.
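+
+For instance, following the usage above, indexing a long reference genome (where **bwtsw** is the appropriate choice) might look like:
+{{< highlight bash >}}
+$ bwa index -a bwtsw input_reference.fasta index_prefix
+{{< /highlight >}}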
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Mem:</span>
+
+## BWA Mem
 
 The **bwa mem** algorithm is one of the three algorithms provided by BWA. It performs local alignment and produces alignments for different parts of the query sequence. The basic usage of **bwa mem** is:
 {{< highlight bash >}}
@@ -21,7 +21,7 @@ $ bwa mem index_prefix [input_reads.fastq|input_reads_pair_1.fastq input_reads_p
 {{< /highlight >}}
 where **index_prefix** is the index for the reference genome generated from **bwa index**, and **input_reads.fastq**, **input_reads_pair_1.fastq**, **input_reads_pair_2.fastq** are the input files of sequencing data that can be single-end or paired-end respectively. Additional **options** for **bwa mem** can be found in the BWA manual.
 
-\\
+
 A simple SLURM script for running **bwa mem** on Tusker with paired-end fastq input data, `index_prefix` as the reference genome index, SAM output file and `8 CPUs` is shown below:
 {{% panel header="`bwa_mem.submit`"%}}
 {{< highlight bash >}}
@@ -40,8 +40,8 @@ bwa mem index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq -t $SLURM
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Bwasw:</span>
+
+## BWA Bwasw
 
 The **bwa bwasw** algorithm is another algorithm provided by BWA. For input files with single-end reads it aligns the query sequences. For input files with paired-end reads it performs paired-end alignment that only works for Illumina reads.
 
@@ -50,16 +50,16 @@ An example of **bwa bwasw** for single-end input file `input-reads.fasta` in fas
 $ bwa bwasw index_prefix input_reads.fasta -t $SLURM_NTASKS_PER_NODE > bwa_bwasw_alignments.sam
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Aln:</span>
+
+## BWA Aln
 
 The third BWA algorithm, **bwa aln**, aligns the input file of sequence data to the reference genome. An example of running **bwa aln** with a single-end `input_reads.fasta` input file and `8 CPUs` is shown below:
 {{< highlight bash >}}
 $ bwa aln index_prefix input_reads.fasta -0 -t $SLURM_NTASKS_PER_NODE > bwa_aln_alignments.sai
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Samse and BWA Sampe:</span>
+
+## BWA Samse and BWA Sampe
 
 The command **bwa samse** uses the `bwa_aln_alignments.sai` output from **bwa aln** in order to generate a SAM file from the alignments for single-end reads.
 
@@ -77,32 +77,32 @@ $ bwa samse -f bwa_aln_alignments.sam index_prefix bwa_aln_alignments_pair_1.sai
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Fastmap:</span>
+
+## BWA Fastmap
 
 The command **bwa fastmap** identifies and outputs super-maximal exact matches (SMEMs). The basic usage of **bwa fastmap** is:
 {{< highlight bash >}}
 $ bwa fastmap index_prefix input_reads.fasta > bwa_fastmap.matches
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Pemerge:</span>
+
+## BWA Pemerge
 
 The command **bwa pemerge** merges overlapping paired ends and can print either only the merged reads or the unmerged ones. An example of **bwa pemerge** of `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq` with `8 CPUs` and output file `output_reads_merged.fastq` that contains only the merged reads is shown below:
 {{< highlight bash >}}
 $ bwa pemerge -m input_reads_pair_1.fastq input_reads_pair_2.fastq -t $SLURM_NTASKS_PER_NODE > output_reads_merged.fastq
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Fa2pac:</span>
+
+## BWA Fa2pac
 
 The command **bwa fa2pac** converts fasta to pac files. The general usage of **bwa fa2pac** is:
 {{< highlight bash >}}
 $ bwa fa2pac input_reads.fasta pac_prefix
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Pac2bwt and BWA Pac2bwtgen:</span>
+
+## BWA Pac2bwt and BWA Pac2bwtgen
 
 The commands **bwa pac2bwt** and **bwa pac2bwtgen** convert pac to bwt files.
 
@@ -118,24 +118,24 @@ $ bwa pac2bwtgen input_reads.pac output_reads.bwt
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Bwtupdate:</span>
+
+## BWA Bwtupdate
 
 The command **bwa bwtupdate** updates bwt files to the new format. The general usage of **bwa bwtupdate** is:
 {{< highlight bash >}}
 $ bwa bwtupdate input_reads.bwt
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BWA Bwt2sa:</span>
+
+## BWA Bwt2sa
 
 The command **bwa bwt2sa** generates sa files from bwt and Occ files. The basic usage of **bwa bwt2sa** is:
 {{< highlight bash >}}
 $ bwa bwt2sa input_reads.bwt output_reads.sa
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the scalability of BWA (bwa/0.7) on Crane, we used two paired-end input fastq files, `large_1.fastq` and `large_2.fastq`, and one single-end input fasta file, `large.fasta`. Some statistics about the input files and the time and memory resources used by **bwa mem** are shown in the table below:
-{{< readfile file="/static/html/bwa.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/bwa.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md
index 717c7eb8..8ede108b 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/clustal_omega.md
@@ -4,15 +4,17 @@ description =  "How to run Clustal Omega on HCC resources"
 weight = "10"
 +++
 
+
 [Clustal Omega](http://www.clustal.org/omega/) is a general purpose multiple sequence alignment (MSA) tool used mainly with protein, as well as DNA and RNA sequences. Clustal Omega is a fast and scalable aligner that can align datasets of hundreds of thousands of sequences in reasonable time.
 
+
 The general usage of Clustal Omega is:
 {{< highlight bash >}}
 $ clustalo -i input_file.fasta -o output_file.fasta [options]
 {{< /highlight >}}
 where **input_file.fasta** is the multiple sequence input file in `fasta` format, and **output_file.fasta** is the multiple sequence alignment output file in `fasta` format.
 
-\\
+
 Clustal Omega accepts 3 types of sequence input files:
 
 - sequence file with aligned/unaligned sequences
@@ -21,13 +23,13 @@ Clustal Omega accepts 3 types of sequence input files:
 
 These input files must contain at least 2 sequences and must be in one of the following MSA file formats: `a2m`, `fa[sta]`, `clu[stal]`, `msf`, `phy[lip]`, `selex`, `st[ockholm]`, `vie[nna]`. Moreover, if not specified, the generated output file is in `fasta` format.
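+
+For example, a hypothetical run that reads a Clustal-format alignment and writes Phylip output via the **--outfmt** option might look like:
+{{< highlight bash >}}
+$ clustalo -i input_file.clu -o output_file.phy --outfmt=phy
+{{< /highlight >}}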
 
-\\
+
 More Clustal Omega options can be found by typing:
 {{< highlight bash >}}
 $ clustalo -h
 {{< /highlight >}}
 
-\\
+
 Running Clustal Omega on Tusker with input file `input_reads.fasta` with `8 threads` and `10GB memory` is shown below:
 {{% panel header="`clustal_omega.submit`"%}}
 {{< highlight bash >}}
@@ -54,13 +56,13 @@ $ clustalo -i input_reads.sto --dealign -v
 {{< /highlight >}}
 Clustal Omega will read the input file in Stockholm format, de-align the sequences, and then re-align them, printing a progress report in the meantime (**-v**). Because it is not specified, the output will be in the default `fasta` format.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Clustal Omega Output</span>
+
+### Clustal Omega Output
 
 The basic Clustal Omega output produces one alignment file in the specified output format. More intermediate outputs can be generated using specific Clustal Omega options, such as: **--distmat-out=<file>** (*pairwise distance matrix output file*) and **--guidetree-out=<file>** (*guide tree output file*).
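+
+A hypothetical run that also saves these intermediate files might look like (note that the distance matrix output may additionally require the **--full** option):
+{{< highlight bash >}}
+$ clustalo -i input_file.fasta -o output_file.fasta --full --distmat-out=distances.txt --guidetree-out=tree.dnd
+{{< /highlight >}}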
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the Clustal Omega performance on Tusker, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega on Tusker are shown in the table below:
-{{< readfile file="/static/html/clustal_omega.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/clustal_omega.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md b/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md
index 294371c7..92a3208a 100644
--- a/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md
+++ b/content/guides/running_applications/bioinformatics_tools/alignment_tools/tophat_tophat2.md
@@ -9,6 +9,7 @@ weight = "10"
 
 Although TopHat and TopHat2 have the same available options and number of output files, TopHat2 incorporates many significant improvements over TopHat. The TopHat package at HCC supports both **tophat** and **tophat2**.
 
+
 The basic usage of TopHat2 is:
 {{< highlight bash >}}
 $ [tophat|tophat2] [options] index_prefix [input_reads_pair_1.[fasta|fastq] input_reads_pair_2.[fasta|fastq] | input_reads.[fasta|fastq]]
@@ -17,6 +18,7 @@ where **index_prefix** is the basename of the genome index to be searched. This
 
 TopHat2 accepts a single file or a comma-separated list of paired-end and single-end reads in fasta or fastq format. The single-end reads need to be provided after the paired-end reads.
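+
+Following this convention, a hypothetical run with two lanes of paired-end reads might look like:
+{{< highlight bash >}}
+$ tophat2 index_prefix lane1_pair_1.fastq,lane2_pair_1.fastq lane1_pair_2.fastq,lane2_pair_2.fastq
+{{< /highlight >}}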
 
+
 More advanced TopHat2 options can be found in [its manual](https://ccb.jhu.edu/software/tophat/manual.shtml), or by typing:
 {{< highlight bash >}}
 $ tophat2 -h
@@ -24,7 +26,7 @@ $ tophat2 -h
 
 Prior to running TopHat/TopHat2, an index of the reference genome should be built using Bowtie/Bowtie2. Moreover, TopHat2 requires both the index file and the reference file to be in the same directory. If the reference file is not available, TopHat2 reconstructs it in its initial step using the index file.
 
-\\
+
 An example of how to run TopHat2 on Tusker with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix` and `8 CPUs` is shown below:
 {{% panel header="`tophat2_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -45,8 +47,8 @@ tophat2 -p $SLURM_NTASKS_PER_NODE index_prefix input_reads_pair_1.fastq input_re
 
 TopHat2 generates its own output directory `tophat_out/` that contains multiple TopHat2-generated files.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">TopHat2 Output</span>
+
+### TopHat2 Output
 
 TopHat2 produces a number of files in its `tophat_out/` output directory. Some of the generated files are:
 
@@ -56,4 +58,4 @@ TopHat2 produces number of files in its `tophat_out/` output directory. Some of
 - **insertions.bed**: BED track of insertions reported by TopHat
 - **deletions.bed**: BED track of deletions reported by TopHat
 - **prep_reads.info**: statistics about the input sequencing data (min/max read length, number of reads)
-- **align_summary.txt**: summary of the alignment counts (number of mapped reads, overall read mapping rate)
\ No newline at end of file
+- **align_summary.txt**: summary of the alignment counts (number of mapped reads, overall read mapping rate)
diff --git a/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md b/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md
index 8d1fcfa7..7fd7db27 100644
--- a/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/biodata_module/_index.md
@@ -4,6 +4,7 @@ description = "How to use Biodata Module on HCC machines"
 weight = "52"
 +++
 
+
 HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short-read aligner indices, etc. on both Tusker and Crane.  
 In order to use these resources, the "**biodata**" module needs to be loaded first.  
 For instructions on how to load a module, please check [Module Commands](#module_commands).
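+
+For example, loading the module typically looks like:
+{{< highlight bash >}}
+$ module load biodata
+{{< /highlight >}}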
@@ -19,19 +20,20 @@ The major environment variables are:
 **$GENOMES** - Directory containing all available genomes (multiple sources and builds possible)  
 **$INDICES** - Directory containing indices for bowtie, bowtie2, bwa for all available genomes  
 **$UNIPROT** - Directory containing latest release of full UniProt database
-\\
-\\
-\\
+
+
 In order to check what genomes are available, you can type:
 {{< highlight bash >}}
 $ ls $GENOMES
 {{< /highlight >}}
-\\
+
+
 In order to check what BLAST databases are available, you can just type:
 {{< highlight bash >}}
 $ ls $BLAST
 {{< /highlight >}}
-\\
+
+
 An example of how to run Bowtie2 local alignment on Crane utilizing the default horse (*Equus caballus*) index (*BOWTIE2\_HORSE*) with paired-end fasta files and 8 CPUs is shown below:
 {{% panel header="`bowtie2_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -51,7 +53,8 @@ bowtie2 -x $BOWTIE2_HORSE -f -1 input_reads_pair_1.fasta -2 input_reads_pair_2.f
 
 {{< /highlight >}}
 {{% /panel %}}
-\\
+
+
 An example of a BLAST run against the non-redundant nucleotide database available on Crane is provided below:
 {{% panel header="`blastn_alignment.submit`"%}}
 {{< highlight bash >}}
@@ -74,6 +77,7 @@ cp /scratch/blast_nucleotide.results .
 
 {{< /highlight >}}
 {{% /panel %}}
-  
+
+
 The organisms and their appropriate environment variables for all genomes and chromosome files, as well as for short-read aligner indices, are shown at the link below:  
 [Organisms](#organisms)
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/_index.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/_index.md
index c1e0d214..9399d8e3 100644
--- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/_index.md
@@ -3,15 +3,16 @@ title = "BamTools"
 description = "How to use BamTools on HCC machines"
 weight = "52"
 +++
- 
+
 
 The SAM/BAM format is a standard format for short read alignments. While SAM is the plain-text version of the alignments, BAM is the compressed, binary format of the alignments that is used for space-saving. BamTools is a toolkit for handling BAM files. It provides a powerful suite of command-line programs for manipulating and querying BAM files.
 
+
 The basic usage of BamTools is:
 {{< highlight bash >}}
 $ bamtools COMMAND [options]
 {{< /highlight >}}
-where **COMMAND** is one of the following BamTools commands:
+where **COMMAND** is one of the following BamTools commands:
 
 - **convert**: Converts between BAM and a number of other formats
 - **count**: Prints number of alignments in BAM file(s)
@@ -27,10 +28,12 @@ where **COMMAND** is one of the following BamTools commands:
 - **split**: Splits a BAM file on user-specified property, creating a new BAM output file for each value found
 - **stats**: Prints some basic statistics from input BAM file(s)
 
+
 For a detailed description and more information on a specific command, just type:
 {{< highlight bash >}}
 $ bamtools help COMMAND
 {{< /highlight >}}
 or check the BamTools wiki: https://github.com/pezmaster31/bamtools/wiki.
 
-The page [Running BamTools Commands](running_bamtools_commands) shows how to run BamTools on HCC.
\ No newline at end of file
+
+The page [Running BamTools Commands](running_bamtools_commands) shows how to run BamTools on HCC.
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md
index 24901902..78ec596c 100644
--- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md
+++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/bamtools/running_bamtools_commands.md
@@ -4,7 +4,8 @@ description =  "How to run BamTools commands on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Convert:</span>
+
+## BamTools Convert
 
 One of the most frequently used BamTools commands is **convert**.
 
@@ -14,6 +15,7 @@ $ bamtools convert -format [bed|fasta|fastq|json|pileup|sam|yaml] -in input_alig
 {{< /highlight >}}
 where the option **-format** specifies the type of the output file, **input_alignments.bam** is the input BAM file, and **-out** defines the name and the type of the converted file.
 
+
 Running BamTools **convert** on Tusker with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
 {{% panel header="`bamtools_convert.submit`"%}}
 {{< highlight bash >}}
@@ -34,8 +36,8 @@ bamtools convert -format fastq -in input_alignments.bam -out output_reads.fastq
 
 All BamTools commands are single-threaded, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to **1**.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Count:</span>
+
+## BamTools Count
 
 The basic usage of the BamTools **count** is:
 {{< highlight bash >}}
@@ -43,8 +45,8 @@ $ bamtools count -in input_alignments.bam
 {{< /highlight >}}
 The command **bamtools count** outputs the total number of alignments in the BAM file.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Coverage:</span>
+
+## BamTools Coverage
 
 The basic usage of the BamTools **coverage** is:
 {{< highlight bash >}}
@@ -52,8 +54,8 @@ $ bamtools coverage -in input_alignments.bam -out output_reads_coverage.txt
 {{< /highlight >}}
 The command **bamtools coverage** prints the coverage data for a single BAM file.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Filter:</span>
+
+## BamTools Filter
 
 The basic usage of the BamTools **filter** is:
 {{< highlight bash >}}
@@ -61,8 +63,8 @@ $ bamtools filter -in input_alignments.bam -out output_alignments_filtered.bam -
 {{< /highlight >}}
 The command **bamtools filter** filters the BAM file based on specified options. In this example, the resulting BAM file `output_alignments_filtered.bam` contains alignments longer than 100 base pairs.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Header:</span>
+
+## BamTools Header
 
 The basic usage of the BamTools **header** is:
 {{< highlight bash >}}
@@ -70,8 +72,8 @@ $ bamtools header -in input_alignments.bam -out output_alignments_header.txt
 {{< /highlight >}}
 The command **bamtools header** prints the header of the BAM file.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Index:</span>
+
+## BamTools Index
 
 The basic usage of the BamTools **index** is:
 {{< highlight bash >}}
@@ -79,8 +81,8 @@ $ bamtools index -in input_alignments.bam
 {{< /highlight >}}
 The command **bamtools index** creates an index for the BAM file and produces an `input_alignments.bam.bai` file.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Merge:</span>
+
+## BamTools Merge
 
 The basic usage of the BamTools **merge** is:
 {{< highlight bash >}}
@@ -88,8 +90,8 @@ $ bamtools merge -in input_alignments_1.bam -in input_alignments_2.bam -in input
 {{< /highlight >}}
 The command **bamtools merge** merges multiple (more than 2) BAM files into one.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Random:</span>
+
+## BamTools Random
 
 The basic usage of the BamTools **random** is:
 {{< highlight bash >}}
@@ -97,8 +99,8 @@ $ bamtools random -in input_alignments.bam -out output_alignments_100.bam -n 100
 {{< /highlight >}}
 The command **bamtools random** grabs a random subset of alignments. With the option `-n 100`, 100 randomly chosen alignments are stored in the output file `output_alignments_100.bam`.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Resolve:</span>
+
+## BamTools Resolve
 
 The basic usage of the BamTools **resolve** is:
 {{< highlight bash >}}
@@ -106,8 +108,8 @@ $ bamtools resolve -twoPass -in input_alignments.bam -out output_alignments.bam
 {{< /highlight >}}
 The command **bamtools resolve** resolves paired-end reads. The resolving mode is required, and it can be `-makeStats`, `-markPairs`, or `-twoPass`.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Revert:</span>
+
+## BamTools Revert
 
 The basic usage of the BamTools **revert** is:
 {{< highlight bash >}}
@@ -115,8 +117,8 @@ $ bamtools revert -in input_alignments.bam -out output_alignments_reverted.bam
 {{< /highlight >}}
 The command **bamtools revert** removes duplicate marks and restores original base qualities.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Sort:</span>
+
+## BamTools Sort
 
 The basic usage of the BamTools **sort** is:
 {{< highlight bash >}}
@@ -124,8 +126,8 @@ $ bamtools sort -in input_alignments.bam -out output_alignments_sorted.bam -byna
 {{< /highlight >}}
 The command **bamtools sort** sorts a BAM file according to a given option. `output_alignments_sorted.bam` is the resulting file, where the alignments are sorted by name.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Split:</span>
+
+## BamTools Split
 
 The basic usage of the BamTools **split** is:
 {{< highlight bash >}}
@@ -133,11 +135,11 @@ $ bamtools split -in input_alignments.bam -mapped
 {{< /highlight >}}
 The command **bamtools split** splits a BAM file on a user-specified property and creates a new BAM output file for each value found. In the given example, an output file `input_alignments.MAPPED.bam` is produced after the `-mapped` split option is specified. Besides `-mapped`, the split option can be: `-paired`, `-reference`, or `-tag <tag_name>`.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">BamTools Stats:</span>
+
+## BamTools Stats
 
 The basic usage of the BamTools **stats** is:
 {{< highlight bash >}}
 $ bamtools stats -in input_alignments.bam
 {{< /highlight >}}
-The command **bamtools stats** prints general alignment statistics from the BAM file.
\ No newline at end of file
+The command **bamtools stats** prints general alignment statistics from the BAM file.
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/_index.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/_index.md
index a4fb2d1c..0246ba99 100644
--- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/_index.md
@@ -4,8 +4,10 @@ description = "How to use SAMtools on HCC machines"
 weight = "52"
 +++
 
+
 The SAM format is a standard format for storing large nucleotide sequence alignments. The BAM format is the binary form of SAM. [SAMtools](http://www.htslib.org/) is a toolkit for manipulating alignments in SAM/BAM format, including sorting, merging, indexing and generating alignments in a per-position format.
 
+
 The basic usage of SAMtools is:
 {{< highlight bash >}}
 $ samtools COMMAND [options]
@@ -32,10 +34,12 @@ where **COMMAND** is one of the following SAMtools commands**:**
 - **phase**: phase heterozygotes
 - **bamshuf**: shuffle and group alignments by name
 
+
 For a detailed description and more information on a specific command, just type:
 {{< highlight bash >}}
 $ samtools COMMAND
 {{< /highlight >}}
 or check the [SAMtools manual](http://www.htslib.org/doc/samtools.html).
 
-The page [Running SAMtools Commands](running_samtools_commands) shows how to run SAMtools on HCC.
\ No newline at end of file
+
+The page [Running SAMtools Commands](running_samtools_commands) shows how to run SAMtools on HCC.
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md
index c363f779..467b7d8a 100644
--- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md
+++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/samtools/running_samtools_commands.md
@@ -4,7 +4,8 @@ description =  "How to run SAMtools commands on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools View:</span>
+
+## SAMtools View
 
 One of the most frequently used SAMtools commands is **view**. The basic usage of **samtools view** is:
 {{< highlight bash >}}
@@ -12,6 +13,7 @@ $ samtools view input_alignments.[bam|sam] [options] -o output_alignments.[sam|b
 {{< /highlight >}}
 where **input_alignments.[bam|sam]** is the input file with the alignments in BAM/SAM format, and **output_alignments.[sam|bam]** file is the converted file into SAM or BAM format respectively.
 
+
 Running **samtools view** on Tusker with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**) and output file `output_alignments.bam` is shown below:
 {{% panel header="`samtools_view.submit`"%}}
 {{< highlight bash >}}
@@ -32,58 +34,58 @@ samtools view -bS -@ $SLURM_NTASKS_PER_NODE input_alignments.sam -o output_align
 
 The most intensive SAMtools commands (**samtools view**, **samtools sort**) are multi-threaded, and therefore using the SAMtools option **-@ <number_of_CPUs>** is recommended.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Sort:</span>
+
+## SAMtools Sort
 
 Sorting BAM files is recommended for further analysis of these files. The BAM file is sorted based on its position in the reference, as determined by its alignment. An example of using `4 CPUs` to sort the input file `input_alignments.bam` by the read name follows:
 {{< highlight bash >}}
 $ samtools sort -n -@ 4 input_alignments.bam output_alignments_sorted
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Index:</span>
+
+## SAMtools Index
 
 The **samtools index** command creates a new index file that allows fast look-up of the data in a sorted SAM or BAM file.
 {{< highlight bash >}}
 $ samtools index input_alignments_sorted.bam output_index.bai
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Idxstats:</span>
+
+## SAMtools Idxstats
 
 The **samtools idxstats** command prints stats for the BAM index file. The output is TAB delimited with each line consisting of *reference sequence name*, *sequence length*, *number of mapped reads* and *number of unmapped reads*.
 {{< highlight bash >}}
 $ samtools idxstats input_alignments_sorted.bam
 {{< /highlight >}}
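+
+As an illustration, one line of output might look like (the mapped/unmapped counts are hypothetical):
+{{< highlight bash >}}
+# name    length    mapped    unmapped
+chr1    248956422    1201745    5023
+{{< /highlight >}}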
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Merge:</span>
+
+## SAMtools Merge
 
 The **samtools merge** command merges multiple sorted alignments into one output file.
 {{< highlight bash >}}
 $ samtools merge output_alignments_merge.bam input_alignments_sorted_1.bam input_alignments_sorted_2.bam
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Faidx:</span>
+
+## SAMtools Faidx
 
 The command **samtools faidx** indexes the reference sequence in fasta format or extracts subsequences from an indexed reference sequence.
 {{< highlight bash >}}
 $ samtools faidx input_reference.fasta
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools Mpileup:</span>
+
+## SAMtools Mpileup
 
 The **samtools mpileup** command generates a file in `bcf` (with the **-g** option) or `pileup` format for one or multiple BAM files. For each genomic coordinate, the overlapping read bases and indels at that position in the input BAM file are printed.
 {{< highlight bash >}}
 $ samtools mpileup -g input_alignments_sorted.bam > output_alignments.bcf
 {{< /highlight >}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SAMtools View:</span>
+
+## SAMtools Tview
 
 The **samtools tview** command starts an interactive text alignment viewer that can be used to visualize how reads are aligned to specific regions of the reference genome.
 {{< highlight bash >}}
 $ samtools tview input_alignments_sorted.bam
-{{< /highlight >}}
\ No newline at end of file
+{{< /highlight >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md
index cdf537ef..836a3d23 100644
--- a/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md
+++ b/content/guides/running_applications/bioinformatics_tools/data_manipulation_tools/sratoolkit.md
@@ -11,12 +11,13 @@ The SRA Toolkit allows converting data from the SRA format to the following form
 
 The SRA Toolkit contains multiple **"format"-dump** commands, where **format** is the file format the SRA data is converted to: **abi-dump**, **fastq-dump**, **illumina-dump**, **sam-dump**, **sff-dump**, and **vdb-dump**.
 
+
 One of the most commonly used commands is **fastq-dump**:
 {{< highlight bash >}}
 $ fastq-dump [options] input_reads.sra
 {{< /highlight >}}
 
-\\
+
 An example of running **fastq-dump** on Tusker to convert an SRA file containing paired-end reads is:
 {{% panel header="`sratoolkit.submit`"%}}
 {{< highlight bash >}}
@@ -38,15 +39,16 @@ This script outputs two fastq paired end reads `input_reads_1.fastq` and `input_
 
 All SRA Toolkit commands are single-threaded, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` in the SLURM script are set to **1**.
 
-\\
+
 The SRA Toolkit contains multiple **"format"-load** commands, where **format** is the file format of the data that is uploaded to NCBI: `srf-load`, `sff-load`, `refseq-load`, `pacbio-load`, `illumina-load`, `helicos-load`, `fastq-load`, `cg-load`, `bam-load`, and `abi-load`.
 
+
 An example of uploading the BAM file `input_alignments.bam` to NCBI is shown below:
 {{< highlight bash >}}
 $ bam-load -o input_reads.sra input_alignments.bam
 {{< /highlight >}}
 
-\\
+
 Other frequently used SRA Toolkit tools are:
 
 - **prefetch**: allows command-line downloading of SRA, dbGaP, and ADSP data
@@ -78,4 +80,4 @@ Here, set *"/repository/user/main/public/root"* to *"/work/group/username/ncbi/p
 
 You need to do these steps only once.
-{{% /notice %}}
\ No newline at end of file
+{{% /notice %}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/oases.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/oases.md
index 9cfbcc1c..4ea2bb19 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/oases.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/oases.md
@@ -7,6 +7,7 @@ weight = "10"
 
 Velvet by itself generates assembled contigs for DNA data. However, using the Oases extension for Velvet, a transcriptome assembly can be produced. [Oases](https://www.ebi.ac.uk/~zerbino/oases/) is an extension of Velvet for generating de novo assembly for RNA-Seq data. Oases uses the preliminary assembly produced by Velvet as an input, and constructs transcripts.
 
+
 In order to be able to run Oases, after `velveth`, `velvetg` needs to be run with the `-read_trkg yes` option:
 {{< highlight bash >}}
 $ velvetg output_directory/ -min_contig_lgth 200 -read_trkg yes
@@ -20,9 +21,10 @@ contigs.fa  Graph2  LastGraph  Log  PreGraph  Roadmaps  Sequences  stats.txt
 {{< /highlight >}}
 {{% /panel %}}
 
+
 Oases has a lot of parameters that can be found in its [manual](https://www.ebi.ac.uk/~zerbino/oases/OasesManual.pdf). While Velvet is multi-threaded, Oases is not.
 
-\\
+
 A simple SLURM script to run Oases on the Velvet output stored in `output_directory/` with a minimum transcript length of `200` is shown below:
 {{% panel header="`oases.submit`"%}}
 {{< highlight bash >}}
@@ -41,8 +43,8 @@ oases output_directory/ -min_trans_lgth 200
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Oases Output</span>
+
+### Oases Output
 
 The `output_directory/` after Oases contains the following files:
 {{% panel header="`Output directory after Oases`"%}}
@@ -51,10 +53,11 @@ $ ls output_directory/
 contig-ordering.txt  contigs.fa  Graph2  LastGraph  Log  PreGraph  Roadmaps  Sequences  stats.txt  transcripts.fa
 {{< /highlight >}}
 {{% /panel %}}
+
 Oases produces two additional output files: `transcripts.fa` and `contig-ordering.txt`. The predicted transcript sequences are found in the fasta file `transcripts.fa`.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the Oases (oases/0.2.8) performance on Tusker, we used three pairs of input fastq files: `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases on Tusker are shown in the table below:
-{{< readfile file="/static/html/oases.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/oases.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/ray.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/ray.md
index 48593788..cea23364 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/ray.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/ray.md
@@ -3,10 +3,11 @@ title = "Ray"
 description =  "How to run Ray on HCC resources"
 weight = "10"
 +++
- 
+
 
 [Ray](http://denovoassembler.sourceforge.net/) is a de novo de Bruijn genome assembler that works with next-generation sequencing data (Illumina, 454, SOLiD). Ray is scalable and parallel software that takes advantage of multiple nodes and multiple CPUs using MPI (message passing interface).
 
+
 Ray can be used for multiple applications:
 
 - de novo genome assembly
@@ -16,6 +17,7 @@ Ray can be used for building multiple applications:
 - taxonomy and gene ontology profiling of samples
 - comparing DNA samples using words
 
+
 In order to see all options available for running Ray, just type:
 {{< highlight bash >}}
 $ mpiexec Ray -help
@@ -30,12 +32,12 @@ or can be stored in a configuration file `.conf` (one option per line):
 $ mpiexec Ray Ray.conf
 {{< /highlight >}}
 
-\\
+
 Ray supports both paired-end (`-p`) and single-end reads (`-s`). Moreover, Ray can detect the input files automatically if the input directory is provided (`-detect-sequence-files input_directory`).
 
 Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`). Ray supports multiple file formats such as `fasta`, `fa`, `fasta.gz`, `fa.gz`, `fasta.bz2`, `fa.bz2`, `fastq`, `fq`, `fastq.gz`, `fq.gz`, `fastq.bz2`, `fq.bz2`, `sff`, `csfasta`, `csfa`.
 
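+For example, Ray can be pointed at a directory and left to detect the input files automatically (a sketch; the directory names are placeholders):
+{{< highlight bash >}}
+$ mpiexec Ray -k 31 -detect-sequence-files input_directory/ -o output_directory/
+{{< /highlight >}}
+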
-\\
+
 Simple SLURM script for running Ray on Tusker with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below:
 {{% panel header="`ray.submit`"%}}
 {{< highlight bash >}}
@@ -54,12 +56,13 @@ mpiexec Ray -k 31 -p input_reads_pair_1.fastq input_reads_pair_2.fastq -s input_
 {{% /panel %}}
 where **input_reads_pair_1.fastq** and **input_reads_pair_2.fastq** are the paired-end input files in `fastq` format, and **input_reads.fasta** is the single-end input file in `fasta` format.
 
+
 {{% notice note %}}
 It is **not** necessary to specify the number of processes with the `-n` option to `mpiexec`. OpenMPI will determine that automatically from SLURM based on the value of the `--ntasks` option.
 {{% /notice %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Ray Output</span>
+
+### Ray Output
 
 In the output folder (`-o output_directory`) Ray writes a number of files with information about the different steps and statistics of the execution process. Information about all output files can be found in Ray's manual.
 
@@ -70,8 +73,8 @@ One of the most important results are:
 - **Contigs.fasta**: contiguous sequences in FASTA format
 - **OutputNumbers.txt**: overall numbers for the assembly
 
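+For a quick look at the assembly, the overall numbers can be printed directly (assuming the output directory above):
+{{< highlight bash >}}
+$ cat output_directory/OutputNumbers.txt
+{{< /highlight >}}
+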
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the Ray performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray on Tusker are shown in the table below:
-{{< readfile file="/static/html/ray.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/ray.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md
index 7e285d1f..cd3b36d3 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md
@@ -9,6 +9,7 @@ weight = "10"
 
 SOAPdenovo2 has two commands, **SOAPdenovo-63mer** and **SOAPdenovo-127mer**. The first one is suitable for assembly with k-mer values of at most 63 bp, requires less memory and runs faster. The latter one works for k-mer values of at most 127 bp.
 
+
 In order to see the options available for **SOAPdenovo-63mer** just
 type:
 {{< highlight bash >}}
@@ -17,13 +18,14 @@ $ SOAPdenovo-63mer
 
 SOAPdenovo2 provides a mechanism to run the whole workflow at once, or in 5 separate steps.
 
+
 The basic usage of SOAPdenovo2 is:
 {{< highlight bash >}}
 $ SOAPdenovo-63mer all -s configFile -o output_directory/outputGraph -K <kmer_value> [options]
 {{< /highlight >}}
 where **configFile** is a defined configuration file, **outputGraph** is the prefix of the output files, and **kmer_value** is the value of k-mer used for building the assembly (`<=63` for SOAPdenovo-63mer and `<=127` for SOAPdenovo-127mer).
 
-\\
+
 If you want to run the assembly process step by step, then use the following sequential commands:
 
 {{% panel theme="info" header="SOAPdenovo2 Step 1 Options" %}}
@@ -54,6 +56,7 @@ SOAPdenovo-63mer scaff -g inputGraph [options]
 
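+Put together, a sketch of the individual steps run sequentially might look like this (assuming `k-mer=31` and the output prefix used elsewhere on this page):
+{{< highlight bash >}}
+SOAPdenovo-63mer pregraph -s configFile -K 31 -o output_directory/output31
+SOAPdenovo-63mer contig -g output_directory/output31
+SOAPdenovo-63mer map -s configFile -g output_directory/output31
+SOAPdenovo-63mer scaff -g output_directory/output31
+{{< /highlight >}}
+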
 As you can see from the commands above, in order to run SOAPdenovo2, you first need to create a configuration file (`configFile`) that contains information about the read files (`read length`, `insert size`, `reads location`). SOAPdenovo2 accepts read files in 3 formats: fasta, fastq and bam.
 
+
 The example configuration file **configFile** for 2 paired-end fastq files, 1 paired-end fasta file and 1 single-end fastq file looks like:
 {{% panel header="`configFile`"%}}
 {{< highlight bash >}}
@@ -90,6 +93,7 @@ q=input_reads.fq
 
 After creating the configuration file **configFile**, the next step is to run the assembler using this file.
 
+
 Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUs` and `50GB of RAM` on Tusker is shown below:
 {{% panel header="`soapdenovo2.submit`"%}}
 {{< highlight bash >}}
@@ -108,8 +112,8 @@ SOAPdenovo-63mer all -s configFile -K 31 -o output_directory/output31 -p $SLURM_
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">SOAPdenovo2 Output</span>
+
+### SOAPdenovo2 Output
 
 SOAPdenovo2 outputs a number of files in its `output_directory/` after each executed step. The final assembly output is in the `.contig` file.
 {{% panel header="`Output directory after SOAPdenovo2`"%}}
@@ -121,10 +125,10 @@ output31.contig         output31.edge.gz           output31.links     output31.p
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the SOAPdenovo2 (soapdenovo2/r240) performance on Tusker, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below:
 {{< readfile file="/static/html/soapdenovo2.html" >}}
 
-In general, SOAPdenovo2 is a memory intensive assembler that requires approximately 30-60 GB memory for assembling 50 million reads. However, SOAPdenovo2 is a fast assembler and it takes around an hour to assemble 50 million reads.
\ No newline at end of file
+In general, SOAPdenovo2 is a memory-intensive assembler that requires approximately 30-60 GB of memory for assembling 50 million reads. However, SOAPdenovo2 is a fast assembler, and it takes around an hour to assemble 50 million reads.
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md
index 9d4cb8fb..1118484f 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md
@@ -3,10 +3,11 @@ title = "Trinity"
 description = "How to use Trinity on HCC machines"
 weight = "52"
 +++
- 
+
 
 [Trinity] (https://github.com/trinityrnaseq/trinityrnaseq/wiki) is a method for efficient and robust de novo reconstruction of transcriptomes from RNA-Seq data. Trinity combines three independent software modules: `Inchworm`, `Chrysalis`, and `Butterfly`. All these modules can be applied sequentially to process large RNA-Seq datasets.
 
+
 The basic usage of Trinity is:
 {{< highlight bash >}}
 $ Trinity --seqType [fa|fq] --JM <jellyfish_memory> --left input_reads_pair_1.[fa|fq] --right input_reads_pair_2.[fa|fq] [options]
@@ -18,6 +19,7 @@ Additional Trinity **options** can be found in the Trinity website, or by typing
 $ Trinity
 {{< /highlight >}}
 
+
 Running the Trinity pipeline from beginning to end on large datasets may exceed the walltime limit for a single job. Therefore, Trinity provides a mechanism to run the workflow in four separate steps, where each step resumes from the previous one. The same Trinity command and options are run for each step, with an additional option that is included for the different steps. On the last step, the Trinity command is run as normal.
 
 {{% panel theme="info" header="Step 1 Options" %}}
@@ -47,12 +49,13 @@ Trinity.pl [options]
 Each step may be run as its own job, providing a workaround for the single job walltime limit. To see how to run each step of Trinity as a single job under the SLURM scheduler on HCC, please check:
 {{% children %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the Trinity (trinity/r2014-04-13p1) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity on Tusker are shown in the table below:
 {{< readfile file="/static/html/trinity.html" >}}
 
+
 {{% notice tip %}}
 The Inchworm (step 1) and Chrysalis (step 2) steps can be memory intensive. A basic recommendation is to have **1GB of RAM per 1M ~76 base Illumina paired-end reads**.
-{{% /notice %}}
\ No newline at end of file
+{{% /notice %}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/running_trinity_in_multiple_steps.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/running_trinity_in_multiple_steps.md
index 789f8826..8caa894f 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/running_trinity_in_multiple_steps.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/trinity/running_trinity_in_multiple_steps.md
@@ -4,7 +4,8 @@ description =  "How to run Trinity in multiple steps on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Running Trinity with Paired-End fastq data with 8 CPUs and 100GB of RAM</span>
+
+## Running Trinity with Paired-End fastq data with 8 CPUs and 100GB of RAM
 
 The first step of running Trinity is to run Trinity with the option **--no_run_chrysalis**:
 {{% panel header="`trinity_step1.submit`"%}}
@@ -24,6 +25,7 @@ Trinity --seqType fq --JM 100G --left input_reads_pair_1.fastq --right input_rea
 {{< /highlight >}}
 {{% /panel %}}
 
+
 The second step of running Trinity is to run Trinity with the option **--no_run_quantifygraph**:
 {{% panel header="`trinity_step2.submit`"%}}
 {{< highlight bash >}}
@@ -42,6 +44,7 @@ Trinity --seqType fq --JM 100G --left input_reads_pair_1.fastq --right input_rea
 {{< /highlight >}}
 {{% /panel %}}
 
+
 The third step of running Trinity is to run Trinity with the option **--no_run_butterfly**:
 {{% panel header="`trinity_step3.submit`"%}}
 {{< highlight bash >}}
@@ -60,6 +63,7 @@ Trinity --seqType fq --JM 100G --left input_reads_pair_1.fastq --right input_rea
 {{< /highlight >}}
 {{% /panel %}}
 
+
 The fourth step of running Trinity is to run Trinity without any additional option:
 {{% panel header="`trinity_step4.submit`"%}}
 {{< highlight bash >}}
@@ -78,11 +82,12 @@ Trinity --seqType fq --JM 100G --left input_reads_pair_1.fastq --right input_rea
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Trinity Output</span>
+
+### Trinity Output
 
 Trinity outputs a number of files in its `trinity_out/` output directory after each executed step. The output file `Trinity.fasta` is the final Trinity output that contains the assembled transcripts.
 
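+A quick, optional way to count the assembled transcripts (a sketch, assuming the default output directory):
+{{< highlight bash >}}
+$ grep -c ">" trinity_out/Trinity.fasta
+{{< /highlight >}}
+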
+
 {{% notice tip %}}
 The Inchworm (step 1) and Chrysalis (step 2) steps can be memory intensive. A basic recommendation is to have **1GB of RAM per 1M ~76 base Illumina paired-end reads**.
-{{% /notice %}}
\ No newline at end of file
+{{% /notice %}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md
index bd734325..2635bdbd 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md
@@ -11,11 +11,12 @@ Velvet has lots of parameters that can be found in its [manual] (https://www.ebi
 
 Velvet supports multiple file formats: `fasta`, `fastq`, `fasta.gz`, `fastq.gz`, `sam`, `bam`, `eland`, `gerald`. Velvet also supports different read categories for different sequencing technologies and libraries, e.g. `short`, `shortPaired`, `short2`, `shortPaired2`, `long`, `longPaired`.
 
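+For illustration, a single `velveth` call can mix file formats and read categories (a sketch; the k-mer value and file names are placeholders):
+{{< highlight bash >}}
+$ velveth output_directory/ 31 -fastq -short input_reads.fastq -fasta -longPaired -separate input_reads_pair_1.fasta input_reads_pair_2.fasta
+{{< /highlight >}}
+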
+
 Each step of Velvet (**velveth** and **velvetg**) may be run as its own job. The following pages describe how to run Velvet in this manner on HCC and provide example submit scripts:
 {{% children %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+### Useful Information
 
 In order to test the Velvet (velvet/1.2) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet on Tusker are shown in the table below:
-{{< readfile file="/static/html/velvet.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/velvet.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_paired_end_data.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_paired_end_data.md
index dc7fe9ae..2f52cc87 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_paired_end_data.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_paired_end_data.md
@@ -4,7 +4,8 @@ description =  "How to run velvet with paired-end data on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Running Velvet with Paired-End long fastq data with k-mer=43, 8 CPUs and 100GB of RAM</span>
+
+## Running Velvet with Paired-End long fastq data with k-mer=43, 8 CPUs and 100GB of RAM
 
 The first step of running Velvet is to run **velveth**:
 {{% panel header="`velveth.submit`"%}}
@@ -25,6 +26,7 @@ velveth output_directory/ 43 -fastq -longPaired -separate input_reads_pair_1.fas
 {{< /highlight >}}
 {{% /panel %}}
 
+
 After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
 {{% panel header="`velvetg.submit`"%}}
 {{< highlight bash >}}
@@ -45,8 +47,10 @@ velvetg output_directory/ -min_contig_lgth 200
 {{% /panel %}}
 
 Both **velveth** and **velvetg** are multi-threaded.
-\\
-\\
+
+
+### Velvet Output
+
 {{% panel header="`Output directory after velveth`"%}}
 {{< highlight bash >}}
 $ ls output_directory/
@@ -61,4 +65,4 @@ contigs.fa  Graph  LastGraph  Log  PreGraph  Roadmaps  Sequences  stats.txt
 {{< /highlight >}}
 {{% /panel %}}
 
-The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
\ No newline at end of file
+The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_and_paired_end_data.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_and_paired_end_data.md
index 33a53174..fa355b11 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_and_paired_end_data.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_and_paired_end_data.md
@@ -4,7 +4,8 @@ description =  "How to run velvet with single-end and paired-end data on HCC res
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Running Velvet with Single-End and Paired-End short fasta data with k-mer=51, 8 CPUs and 100GB of RAM</span>
+
+## Running Velvet with Single-End and Paired-End short fasta data with k-mer=51, 8 CPUs and 100GB of RAM
 
 The first step of running Velvet is to run **velveth**:
 {{% panel header="`velveth.submit`"%}}
@@ -25,6 +26,7 @@ velveth output_directory/ 51 -fasta -short input_reads.fasta -fasta -shortPaired
 {{< /highlight >}}
 {{% /panel %}}
 
+
 After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
 {{% panel header="`velvetg.submit`"%}}
 {{< highlight bash >}}
@@ -45,8 +47,11 @@ velvetg output_directory/ -min_contig_lgth 200
 {{% /panel %}}
 
 Both **velveth** and **velvetg** are multi-threaded.
-\\
-\\
+
+
+
+### Velvet Output
+
 {{% panel header="`Output directory after velveth`"%}}
 {{< highlight bash >}}
 $ ls output_directory/
@@ -61,4 +66,4 @@ contigs.fa  Graph  LastGraph  Log  PreGraph  Roadmaps  Sequences  stats.txt
 {{< /highlight >}}
 {{% /panel %}}
 
-The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
\ No newline at end of file
+The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
diff --git a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_data.md b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_data.md
index 0545429d..82efa872 100644
--- a/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_data.md
+++ b/content/guides/running_applications/bioinformatics_tools/de_novo_assembly_tools/velvet/running_velvet_with_single_end_data.md
@@ -4,7 +4,8 @@ description =  "How to run velvet with single-end data on HCC resources"
 weight = "10"
 +++
 
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Running Velvet with Single-End short fasta data with k-mer=31, 8 CPUs and 100GB of RAM</span>
+
+## Running Velvet with Single-End short fasta data with k-mer=31, 8 CPUs and 100GB of RAM
 
 The first step of running Velvet is to run **velveth**:
 {{% panel header="`velveth.submit`"%}}
@@ -25,6 +26,7 @@ velveth output_directory/ 31 -fasta -short input_reads.fasta
 {{< /highlight >}}
 {{% /panel %}}
 
+
 After running **velveth**, the next step is to run **velvetg** on the `output_directory/` and files generated from **velveth**:
 {{% panel header="`velvetg.submit`"%}}
 {{< highlight bash >}}
@@ -45,8 +47,10 @@ velvetg output_directory/ -min_contig_lgth 200
 {{% /panel %}}
 
 Both **velveth** and **velvetg** are multi-threaded.
-\\
-\\
+
+
+### Velvet Output
+
 {{% panel header="`Output directory after velveth`"%}}
 {{< highlight bash >}}
 $ ls output_directory/
@@ -61,4 +65,4 @@ contigs.fa  Graph  LastGraph  Log  PreGraph  Roadmaps  Sequences  stats.txt
 {{< /highlight >}}
 {{% /panel %}}
 
-The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
\ No newline at end of file
+The output fasta file `contigs.fa` is the final Velvet output that contains the assembled contigs. More information about the output files is provided in the Velvet manual.
diff --git a/content/guides/running_applications/bioinformatics_tools/downloading_sra_data_from_ncbi.md b/content/guides/running_applications/bioinformatics_tools/downloading_sra_data_from_ncbi.md
index 60186ff0..ce1ac982 100644
--- a/content/guides/running_applications/bioinformatics_tools/downloading_sra_data_from_ncbi.md
+++ b/content/guides/running_applications/bioinformatics_tools/downloading_sra_data_from_ncbi.md
@@ -4,6 +4,7 @@ description = "How to download data from NCBI"
 weight = "52"
 +++
 
+
 One way to download high-volume data from NCBI is to use command-line
 utilities, such as **wget**, **ftp** or the Aspera Connect **ascp**
 plugin. The Aspera Connect plugin is a commonly used high-performance transfer
@@ -13,30 +14,30 @@ This plugin is available on our clusters as a module. In order to use it, load t
 {{< highlight bash >}}
 $ module load aspera-cli
 {{< /highlight >}}
-\\
+
+
 The basic usage of the Aspera plugin is
 {{< highlight bash >}}
 $ ascp -i $ASPERA_PUBLIC_KEY -k 1 -T -l <max_download_rate_in_Mbps>m anonftp@ftp.ncbi.nlm.nih.gov:/<files_to_transfer> <local_work_output_directory>
 {{< /highlight >}}
 where **-k 1** enables resuming partial transfers, **-T** disables encryption for maximum throughput, and **-l** sets the maximum transfer rate.
-\\
-\\
-\\
-**\<files_to_transfer\>** mentioned in the basic usage of Aspera
+
+
+**\<files_to_transfer\>** mentioned in the basic usage of the Aspera
 plugin has a specifically defined pattern that needs to be followed:
 {{< highlight bash >}}
 <files_to_transfer> = /sra/sra-instant/reads/ByRun/sra/SRR|ERR|DRR/<first_6_characters_of_accession>/<accession>/<accession>.sra
 {{< /highlight >}}
 where **SRR\|ERR\|DRR** should be either **SRR**, **ERR** or **DRR** and should match the prefix of the target **.sra** file.
-\\
-\\
-\\
+
+
 More **ascp** options can be seen by using:
 {{< highlight bash >}}
 $ ascp --help
 {{< /highlight >}}
-\\
+
+
 For example, if you want to download the **SRR304976** file from NCBI to your $WORK **data/** directory with a download speed of **1000 Mbps**, you should use the following command:
 {{< highlight bash >}}
 $ ascp -i $ASPERA_PUBLIC_KEY -k 1 -T -l 1000m anonftp@ftp.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR304/SRR304976/SRR304976.sra /work/[groupname]/[username]/data/
-{{< /highlight >}}
\ No newline at end of file
+{{< /highlight >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/cutadapt.md b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/cutadapt.md
index ae0d820b..779a7e64 100644
--- a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/cutadapt.md
+++ b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/cutadapt.md
@@ -4,22 +4,24 @@ description =  "How to run Cutadapt on HCC resources"
 weight = "10"
 +++
 
+
 [Cutadapt] (https://cutadapt.readthedocs.io/en/stable/index.html) is a tool for removing adapter sequences from DNA sequencing data. Although most adapters are located at the 3' end of the sequencing read, Cutadapt allows removal of multiple adapters from both the 3' and 5' ends.
 
+
 The basic usage of Cutadapt is:
 {{< highlight bash >}}
 $ cutadapt [-a|-b|-g] <adapter_sequence> input_reads.[fasta|fastq] > output_reads.[fasta|fastq]
 {{< /highlight >}}
+where **&lt;adapter_sequence&gt;** is the nucleotide sequence of the actual adapter, **input_reads.[fasta|fastq]** is the input file with sequencing data in fasta/fastq format, and **output_reads.[fasta|fastq]** is the final trimmed file in fasta/fastq format.
 
-where **&lt;adapter_sequence&gt;** is the nucleotide sequence of the actual adapter, **input_reads.[fasta|fastq]** is the input file with sequencing data in fasta/fastq format, and respectively, **output_reads.[fasta|fastq]** is the final trimmed file in fasta/fastq format.
-\\
 The option **-a** allows removal of adapters from the 3' end of the sequencing read. The option **-b** removes adapters ligated to the 5' or 3' end. The option **-g** removes adapter sequences from the 5' end. These options can be used multiple times for different adapters.
 
 More information about the Cutadapt options can be found by typing:
 {{< highlight bash >}}
 $ cutadapt --help
 {{< /highlight >}}
-\\
+
+
 Simple Cutadapt script that trims the adapter sequences **AGGCACACAGGG** and **TGAGACACGCA** from the 3' end and **AACCGGTT** from the 5' end of a single-end fasta input file is shown below:
 {{% panel header="`cutadapt.submit`"%}}
 {{< highlight bash >}}
@@ -39,15 +41,15 @@ cutadapt -a AGGCACACAGGG -a TGAGACACGCA -g AACCGGTT input_reads.fasta > output_r
 {{% /panel %}}
 
 Cutadapt is a single-threaded program, and therefore `#SBATCH --nodes=1` and `#SBATCH --ntasks-per-node=1` are used.
-\\
-\\
-\\
+
+
 Cutadapt also handles paired-end data, where each file of a read pair is trimmed in a separate pass:
 {{< highlight bash >}}
 $ cutadapt -a ADAPTER_PAIR_1 input_reads_pair_1.fastq > output_reads_pair_1.fastq
 $ cutadapt -a ADAPTER_PAIR_2 input_reads_pair_2.fastq > output_reads_pair_2.fastq
 {{< /highlight >}}
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Cutadapt Output</span>
 
-Beside the fasta/fastq file of reads with removed adapter sequences, Cutadapt also outputs useful statistics per adapter sequence.
\ No newline at end of file
+
+### Cutadapt Output
+
+Besides the fasta/fastq file of reads with the adapter sequences removed, Cutadapt also outputs useful statistics per adapter sequence.
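+One way to keep this report is to redirect standard error to a file (a sketch; it is assumed here that the report is printed on standard error when the trimmed reads are written to standard output):
+{{< highlight bash >}}
+$ cutadapt -a AGGCACACAGGG input_reads.fasta > output_reads.fasta 2> cutadapt_report.txt
+{{< /highlight >}}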
diff --git a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/prinseq.md b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/prinseq.md
index 70dde83f..26a236f4 100644
--- a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/prinseq.md
+++ b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/prinseq.md
@@ -4,14 +4,16 @@ description =  "How to run PRINSEQ on HCC resources"
 weight = "10"
 +++
 
+
 [PRINSEQ (PReprocessing and INformation of SEQuence data)] (http://prinseq.sourceforge.net/) is a tool used for filtering, formatting or trimming genome and metagenomic sequence data in fasta/fastq format. Moreover, PRINSEQ generates summary statistics of sequence and quality data.
 
 More information about the PRINSEQ program can be shown with:
 {{< highlight bash >}}
 $ prinseq-lite.pl --help
 {{< /highlight >}}
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">PRINSEQ with single-end fasta data</span>
+
+
+## PRINSEQ with single-end fasta data
 
 The basic usage of PRINSEQ for single-end data is:
 {{< highlight bash >}}
@@ -21,6 +23,7 @@ where **input_reads.[fasta|fastq]** is an input file of sequence data in fasta
 
 The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qual), **3** (fastq), **4** (fastq and input fasta), and **5** (fastq, fasta and qual).
 
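+Filtering and trimming options can be combined in a single call; for example, a sketch that keeps only reads of at least 50 bp with a mean quality of at least 20 (the thresholds are illustrative assumptions):
+{{< highlight bash >}}
+$ prinseq-lite.pl -fastq input_reads.fastq -min_len 50 -min_qual_mean 20 -out_format 3
+{{< /highlight >}}
+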
+
 Simple PRINSEQ SLURM script for single-end fasta data and fasta output format is shown below:
 {{% panel header="`prinseq_single_end.submit`"%}}
 {{< highlight bash >}}
@@ -40,10 +43,9 @@ prinseq-lite.pl -fasta input_reads.fasta -out_format 1
 {{% /panel %}}
 
 PRINSEQ is a single-threaded program, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to **1**.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">PRINSEQ for paired-end fastq data</span>
+
+
+## PRINSEQ for paired-end fastq data
 
 The basic usage of PRINSEQ for paired-end data is:
 {{< highlight bash >}}
@@ -53,6 +55,7 @@ where **input_reads_pair_1.[fasta|fastq]** and **input_reads_pair_2.[fasta|fas
 
 The output format (`-out_format`) can be **1** (fasta only), **2** (fasta and qual), **3** (fastq), **4** (fastq and input fasta), and **5** (fastq, fasta and qual).
 
+
 Simple PRINSEQ SLURM script for paired-end fastq data and fastq output format is shown below:
 {{% panel header="`prinseq_paired_end.submit`"%}}
 {{< highlight bash >}}
@@ -73,7 +76,7 @@ prinseq-lite.pl -fastq input_reads_pair_1.fastq -fastq2 input_reads_pair_2.fastq
 
 PRINSEQ is a single-threaded program, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to **1**.
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">PRINSEQ Output</span>
 
-PRINSEQ gives statistics about the input and filtered sequences, and also outputs files of single-end or paired-end sequences filtered by specified parameters.
\ No newline at end of file
+### PRINSEQ Output
+
+PRINSEQ reports statistics about the input and filtered sequences, and outputs files of single-end or paired-end sequences filtered by the specified parameters.
diff --git a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/scythe.md b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/scythe.md
index 55240958..4cd1db59 100644
--- a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/scythe.md
+++ b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/scythe.md
@@ -4,8 +4,10 @@ description =  "How to run Scythe on HCC resources"
 weight = "10"
 +++
 
+
 [Scythe] (https://github.com/vsbuffalo/scythe) is a 3' end adapter trimmer that uses a Naive Bayesian approach to classify contaminant substrings in sequence reads. 3' ends often include poor-quality bases, which need to be removed prior to quality-based trimming, mapping, assembly, and further analysis.
 
+
 The basic usage of Scythe is:
 {{< highlight bash >}}
 $ scythe -a adapter_file.fasta input_reads.fastq -o output_reads.fastq
@@ -18,7 +20,8 @@ More information about Scythe can found by typing:
 {{< highlight bash >}}
 $ scythe --help
 {{< /highlight >}}
-\\
+
+
 Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` for Tusker is shown below:
 {{% panel header="`scythe.submit`"%}}
 {{< highlight bash >}}
@@ -40,16 +43,14 @@ scythe -a ${SCYTHE_HOME}/illumina_adapters.fa input_reads.fastq -o output_reads.
 Scythe is a single-threaded program, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to **1**.
 
 The two adapter sequences provided by Scythe are stored in **$SCYTHE_HOME**. Hence, to access the illumina adapter file use: `$SCYTHE_HOME/illumina_adapters.fa`, and to access the TruSeq file use: `$SCYTHE_HOME/truseq_adapters.fasta`.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Scythe Output</span>
+
+
+### Scythe Output
 
 Scythe returns a fastq file of reads with the adapter sequences removed.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+
+### Useful Information
 
 In order to test the Scythe (scythe/0.991) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe on Tusker are shown in the table below:
-{{< readfile file="/static/html/scythe.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/scythe.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/sickle.md b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/sickle.md
index 63d024da..eb6cc4e7 100644
--- a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/sickle.md
+++ b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/sickle.md
@@ -7,25 +7,24 @@ weight = "10"
 
 [Sickle] (https://github.com/najoshi/sickle) is a windowed adaptive trimming tool for fastq files. Besides the sliding window, Sickle uses quality and length thresholds to determine and trim low-quality bases at both the 3' and 5' ends of the reads.
 
+
 Information about the Sickle command-line options can be shown by typing:
 {{< highlight bash >}}
 $ sickle --help
 {{< /highlight >}}
 
 Sickle is a single-threaded program.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Sickle for single-end reads</span>
+
+
+## Sickle for single-end reads
 
 The basic usage of Sickle for single-end reads is:
 {{< highlight bash >}}
 $ sickle se -t [solexa|illumina|sanger] -f input_reads.fastq -o output_reads_trimmed.fastq
 {{< /highlight >}}
 where **input_reads.fastq** is the input file of sequencing data in fastq format, and **output_reads_trimmed.fastq** is the trimmed output file. Another required option in `sickle se` is **-t**, which, depending on the input data, accepts one of the following quality types: **solexa**, **illumina**, **sanger**.
-\\
-\\
-\\
+
+
 Simple SLURM Sickle script for Illumina single-end reads input file `input_reads.fastq` is shown below:
 {{% panel header="`sickle_single.submit`"%}}
 {{< highlight bash >}}
@@ -44,17 +43,16 @@ sickle se -t illumina -f input_reads.fastq -o output_reads_trimmed.fastq
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Sickle for paired-end reads</span>
+
+## Sickle for paired-end reads
 
 The basic usage of Sickle for paired-end reads is:
 {{< highlight bash >}}
 $ sickle pe -t [solexa|illumina|sanger] -f input_reads_pair_1.fastq -r input_reads_pair_2.fastq -o output_reads_trimmed_pair_1.fastq -p output_reads_trimmed_pair_2.fastq -s output_reads_trimmed_single.fastq
 {{< /highlight >}}
 where **input_reads_pair_1.fastq** and **input_reads_pair_2.fastq** are the input fastq files of the sequencing data, and **output_reads_trimmed_pair_1.fastq** and **output_reads_trimmed_pair_2.fastq** are the corresponding trimmed output files. **sickle pe** also prints an **output_reads_trimmed_single.fastq** file that contains reads that passed the filter in one mate of a pair, but not in the other. Sickle supports three quality types, **solexa**, **illumina** and **sanger**, and the type must be specified using the **-t** option.
-\\
-\\
-\\
+
+
 Simple SLURM Sickle script for Sanger paired-end reads input files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq` is shown below:
 {{% panel header="`sickle_paired.submit`"%}}
 {{< highlight bash >}}
@@ -73,14 +71,13 @@ sickle pe -t sanger -f input_reads_pair_1.fastq -r input_reads_pair_2.fastq -o o
 {{< /highlight >}}
 {{% /panel %}}
 
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Sickle Output</span>
+
+### Sickle Output
 
 Sickle returns a fastq file of reads with low-quality bases trimmed from both the 3' and 5' ends. Sickle reduces the sequence length, while the number of sequences in the output file stays the same.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+
+### Useful Information
 
 In order to test the Sickle (sickle/1.210) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle on Tusker are shown in the table below:
-{{< readfile file="/static/html/sickle.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/sickle.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/tagcleaner.md b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/tagcleaner.md
index b1c99142..05e1f358 100644
--- a/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/tagcleaner.md
+++ b/content/guides/running_applications/bioinformatics_tools/pre_processing_tools/tagcleaner.md
@@ -4,14 +4,15 @@ description =  "How to run TagCleaner on HCC resources"
 weight = "10"
 +++
 
+
 [TagCleaner] (http://tagcleaner.sourceforge.net/) is a tool used to automatically detect and remove tag sequences from genomic and metagenomic sequence data. These additional tag sequences can contain deletions or insertions due to sequencing limitations.
 
 The basic usage of TagCleaner is:
 {{< highlight bash >}}
 $ tagcleaner.pl [-fasta|-fastq] input_reads.[fasta|fastq] [-predict|-tag3|-tag5] [options]
 {{< /highlight >}}
-where **input_reads.[fasta|fastq]** is an input file of sequence data in fasta/fastq format, and **options** are additional parameters that can be found in the [TagCleaner
-manual] (http://tagcleaner.sourceforge.net/manual.html).
+where **input_reads.[fasta|fastq]** is an input file of sequence data in fasta/fastq format, and **options** are additional parameters that can be found in the [TagCleaner manual] (http://tagcleaner.sourceforge.net/manual.html).
+
 
 A required parameter for TagCleaner is the tag sequence. If the tag sequence is unknown, then the **-predict** option will provide the predicted tag sequence to the user. If the tag sequence is known and is found at the 3' end of the read, then the option **-tag3 &lt;tag_sequence&gt;** is used. If the tag sequence is known and is found at the 5' end of the read, then the option **-tag5 &lt;tag_sequence&gt;** is used.
 
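+For example, a sketch of predicting an unknown tag sequence in a fasta file:
+{{< highlight bash >}}
+$ tagcleaner.pl -fasta input_reads.fasta -predict
+{{< /highlight >}}
+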
@@ -19,7 +20,8 @@ More information about the TagCleaner options can be found by using:
 {{< highlight bash >}}
 $ tagcleaner.pl --help
 {{< /highlight >}}
-\\
+
+
 Simple TagCleaner script for removing known 3' and 5' tag sequences (`NNNCCAAACACACCCAACACA` and `TGTGTTGGGTGTGTTTGGNNN` respectively) is shown below:
 {{% panel header="`tagcleaner.submit`"%}}
 {{< highlight bash >}}
@@ -39,9 +41,8 @@ tagcleaner.pl -fasta input_reads.fasta -tag3 NNNCCAAACACACCCAACACA -tag5 TGTGTTG
 {{% /panel %}}
 
 TagCleaner is a single-threaded program, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to **1**.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">TagCleaner Output</span>
 
-TagCleaner returns fasta or fastq file of reads with removed tag sequences.
\ No newline at end of file
+
+### TagCleaner Output
+
+TagCleaner returns a fasta or fastq file of reads with the tag sequences removed.
diff --git a/content/guides/running_applications/bioinformatics_tools/qiime.md b/content/guides/running_applications/bioinformatics_tools/qiime.md
index b9b818c2..dfec13f0 100644
--- a/content/guides/running_applications/bioinformatics_tools/qiime.md
+++ b/content/guides/running_applications/bioinformatics_tools/qiime.md
@@ -4,12 +4,13 @@ description = "How to run QIIME jobs on HCC machines"
 weight = "52"
 +++
 
+
 QIIME (Quantitative Insights Into Microbial Ecology) (http://qiime.org) is a bioinformatics software package for conducting microbial community analysis. It is used to analyze raw DNA sequencing data generated from various sequencing technologies (Sanger, Roche/454, Illumina) from fungal, viral, bacterial and archaeal communities. As part of its analysis, QIIME produces extensive statistics, publication-quality graphics and different options for viewing the outputs.
 
+
 QIIME consists of a number of scripts that have different functionalities. Some of these include demultiplexing and quality filtering, OTU picking, phylogenetic reconstruction, taxonomic assignment and diversity analyses and visualizations.
-\\
-\\
-\\
+
+
 Some common QIIME scripts are:
 
 - validate_mapping_file.py
@@ -29,7 +30,7 @@ Some common QIIME scripts are:
 - summarize_taxa.py
 - group_significance.py
 
-\\
+
 Sample QIIME submit script to run **pick_open_reference_otus.py** is:
 
 {{% panel header="`qiime.submit`"%}}
@@ -52,4 +53,4 @@ pick_open_reference_otus.py --parallel --jobs_to_start $SLURM_CPUS_ON_NODE -i /w
 
 To run QIIME with this script, update the input sequences option (**-i**) and the output directory path (**-o**).
 
-In the example above, we use the variable **${SLURM_JOB_ID}** as part of the output directory. This ensures each QIIME run will have a unique output directory.
\ No newline at end of file
+In the example above, we use the variable **${SLURM_JOB_ID}** as part of the output directory. This ensures each QIIME run will have a unique output directory.
diff --git a/content/guides/running_applications/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md b/content/guides/running_applications/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md
index 33d19a0f..cfde2a04 100644
--- a/content/guides/running_applications/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md
+++ b/content/guides/running_applications/bioinformatics_tools/reference_based_assembly_tools/cufflinks.md
@@ -17,7 +17,8 @@ More advanced Cufflinks options can be found in [the manual] (http://cole-trapne
 {{< highlight bash >}}
 $ cufflinks -h
 {{< /highlight >}}
-\\
+
+
 An example of how to run Cufflinks on Crane with an alignment file in SAM format, output directory `cufflinks_output` and 8 CPUs is shown below:
 {{% panel header="`cufflinks.submit`"%}}
 {{< highlight bash >}}
@@ -35,16 +36,18 @@ module load cufflinks/2.2
 cufflinks input_alignments.sam -o cufflinks_output/ -p ${SLURM_NTASKS_PER_NODE}
 {{< /highlight >}}
 {{% /panel %}}
-\\
+
+
+### Cufflinks Output
+
+The program **cufflinks** produces a number of files in its predefined output directory `cufflinks_output/`. Some of the generated files are listed below, followed by a quick check of the assembly:
 
 - **transcripts.gtf**: The GTF file contains Cufflinks' assembled isoforms where there is one GTF record per row, and each record represents either a transcript or an exon within a transcript
 - **isoforms.fpkm_tracking**: This file contains the estimated isoform-level expression values in the generic FPKM Tracking Format
 - **genes.fpkm_tracking**: This file contains the estimated gene-level expression values in the generic FPKM Tracking Format
 
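+A quick, optional way to count the assembled transcript records (a sketch, assuming the output directory above):
+{{< highlight bash >}}
+$ awk '$3 == "transcript"' cufflinks_output/transcripts.gtf | wc -l
+{{< /highlight >}}
+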
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Available commands</span>
+
+### Available commands
 
 Besides **cufflinks**, the Cufflinks package includes the following programs:
@@ -80,4 +83,4 @@ Example of **cuffdiff** for the annotated transcripts for the new genome, `new_a
 $ cuffdiff new_alignments.gtf sample_1.sam sample_2.sam sample_3.sam -p 8
 {{< /highlight >}}
 
-**cuffdiff** prints multiple output files, such as `FPKM tracking files`, `count tracking files`, `read group tracking files`, `differential expression tests`, `differential splicing tests`, `differential coding output`, `differential promoter use`, `read group info`, and `run info`.
\ No newline at end of file
+**cuffdiff** prints multiple output files, such as `FPKM tracking files`, `count tracking files`, `read group tracking files`, `differential expression tests`, `differential splicing tests`, `differential coding output`, `differential promoter use`, `read group info`, and `run info`.
diff --git a/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md b/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md
index 3e215c32..c3f17b64 100644
--- a/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md
+++ b/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cap3.md
@@ -15,7 +15,8 @@ where `input_reads.fasta` is an input file of sequence reads in fasta format, an
 {{< highlight bash >}}
 $ cap3
 {{< /highlight >}}
-\\
+
+
 An example of a basic CAP3 SLURM script on Crane is shown below:
 {{% panel header="`cap3.submit`"%}}
@@ -36,18 +37,16 @@ cap3 input_reads.fasta > output.txt
 {{% /panel %}}
 
 CAP3 is a single-threaded program, and therefore both `#SBATCH --nodes` and `#SBATCH --ntasks-per-node` are set to `1`.
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">CAP3 Output</span>
+
+
+### CAP3 Output
 
 CAP3 returns several output files: `input_reads.fasta.cap.singlets`, `input_reads.fasta.cap.contigs`, `input_reads.fasta.cap.contigs.links`, `input_reads.fasta.cap.qual`, `input_reads.fasta.cap.ace` and `input_reads.fasta.cap.info`.
 
 The consensus fasta sequences are saved in the file `input_reads.fasta.cap.contigs`, while the reads that are not used in the assembly are stored in the fasta file `input_reads.fasta.cap.singlets`.
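+
+A quick way to see how many contigs and singlets were produced (a sketch):
+{{< highlight bash >}}
+$ grep -c ">" input_reads.fasta.cap.contigs
+$ grep -c ">" input_reads.fasta.cap.singlets
+{{< /highlight >}}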
-\\
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">Useful Information</span>
+
+
+### Useful Information
 
 In order to test the CAP3 (cap3/122107) performance on Crane, we created separately three nucleotide datasets, `small.fasta`, `medium.fasta` and `large.fasta`. Some statistics about the input datasets and the time and memory resources used by CAP3 on Crane are shown in the table below:
-{{< readfile file="/static/html/cap3.html" >}}
\ No newline at end of file
+{{< readfile file="/static/html/cap3.html" >}}
diff --git a/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md b/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md
index 73e0b185..b752007b 100644
--- a/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md
+++ b/content/guides/running_applications/bioinformatics_tools/removing_detecting_redundant_sequences/cd_hit.md
@@ -3,11 +3,11 @@ title = "CD-HIT"
 description =  "How to run CD-HIT on HCC resources"
 weight = "10"
 +++
- 
+
 
 CD-HIT (Cluster Database at High Identity with Tolerance), http://weizhong-lab.ucsd.edu/cd-hit, is a program for clustering and comparing nucleotide or protein sequences. CD-HIT is very fast and can handle large DNA/RNA datasets.
 
-Some of the most frequently used executables from the CD-HIT package are: CD-HIT, CD-HIT-2D, CD-HIT-EST, CD-HIT-EST-2D, CD-HIT-454, CD-HIT-PARA, PSI-CD-HIT, CD-HIT-OTU, CD-HIT-LAP and CD-HIT-DUP:
+Some of the most frequently used executables from the CD-HIT package are: CD-HIT, CD-HIT-2D, CD-HIT-EST, CD-HIT-EST-2D, CD-HIT-454, CD-HIT-PARA, PSI-CD-HIT, CD-HIT-OTU, CD-HIT-LAP and CD-HIT-DUP:
 
 - CD-HIT or CD-HIT-EST clusters similar proteins or DNAs into clusters that meet a defined similarity threshold
 - CD-HIT-2D (CD-HIT-EST-2D) compares 2 datasets and identifies the sequences in db2 that are similar to db1 above a given threshold
@@ -18,9 +18,8 @@ Some of the most frequently used executables from the CD-HIT package are: CD-HI
 
 A detailed overview of the whole CD-HIT package and its executables can be found in the [CD-HIT user's guide] (http://weizhongli-lab.org/lab-wiki/doku.php?id=cd-hit-user-guide).
-\\
-\\
-\\
+
+
 The basic usage of CD-HIT is:
 {{< highlight bash >}}
 $ cd-hit -i input_reads.fasta -o output [options]
@@ -31,9 +30,8 @@ $ cd-hit
 {{< /highlight >}}
 
 CD-HIT is a multi-threaded program, and therefore using multiple threads is recommended. By setting the CD-HIT parameter `-T 0`, all CPUs defined in the SLURM script will be used. Setting the parameter `-M 0` allows unlimited usage of the available memory.
-\\
-\\
-\\
+
+
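+For example, a sketch that clusters sequences at a 90% identity threshold (the `-c` value is an illustrative assumption):
+{{< highlight bash >}}
+$ cd-hit -i input_reads.fasta -o output -c 0.9 -M 0 -T 0
+{{< /highlight >}}
+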
 Simple SLURM CD-HIT script for Crane with 8 CPUs is shown below:
 {{% panel header="`cd-hit.submit`"%}}
 {{< highlight bash >}}
@@ -51,8 +49,8 @@ module load cd-hit/4.6
 cd-hit -i input_reads.fasta -o output -M 0 -T 0
 {{< /highlight >}}
 {{% /panel %}}
-\\
-\\
-<span style="color: rgb(0,0,0);font-size: 20.0px;line-height: 1.5;">CD-HIT Output</span>
 
-CD-HIT prints out 2 files: `output` and `output.clstr`. **`output`** contains the final clustered non-redundant sequences in fasta format, while **`output.clstr`** has an information about the clusters with its associated sequences.
\ No newline at end of file
+
+### CD-HIT Output
+
+CD-HIT prints out 2 files: `output` and `output.clstr`. **`output`** contains the final clustered non-redundant sequences in fasta format, while **`output.clstr`** contains information about the clusters and their associated sequences.
diff --git a/content/quickstarts/setting_up_and_using_duo.md b/content/quickstarts/setting_up_and_using_duo.md
index cd93133b..c4ded731 100644
--- a/content/quickstarts/setting_up_and_using_duo.md
+++ b/content/quickstarts/setting_up_and_using_duo.md
@@ -38,12 +38,12 @@ with a time you will be available.
 
 ### YubiKeys
 
-YubiKey devices are currently a one-time cost of $22 from HCC, or can be
+YubiKey devices are currently a one-time cost of around $25 from HCC, or can be
 purchased from Yubico and added in-person at either HCC location.
 Purchasing a YubiKey from HCC must be done via a University cost object
 transfer (HCC cannot accept cash or credit cards). Please bring the cost
-object number with you if possible. YubiKeys are available from the
-Union Bookstore at UNL for a one-time cost of $25 each. Note that
+object number with you if possible. YubiKeys are also available from the
+Husker Tech store in the UNL City Union. Note that
 YubiKeys are configured for HCC's Duo, and not for general YubiCloud or
 U2F use.
 
@@ -139,8 +139,7 @@ entered manually to complete the login.
 YubiKeys are USB hardware tokens that generate passcodes when pressed.
 They appear as a USB keyboard to the computer they are connected to, and
 so require no driver software with almost all modern operating systems.
-YubiKeys are available from the Union Bookstore at UNL for a one-time
-cost of $25 each. Users may also purchase them directly from
+YubiKeys are available from the Husker Tech store at UNL. Users may also purchase them directly from
 [Yubico](https://store.yubico.com) if desired; this does require stopping 
 by either HCC location in person to have the YubiKey added to the user's account. 
 For your convenience, HCC often carries some YubiKeys as well; these may only be purchased via a
-- 
GitLab