diff --git a/content/applications/app_specific/running_postgres.md b/content/applications/app_specific/running_postgres.md
index 77fd824e539a3e2c17f6ae5ad3ad53b9516ed023..48abee2ef0393cbf6ce57369db2f45d77f4711fd 100644
--- a/content/applications/app_specific/running_postgres.md
+++ b/content/applications/app_specific/running_postgres.md
@@ -5,16 +5,25 @@ description = "How to run a PostgreSQL server within a SLURM job"
 
 This page describes how to run a PostgreSQL server instance with a SLURM job
 on HCC resources. Many software packages require the use of an SQL type database
-as part of their workflows. This example shows how to start an PostgreSQL server
+as part of their workflows. This example shows how to start a PostgreSQL server
 inside of a SLURM job on HCC resources. The database will be available as long as
 the SLURM job containing it is running, and other jobs may then be submitted to
-connect to and use it. The database files are stored on the clusters filesystem
+connect to and use it. The database files are stored on the clusters' filesystem
 (here `$COMMON` is used), so that even when the containing SLURM job ends the data
 is persistent. That is, you can submit a subsequent identical PostgreSQL server job
-and data that was previously imported in to the database will still be there.
+and data that was previously imported into the database will still be there.
 
 {{% notice warning %}}
-One **one** instance of the database server job can run at a time. Submitting multiple
+Only **one** instance of the database server job can run at a time. Submitting multiple
 server jobs simultaneously will result in undefined behavior and database corruption.
 {{% /notice %}}
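+
+To reduce the chance of accidentally starting a second server, one option (a sketch;
+the job name is whatever was chosen in the server submit script) is to check for an
+existing server job before submitting another one:
+
+{{< highlight batch >}}
+# Replace <server job name> with the job name used in the server submit script.
+squeue -u $USER --name=<server job name>
+{{< /highlight >}}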
 
@@ -114,9 +114,12 @@ values before submitting subsequent analysis jobs.
 ### Submitting jobs that require PostgreSQL
 
 The simplest way to manage jobs that need the database is to manually submit them after the PostgreSQL SLURM job
-has started. However, this is not terribly convenient. However, this is not terribly convenient. A better way is
-to use the dependency feature of SLURM. Submit the PostgreSQL job first and make a note of the job id. In the
-submit script(s) of the analysis jobs, add the line
+has started. However, this is not terribly convenient. A better way is to use the dependency feature of SLURM.
+Submit the PostgreSQL job first and make a note of the job id. In the submit script(s) of the analysis jobs, add the line
 
 {{< highlight batch >}}
 #SBATCH --dependency=after:<job id>
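+
+# A possible alternative (sketch; "analysis.submit" is a placeholder file name):
+# the same dependency can instead be passed on the command line at submission time:
+#   sbatch --dependency=after:<job id> analysis.submit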