From ca36729b10901f83fb1810f593405a24ec7718e2 Mon Sep 17 00:00:00 2001
From: Natasha Pavlovikj <natasha.pavlovikj@huskers.unl.edu>
Date: Wed, 8 Jul 2020 17:45:16 +0000
Subject: [PATCH] Remove duplicate sentence

---
 content/applications/app_specific/running_postgres.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/content/applications/app_specific/running_postgres.md b/content/applications/app_specific/running_postgres.md
index 77fd824e..48abee2e 100644
--- a/content/applications/app_specific/running_postgres.md
+++ b/content/applications/app_specific/running_postgres.md
@@ -5,16 +5,16 @@ description = "How to run a PostgreSQL server within a SLURM job"
 
 This page describes how to run a PostgreSQL server instance with a SLURM job
 on HCC resources. Many software packages require the use of an SQL type database
-as part of their workflows. This example shows how to start an PostgreSQL server
+as part of their workflows. This example shows how to start a PostgreSQL server
 inside of a SLURM job on HCC resources. The database will be available as long
 as the SLURM job containing it is running, and other jobs may then be submitted to
-connect to and use it. The database files are stored on the clusters filesystem
+connect to and use it. The database files are stored on the clusters' filesystem
 (here `$COMMON` is used), so that even when the containing SLURM job ends the
 data is persistent. That is, you can submit a subsequent identical PostgreSQL
 server job and data that was previously imported in to the database will still be
 there.
 
 {{% notice warning %}}
-One **one** instance of the database server job can run at a time. Submitting multiple
+Only **one** instance of the database server job can run at a time. Submitting multiple
 server jobs simultaneously will result in undefined behavior and database corruption.
 {{% /notice %}}
@@ -114,9 +114,8 @@ values before submitting subsequent analysis jobs.
 ### Submitting jobs that require PostgreSQL
 
 The simplest way to manage jobs that need the database is to manually submit them after the PostgreSQL SLURM job
-has started. However, this is not terribly convenient. However, this is not terribly convenient. A better way is
-to use the dependency feature of SLURM. Submit the PostgreSQL job first and make a note of the job id. In the
-submit script(s) of the analysis jobs, add the line
+has started. However, this is not terribly convenient. A better way is to use the dependency feature of SLURM.
+Submit the PostgreSQL job first and make a note of the job id. In the submit script(s) of the analysis jobs, add the line
 
 {{< highlight batch >}}
 #SBATCH --dependency=after:<job id>
-- 
GitLab
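The dependency workflow that the patched text describes (submit the PostgreSQL job, note its job id, then submit analysis jobs that depend on it) can be sketched as a small wrapper. This sketch is not part of the patch: the submit-script names `postgres.submit` and `analysis.submit` and the helper function are hypothetical; `sbatch --parsable` is a real SLURM flag that prints only the job id, which avoids noting it down by hand.

```shell
#!/bin/bash
# Hypothetical sketch: chain an analysis job to the PostgreSQL server job
# on the command line, instead of editing each analysis submit script.

submit_with_dependency() {
    local db_script="$1"        # e.g. postgres.submit (assumed to exist)
    local analysis_script="$2"  # e.g. analysis.submit (assumed to exist)
    local db_job_id

    # --parsable makes sbatch print only the job id
    db_job_id=$(sbatch --parsable "$db_script")

    # after:<id> starts the analysis job once the DB job has started
    sbatch --dependency=after:"$db_job_id" "$analysis_script"
}

# Usage (on a cluster login node):
#   submit_with_dependency postgres.submit analysis.submit
```

Note that only one server job may run at a time, per the warning in the patch, so the wrapper should be invoked once per analysis batch, not once per analysis job.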