Holland Computing Center / HCC docs

Commit 039ab18b, authored 4 years ago by Adam Caprez
Merge branch 'patch-4' into 'master'

Remove duplicate sentence. See merge request !230

Parents: f759816a, ca36729b
Showing 1 changed file, with 5 additions and 6 deletions:

content/applications/app_specific/running_postgres.md (+5 −6)
@@ -5,16 +5,16 @@ description = "How to run a PostgreSQL server within a SLURM job"
 This page describes how to run a PostgreSQL server instance with a SLURM job
 on HCC resources. Many software packages require the use of an SQL type database
-as part of their workflows. This example shows how to start an PostgreSQL server
+as part of their workflows. This example shows how to start a PostgreSQL server
 inside of a SLURM job on HCC resources. The database will be available as long as
 the SLURM job containing it is running, and other jobs may then be submitted to
-connect to and use it. The database files are stored on the clusters filesystem
+connect to and use it. The database files are stored on the clusters' filesystem
 (here `$COMMON` is used), so that even when the containing SLURM job ends the data
 is persistent. That is, you can submit a subsequent identical PostgreSQL server job
 and data that was previously imported in to the database will still be there.

 {{% notice warning %}}
-One **one** instance of the database server job can run at a time. Submitting multiple
+Only **one** instance of the database server job can run at a time. Submitting multiple
 server jobs simultaneously will result in undefined behavior and database corruption.
 {{% /notice %}}
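As a minimal sketch of the pattern the edited page documents, a server job might look like the submit script below. This script is an illustration only, not taken from the commit; the directive values, the `pgdata` path, and the script layout are assumptions. The key points from the text are that the data directory lives on `$COMMON` (so it persists across jobs) and the server runs for the lifetime of the SLURM job.

```shell
#!/bin/bash
#SBATCH --job-name=postgres-server
#SBATCH --time=24:00:00
# Hypothetical sketch -- values and paths are assumptions, not the
# actual script from the HCC docs page this commit edits.

# Keep the data directory on $COMMON so it survives the end of this job.
export PGDATA=$COMMON/pgdata

# Initialize the data directory only on first use; subsequent server jobs
# reuse the persistent files and previously imported data.
[ -d "$PGDATA" ] || initdb -D "$PGDATA"

# Run the server in the foreground so the SLURM job stays alive while
# the database is available to other jobs.
exec postgres -D "$PGDATA"
```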
@@ -114,9 +114,8 @@ values before submitting subsequent analysis jobs.
 ### Submitting jobs that require PostgreSQL
 The simplest way to manage jobs that need the database is to manually submit them after the PostgreSQL SLURM job
-has started. However, this is not terribly convenient. However, this is not terribly convenient. A better way is
-to use the dependency feature of SLURM. Submit the PostgreSQL job first and make a note of the job id. In the
-submit script(s) of the analysis jobs, add the line
+has started. However, this is not terribly convenient. A better way is to use the dependency feature of SLURM.
+Submit the PostgreSQL job first and make a note of the job id. In the submit script(s) of the analysis jobs, add the line
 {{< highlight batch >}}
 #SBATCH --dependency=after:<job id>
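The `--dependency=after:<job id>` directive belongs in the analysis job's submit script. A minimal sketch, with an assumed job name, a placeholder job id, and a hypothetical analysis command (none of these come from the commit):

```shell
#!/bin/bash
# Hypothetical analysis-job submit script; the name and id are placeholders.
#SBATCH --job-name=analysis
# 1234567 stands for the PostgreSQL server's job id, which could be captured
# at submission time with e.g.: DB_JOB_ID=$(sbatch --parsable postgres.submit)
#SBATCH --dependency=after:1234567

srun ./run_analysis.sh   # placeholder for the actual analysis workload
```

With this directive, SLURM holds the analysis job until the PostgreSQL server job has started, so the database is up before any client tries to connect.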