Commit ea5ab9fd (Verified), authored 8 months ago by Adam Caprez
Parent: 3d44e7be

Expand guest partition section.
Showing 1 changed file with 43 additions and 3 deletions:

content/submitting_jobs/partitions/_index.md (+43, −3)
@@ -86,10 +86,50 @@ or in the general queue, wherever resources become available first
(taking into account FairShare). Unless there are specific reasons to limit jobs
to owned resources, this method is recommended to maximize job throughput.
### Guest Partition(s)

The `guest` partition can be used by users and groups that do not own
dedicated resources on Swan. Jobs running in the `guest` partition
will run on the owned resources with Intel OPA interconnect. The jobs
are preempted when the resources are needed by the resource owners:
guest jobs will be killed and returned to the queue in a pending state
until they can be started on another node. HCC recommends verifying
that job behavior will support the restart and modifying job scripts
if necessary.

To submit your job to the guest partition, add the line
{{% panel theme="info" header="Submit to guest partition" %}}
{{< highlight bash >}}
#SBATCH --partition=guest
{{< /highlight >}}
{{% /panel %}}
to your submit script.
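
For orientation, a complete guest-partition submit script might look like the
sketch below. The job name, walltime, memory, and application lines are
illustrative placeholders rather than HCC recommendations; substitute the
resources and commands your job actually needs.
{{% panel theme="info" header="Example guest partition submit script (illustrative)" %}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=guest_example   # placeholder job name
#SBATCH --partition=guest          # run opportunistically on owned resources
#SBATCH --ntasks=1
#SBATCH --time=04:00:00            # example walltime
#SBATCH --mem=4gb                  # example memory request
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out

# Guest jobs can be preempted and requeued at any time, so checkpoint
# periodically (or make the work idempotent) so that a restart can resume safely.
module load python                 # example module; load what your job needs
python my_script.py                # placeholder application
{{< /highlight >}}
{{% /panel %}}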

Owned GPU resources may also be accessed in an opportunistic manner by
submitting to the `guest_gpu` partition. Similar to `guest`, jobs are
preempted when the GPU resources are needed by the owners. To submit
your job to the `guest_gpu` partition, add the lines
{{% panel theme="info" header="Submit to guest_gpu partition" %}}
{{< highlight bash >}}
#SBATCH --partition=guest_gpu
#SBATCH --gres=gpu
{{< /highlight >}}
{{% /panel %}}
to your SLURM script.
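
As with the guest partition, these directives drop into an otherwise ordinary
submit script. A minimal sketch with placeholder resource values and a
placeholder application follows; adjust it to your own job.
{{% panel theme="info" header="Example guest_gpu submit script (illustrative)" %}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --job-name=guest_gpu_example   # placeholder job name
#SBATCH --partition=guest_gpu          # opportunistic access to owned GPU nodes
#SBATCH --gres=gpu                     # request a GPU
#SBATCH --ntasks=1
#SBATCH --time=02:00:00                # example walltime
#SBATCH --mem=16gb                     # example memory request

module load cuda                       # example module; load what your job needs
./my_gpu_program                       # placeholder application
{{< /highlight >}}
{{% /panel %}}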

#### Preventing job restart

By default, jobs on the `guest` partition will be restarted elsewhere when they
are preempted. To prevent preempted jobs from being restarted, add the line
{{% panel theme="info" header="Prevent job restart on guest partition" %}}
{{< highlight bash >}}
#SBATCH --no-requeue
{{< /highlight >}}
{{% /panel %}}
to your SLURM submit file.
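
Taken together, a guest job that should simply end rather than return to the
queue when preempted would carry both directives; the pairing below is an
illustration, not an HCC requirement.
{{% panel theme="info" header="Guest job without automatic requeue (illustrative)" %}}
{{< highlight bash >}}
#SBATCH --partition=guest
#SBATCH --no-requeue   # if preempted, the job ends instead of returning to the queue
{{< /highlight >}}
{{% /panel %}}
With `--no-requeue`, any work lost to preemption has to be resubmitted manually,
so this option is mainly useful for jobs that cannot safely be run a second time.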