Commit 927e8a63 authored by Caughlin Bohn

Updating things to be Swan and not Crane

parent 4da93468
1 merge request: !362 Updating things to be Swan and not Crane
Showing 16 additions and 51 deletions
......@@ -56,7 +56,7 @@ exhausted.
**Why this ratio?**
All nodes in the Crane batch partition can meet this CPU to memory
All nodes in the Swan batch partition can meet this CPU to memory
ratio.
**Why have this ratio?**
......
......@@ -103,7 +103,7 @@ To start the workflow, submit Job A first:
{{% panel theme="info" header="Submit Job A" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch JobA.submit
[demo01@login.swan demo01]$ sbatch JobA.submit
Submitted batch job 666898
{{< /highlight >}}
{{% /panel %}}
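The contents of the submit scripts are not shown in this excerpt. A minimal sketch of what `JobA.submit` might look like (the job name, time, and memory values below are placeholders, not taken from the original demo):

{{% panel theme="info" header="Example JobA.submit (sketch)" %}}
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=JobA      # placeholder job name
#SBATCH --time=00:30:00      # placeholder run time
#SBATCH --mem=1gb            # placeholder memory request
#SBATCH --error=JobA.%J.err
#SBATCH --output=JobA.%J.out

# commands for Job A go here
echo "Job A finished"
{{< /highlight >}}
{{% /panel %}}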
......@@ -113,9 +113,9 @@ dependency:
{{% panel theme="info" header="Submit Jobs B and C" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobB.submit
[demo01@login.swan demo01]$ sbatch -d afterok:666898 JobB.submit
Submitted batch job 666899
[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobC.submit
[demo01@login.swan demo01]$ sbatch -d afterok:666898 JobC.submit
Submitted batch job 666900
{{< /highlight >}}
{{% /panel %}}
......@@ -124,7 +124,7 @@ Finally, submit Job D as depending on both jobs B and C:
{{% panel theme="info" header="Submit Job D" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch -d afterok:666899:666900 JobD.submit
[demo01@login.swan demo01]$ sbatch -d afterok:666899:666900 JobD.submit
Submitted batch job 666901
{{< /highlight >}}
{{% /panel %}}
......@@ -135,7 +135,7 @@ of the dependency.
{{% panel theme="info" header="Squeue Output" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ squeue -u demo01
[demo01@login.swan demo01]$ squeue -u demo01
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
666899 batch JobB demo01 PD 0:00 1 (Dependency)
666900 batch JobC demo01 PD 0:00 1 (Dependency)
......
......@@ -35,7 +35,7 @@ sacct
Lists all jobs by the current user and displays information such as
JobID, JobName, State, and ExitCode.
{{< figure src="/images/21070053.png" height="150" >}}
{{< figure src="/images/sacct_generic.png" height="150" >}}
Coupling this command with the --format flag will allow you to see more
than the default information about a job. Fields to display should be
......@@ -47,7 +47,7 @@ a job, this command can be used:
sacct --format JobID,JobName,Elapsed,MaxRSS
{{< /highlight >}}
{{< figure src="/images/21070054.png" height="150" >}}
{{< figure src="/images/sacct_format.png" height="150" >}}
Additional arguments and format field information can be found in
[the SLURM documentation](https://slurm.schedmd.com/sacct.html).
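As an additional illustration (not part of the original page), `sacct` can also be limited to a single job; the job ID below is a placeholder:

{{< highlight batch >}}
sacct -j <JOB_ID> --format JobID,JobName,Partition,State,ExitCode,Elapsed,MaxRSS
{{< /highlight >}}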
......@@ -87,17 +87,17 @@ where `<NODE_ID>` is replaced by the name of the node where the monitored
job is running. This information can be found by looking at the
`squeue` output under the `NODELIST` column.
{{< figure src="/images/21070055.png" width="700" >}}
{{< figure src="/images/srun_node_id.png" width="700" >}}
### Using `top` to monitor running jobs
Once the interactive job begins, you can run `top` to view the processes
on the node you are on:
{{< figure src="/images/21070056.png" height="400" >}}
{{< figure src="/images/srun_top.png" height="400" >}}
Output for `top` displays each running process on the node. From the above
image, we can see the various MATLAB processes being run by user
cathrine98. To filter the list of processes, you can type `u` followed
hccdemo. To filter the list of processes, you can type `u` followed
by the username of the user who owns the processes. To exit this screen,
press `q`.
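As a small addition to the original text, `top` can also be started already filtered to a single user's processes:

{{< highlight bash >}}
top -u <USERNAME>
{{< /highlight >}}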
......@@ -156,7 +156,7 @@ at the end of your submit script.
`mem_report` can also be run as part of an interactive job:
{{< highlight bash >}}
[demo13@c0218.crane ~]$ mem_report
[demo13@c0218.swan ~]$ mem_report
Current memory usage for job 25745709 is: 2.57 MBs
Maximum memory usage for job 25745709 is: 3.27 MBs
{{< /highlight >}}
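A minimal sketch of calling `mem_report` at the end of a submit script, as mentioned above (the resource values are placeholders):

{{< highlight batch >}}
#!/bin/bash
#SBATCH --time=01:00:00      # placeholder run time
#SBATCH --mem=4gb            # placeholder memory request

# ... application commands go here ...

# print the current and maximum memory usage for this job
mem_report
{{< /highlight >}}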
......
+++
title = "Available Partitions"
description = "Listing of partitions on Crane and Swan."
description = "Listing of partitions on Swan."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
weight=70
+++
Partitions are used on Crane and Swan to distinguish different
Partitions are used on Swan to distinguish different
resources. You can view the partitions with the command `sinfo`.
### Crane:
[Full list for Crane]({{< relref "crane_available_partitions" >}})
### Swan:
[Full list for Swan]({{< relref "swan_available_partitions" >}})
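For example, `sinfo` can be restricted to a single partition (the partition name and format string below are illustrative, not from the original page):

{{< highlight bash >}}
sinfo -p batch --format="%P %l %D %N"
{{< /highlight >}}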
......@@ -38,7 +34,7 @@ priority so it will run as soon as possible.
Overall limitations of maximum job wall time, CPUs, etc. are set for
all jobs with the default setting (when the "--qos=" section is omitted)
and "short" jobs (described as above) on Crane and Swan.
and "short" jobs (described as above) on Swan.
The limitations are shown in the following form.
| | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User |
......@@ -93,7 +89,7 @@ to owned resources, this method is recommended to maximize job throughput.
### Guest Partition
The `guest` partition can be used by users and groups that do not own
dedicated resources on Crane or Swan. Jobs running in the `guest` partition
dedicated resources on Swan. Jobs running in the `guest` partition
will run on owned resources with the Intel OPA interconnect. The jobs
are preempted when the resources are needed by the resource owners and
are restarted on another node.
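To submit to this partition, a job requests it explicitly; a minimal sketch (only the partition name comes from this page):

{{% panel theme="info" header="SLURM Specification: Guest Partition" %}}
{{< highlight batch >}}
#SBATCH --partition=guest
{{< /highlight >}}
{{% /panel %}}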
......@@ -107,24 +103,3 @@ interconnect. They are suitable for serial or single node parallel jobs.
The nodes in this partition are subject to being drained and moved to our
OpenStack cloud without advance notice when more cloud resources are
needed.
### Use of Infiniband or OPA
Crane nodes use either InfiniBand or Intel Omni-Path (OPA) interconnects in
the batch partition. Most users don't need to worry about which one to
choose; the scheduler will automatically place jobs on either. However, if
you want to use one of the interconnects exclusively, the SLURM constraint
keyword is available. Here are examples:
{{% panel theme="info" header="SLURM Specification: Omni-Path" %}}
{{< highlight bash >}}
#SBATCH --constraint=opa
{{< /highlight >}}
{{% /panel %}}
{{% panel theme="info" header="SLURM Specification: Infiniband" %}}
{{< highlight bash >}}
#SBATCH --constraint=ib
{{< /highlight >}}
{{% /panel %}}
+++
title = "Available Partitions for Crane"
description = "List of available partitions for crane.unl.edu."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
+++
### Crane:
{{< table url="http://crane-head.unl.edu:8192/slurm/partitions/json" >}}
Updated screenshots (old size → new size):

* static/images/OOD_Active_jobs_1.png (282 KiB → 278 KiB)
* static/images/OOD_Active_jobs_2.png (47.9 KiB → 34.1 KiB)
* static/images/OOD_Dashboard_1.png (298 KiB → 296 KiB)
* static/images/OOD_Delete_desktop_1.png (93.3 KiB → 61.7 KiB)
* static/images/OOD_Desktop_1.png (293 KiB → 316 KiB)
* static/images/OOD_Files_menu_1.png (225 KiB → 284 KiB)
* static/images/OOD_Interactive_apps_1.png (367 KiB → 212 KiB)
* static/images/OOD_Interactive_apps_2.png (68.3 KiB → 124 KiB)
* static/images/OOD_Job_composer_1.png (336 KiB → 279 KiB)
* static/images/OOD_Job_composer_2.png (119 KiB → 44.2 KiB)
* static/images/OOD_Jupyter_1.png (50.1 KiB → 79.9 KiB)
* static/images/OOD_Shell_1.png (365 KiB → 278 KiB)
* static/images/OOD_Shell_2.png (126 KiB → 39 KiB)
* static/images/OOD_Templates_1.png (144 KiB → 57.1 KiB)
* static/images/Putty-win10X11.png (33.6 KiB → 19.5 KiB)