{
  "table_generated": "1970-01-01T00:00:00UTC",
  "table_header": [
    "Partition",
    "Owner",
    "Node Total (NODE x CPU/MEM/FEATURE)",
    "Description",
    "SLURM Specification",
    "Max Job Run Time",
    "Max CPUs Per User",
    "Max Jobs Per User"
  ],
  "table_data": [
    [
      "batch",
      "shared",
      "91 total (4x40/1510.0GB/opa; 85x56/250.0GB/cx6; 2x112/503.085GB/cx6)",
      "(default, no specification)",
      "#SBATCH --partition=batch",
      "7-00:00:00",
      "opa:160 cx6:4760",
      "1000"
    ]
  ]
}
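For reference, a submit file targeting this partition would use directives consistent with the limits above; a minimal sketch:

```bash
#SBATCH --partition=batch    # the default, shared partition
#SBATCH --time=7-00:00:00    # must not exceed the partition's 7-day max run time
```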
| Image Name | Username to Connect | Access instructions | Description |
| ------------------------------- | --------------------- | ------------------------------------ | --------------------------------------------------------------------------------- |
| Alma Linux 8.10 | `almalinux` | `ssh -l almalinux <ip address>` | AlmaLinux is a free and open source Linux distribution that is binary-compatible with Red Hat Enterprise Linux (RHEL).|
| Alma Linux 8.10 Xfce | `almalinux` | [X2Go instructions](/anvil/connecting_to_linux_instances_using_x2go/) | AlmaLinux 8.10 with the Xfce Desktop Environment pre-installed.|
| Cloudera 5.12 GNOME | `cloudera` | [X2Go instructions](/anvil/connecting_to_linux_instances_using_x2go/) | Cloudera 5.12 QuickStart VM. *Note*: Follow the X2Go instructions, but choose GNOME for the Session type instead of Xfce.|
| Fedora 39 Cloud | `fedora` | `ssh -l fedora <ip address>` | Fedora is a Linux distribution developed by the community-supported Fedora Project and sponsored by the Red Hat company.|
| Fedora 40 Cloud | `fedora` | `ssh -l fedora <ip address>` | Fedora is a Linux distribution developed by the community-supported Fedora Project and sponsored by the Red Hat company.|
| Fedora 40 Xfce | `fedora` | [X2Go instructions](/anvil/connecting_to_linux_instances_using_x2go/) | Fedora 40 with the Xfce Desktop Environment pre-installed.|
| Windows 10 | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSC edition with remote desktop access.|
| Windows 10 Matlab | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSB with Matlab r2013b, r2014b, r2015b, r2016b, r2017a pre-installed. |
| Windows 10 Matlab r2018b/r2019b | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSC with Matlab r2018b and r2019b pre-installed. |
| Windows 10 SAS | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSC with SAS 9.4 pre-installed. |
| Windows 10 Mathematica | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSB with Mathematica 10.4 and 11.0 pre-installed.|
| Windows 10 Mathematica 12.0 | `.\cloud-user` | [Windows instructions](/anvil/connecting_to_windows_instances) | Windows 10 LTSC with Mathematica 12.0 pre-installed.|
| Ubuntu Cloud 20.04 LTS | `ubuntu` | `ssh -l ubuntu <ip address>` | Ubuntu Cloud Image from the 20.04 Long Term Support release.|
| Ubuntu Cloud 22.04 LTS | `ubuntu` | `ssh -l ubuntu <ip address>` | Ubuntu Cloud Image from the 22.04 Long Term Support release.|
| Ubuntu 22.04 LTS Xfce | `ubuntu` | [X2Go instructions](/anvil/connecting_to_linux_instances_using_x2go/) | Ubuntu 22.04 with the Xfce Desktop Environment pre-installed.|
| MariaDB Server 10.5 | `centos`/`root` | `ssh -l centos <ip address>`/[MySQL instructions](/anvil/using_mysql) | CentOS 7-based image with a pre-configured MariaDB server. |
The following guidelines apply to any accounts and groups created for courses and coursework:
1) User accounts added to a class group will be removed from the class group one (1) calendar week after the end of the semester.
User accounts used solely in class groups will be locked at that time as well. All files/data in user account directories
($HOME, $WORK, and $COMMON) contained within the class group will be removed at that time. It is the students’ responsibility
to copy any files they would like to keep to a more permanent location before then. Each student who applies to join
the class group must read and agree to these guidelines in advance. The class group owner’s account will be unaffected.
Additionally, HCC should be notified in advance of any other accounts, such as teaching assistants, that should not be removed.
If a student account is also a member of another, non-coursework HCC group (e.g., a research group of a faculty member),
the user account and the associated files/data under the research group will not be affected. Students will be notified
throughout the semester (up to three (3) emails) that they will need to back up their data elsewhere, and instructors
are encouraged to provide in-class reminders.
2) All Anvil instances created for the class, excluding the group owner’s instances, will be removed one (1) calendar week after the semester ends.
All files/data/software associated with the instances will be removed at that time.
3) The class group will be disabled one (1) calendar week after the end of the semester. To re-use a class group for a
future semester, the group owner will need to request that the class group be “renewed” using the Class Group Renewal
form at least one (1) calendar week before the group will be used. If the group owner plans to teach a new class
using HCC resources, they should apply for a new group specific to the class, using the course catalog code as
the preferred group name, such as “csce123”. As part of the renewal form, HCC encourages optionally uploading a
class roster to expedite the account approval process and avoid the need to manually approve each account.
!!! failure "$COMMON Retirement - **{{ hcc.common.retirement_date }}**"
    The Common filesystem will be retired from service on **{{ hcc.common.retirement_date }}**!
    Any jobs on Swan requesting the `{{ hcc.common.variable }}` filesystem or `--licenses=common` will be rejected at submission.
    **Any data under `{{ hcc.common.path }}` or `{{ hcc.common.variable }}` will be lost if it is not moved to another location before the retirement date of {{ hcc.common.retirement_date }}.**
    It is strongly recommended to begin moving data and updating workflows immediately to avoid losing data.
    More information on the retirement is available on the [Common Retirement FAQ Page](/FAQ/common_retirement). If you have any questions about the retirement, please email <a href="mailto:hcc-support@unl.edu" class="external-link">{{ hcc.support_email }}</a>.
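As a sketch of the migration itself (the directory name `my_project` is a placeholder; adjust paths to your own layout):

```bash
# copy a project directory from $COMMON to $WORK before the retirement date
cp -a $COMMON/my_project $WORK/

# verify the copy before deleting anything from $COMMON
diff -r $COMMON/my_project $WORK/my_project
```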
### Workshop Recommendations
To make the most of this workshop, we strongly encourage active participation throughout the entire session, including working through the hands-on exercises and asking questions.
If you are stuck or unsure about something at any point during the workshop, please let us know and we will be happy to help.
Participation status can be signaled using the sticky notes, raising your hand, or, if you are remote, using Zoom Reactions or the Zoom chat box.
### Sticky Notes and Reactions
Throughout the workshop, we will use sticky notes and Zoom reactions to indicate status. These help the instructors and helpers assist with issues and gather formative feedback during the workshop.
!!! tip "Please make sure the sticky note is visible on both the front and back of your laptop"
- **Yellow Sticky Note / Green Check Reaction**: All good to go
- **Red Sticky Note / Red "X" Reaction**: I need help or I am not ready to continue
{% include "./zoom_reactions_05-2025.md" %}
### Workshop Recommendations
To make the most of this workshop, we strongly encourage active participation throughout the entire session, including working through the hands-on exercises and asking questions.
If you are stuck or unsure about something at any point during the workshop, please let us know and we will be happy to help.
Participation status can be signaled using the sticky notes or by raising your hand.
### Sticky Notes and Reactions
Throughout the workshop, we will use sticky notes to indicate status. These help the instructors and helpers assist with issues and gather formative feedback during the workshop.
!!! tip "Please make sure the sticky note is visible on both the front and back of your laptop"
- **Yellow Sticky Note**: All good to go
- **Red Sticky Note**: I need help or I am not ready to continue
!!! warning "Please make sure to complete the setup steps below each week."
    1. Ensure your HCC Account is active.
    2. Sign into Swan's Open OnDemand Portal: {{ hcc.swan.ood.url }}
    3. When you have signed into the Swan Open OnDemand Portal, please put up your Yellow Sticky Note or Green Checkmark.

**If you have any issues logging in or accessing Swan, please visit our [event troubleshooting guide here](/Events/general_materials/#troubleshooting-logging-into-swan)**
### Workshop Recommendations
To make the most of this workshop, we strongly encourage active participation throughout the entire session, including working through the hands-on exercises and asking questions.
If you are stuck or unsure about something at any point during the workshop, please let us know and we will be happy to help.
Participation status can be signaled using Zoom Reactions or the Zoom chat box.
??? note "How to use Zoom Reactions"
    1. In the menu bar at the bottom of the Zoom window, click on `React`.
        ![](/images/events/guides/zoom_menu_bar_05-2025.png)
    2. In the pop up panel, select the appropriate reaction.
        ![](/images/events/guides/zoom_react_panel_05-2025.png)

    Full details: https://support.zoom.com/hc/en/article?id=zm_kb&sysparm_article=KB0063323
:root > * {
--md-primary-fg-color: #D00000;
--md-primary-fg-color--light: #FEFDFA;
--md-primary-fg-color--dark: #14151a;
}
/* Add padding to bottom of content to prevent overlap */
.md-main {
padding-bottom: 80px; /* adjust based on footer height */
}
title: Submitting Jobs
---
title: "Application Specific Guides"
weight: 100
---
!!! note
    All of the examples below are available in a single repository at https://github.com/unlhcc/job-examples.
## In-depth guides for running applications on HCC resources
This repository contains scripts, data, and submit files for running many popular applications on the Holland Computing Center clusters.
#### Applications included:
- **BLAST** – includes advanced examples
- **MATLAB**
- **Python**
- **R** – includes advanced examples
- **Mathematica**
- **Gaussian**
- **Jupyter Notebook** – including Python and R scripts
- **GAMESS**
- **And More...**
<br>
<br>
- [Application Examples](https://github.com/unlhcc/job-examples)
- [Submitting ANSYS Jobs](./submitting_ansys_jobs/)
- [Submitting MATLAB Jobs](./submitting_matlab_jobs/)
- [Submitting R Jobs](./submitting_r_jobs/)
For assistance or support regarding these examples or any HCC resources, please contact us at **[hcc-support@unl.edu](mailto:hcc-support@unl.edu)**.
---
title: Submitting ANSYS Jobs
summary: "How to submit ANSYS jobs on HCC resources."
---
!!! note
    The ANSYS software licenses are managed by the College of Engineering. To arrange access to ANSYS on HCC resources, contact Paul Pokorny ([paul.pokorny@unl.edu](mailto:paul.pokorny@unl.edu)).
The number of ANSYS tasks is restricted by its license.

For research computations, users need to add the line below to their job submission file:
```bash
#SBATCH --licenses=ansys_research
```
For teaching purposes, users need to add:
```bash
#SBATCH --licenses=ansys_teaching
```
### Running ANSYS scripts in batch
!!! note "ANSYS.submit"
    ```bash
    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    ...
    module load ansys/19.2
    YOUR_ANSYS_COMMAND
    ```
Details of SLURM job submission can be found at [SUBMITTING JOBS](/submitting_jobs/).
### Running ANSYS interactively
1. To use the graphical user interface, users need to first set up X11 forwarding. [HOW TO SETUP X11 FORWARDING](/connecting/how_to_setup_x11_forwarding/)
1. Start an interactive job using srun. NOTE: users need to add `--licenses=ansys_research` or `--licenses=ansys_teaching` to the srun command. [SUBMITTING AN INTERACTIVE JOB](../../creating_an_interactive_job/)
1. After the interactive job starts, execute `module load ansys/19.2`, then run the ANSYS command, e.g. `fluent`, from the command line. The GUI will show up if steps 1-2 are configured correctly. A combined sketch follows below.
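Putting the steps together, an interactive ANSYS session might be requested as in the following sketch (the core, memory, and time values are illustrative):

```bash
# request an interactive session holding an ANSYS research license token
srun --licenses=ansys_research --ntasks=4 --mem-per-cpu=2048 --time=2:00:00 --pty $SHELL

# then, on the worker node:
module load ansys/19.2
fluent
```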
---
title: Submitting MATLAB Jobs
summary: "How to submit MATLAB jobs on HCC resources."
---
Submitting Matlab jobs is very similar to
[submitting MPI jobs](../../submitting_an_mpi_job/) or
[serial jobs](/submitting_jobs/)
(depending on whether you are using parallel Matlab).
### Submit File
The submit file will need to be modified to allow Matlab to work.
Specifically, these two lines should be added before calling matlab:
!!! note "serial_matlab.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=03:15:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=[job_name]
    ...
    module load matlab/r2014b
    matlab -nodisplay -r "[matlab script name], quit"
    ```
### Parallel Matlab .m file

The submit file:
!!! note "parallel_matlab.submit"
    ```bash
    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=5
    #SBATCH --time=03:15:00
    ...
    module load matlab/r2014b
    matlab -nodisplay -r "[matlab script name], quit"
    ```
#### Matlab File Additions
In addition to the changes in the submit file, if you are running
parallel Matlab, you will also need to add the following
lines to the .m file:
```matlab
...
i=str2num(getenv('SLURM_TASKS_PER_NODE'));
parpool(i);
...
```
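Assuming the submit file and `.m` script are named as above, the job would then be launched and monitored with the usual SLURM commands:

```bash
# submit the parallel Matlab job and check its status in the queue
sbatch parallel_matlab.submit
squeue -u $USER
```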
---
title: Submitting R Jobs
summary: "How to submit R jobs on HCC resources."
---
Submitting an R job is very similar to submitting a serial job shown
on [Submitting Jobs](/submitting_jobs/).
- [Running R scripts in batch](#running-r-scripts-in-batch)
- [Running R scripts using `R CMD BATCH`](#running-r-scripts-using-r-cmd-batch)
- [Running R scripts using `Rscript`](#running-r-scripts-using-rscript)
- [Multicore (parallel) R submission](#multicore-parallel-r-submission)
- [Multinode R submission with Rmpi](#multinode-r-submission-with-rmpi)
- [Adding packages](#adding-packages)
- [Installing packages interactively](#installing-packages-interactively)
- [Installing packages using R CMD INSTALL](#installing-packages-using-r-cmd-install)
### Running R scripts in batch
When utilizing `R CMD BATCH` all output will be directed to an `.Rout`
file named after your script unless otherwise specified. For
example:
!!! note "serial_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=TestJob

    module load R/3.5
    R CMD BATCH Rcode.R
    ```
In the above example, output for the job will be found in the file
`Rcode.Rout`. Notice that we did not specify output and error files in
the submit file; all output and error messages are directed to
the `.Rout` file. To direct output to a specific location, follow your
script name with the name of the file you want the output
directed to, as follows:
!!! note "serial_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=TestJob

    module load R/3.5
    R CMD BATCH Rcode.R Rcodeoutput.txt
    ```
In this example, output from running the script `Rcode.R` will be placed
in the file `Rcodeoutput.txt`.

To pass arguments to the script, they need to be specified after `R CMD
BATCH` but before the script to be executed, and preferably preceded
with `--args` as follows:
!!! note "serial_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=TestJob

    module load R/3.5
    R CMD BATCH "--args argument1 argument2 argument3" Rcode.R Rcodeoutput.txt
    ```
#### Running R scripts using `Rscript`
Alternatively, R scripts can be executed with `Rscript`, which handles output
in a manner similar to other programs. This gives the user greater
control over where to direct the output. For example, to run our script
using `Rscript` the submit script could look like the following:
!!! note "serial_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=TestJob
    #SBATCH --error=TestJob.%J.stderr
    #SBATCH --output=TestJob.%J.stdout

    module load R/3.5
    Rscript Rcode.R
    ```
In the above example, STDOUT will be directed to the output file
`TestJob.%J.stdout` and STDERR directed to `TestJob.%J.stderr`. You
will notice that the example is very similar to the
[serial example](/submitting_jobs/).
The important line is the `module load` command.
That tells the cluster to load the R framework into the environment so jobs may use it.
To pass arguments to the script when using `Rscript`, the arguments
will follow the script name as in the example below:
!!! note "serial_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --time=00:30:00
    #SBATCH --mem-per-cpu=1024
    #SBATCH --job-name=TestJob
    #SBATCH --error=TestJob.%J.stderr
    #SBATCH --output=TestJob.%J.stdout

    module load R/3.5
    Rscript Rcode.R argument1 argument2 argument3
    ```
---
### Multicore (parallel) R submission
Submitting a multicore R job to SLURM is very similar to
[Submitting an OpenMP Job](../../submitting_an_openmp_job/),
since both are running multicore jobs on a single node. Below is an example:
!!! note "parallel_R.submit"
    ```bash
    #!/bin/bash
    #SBATCH --ntasks-per-node=16
    #SBATCH --nodes=1
    #SBATCH --time=00:30:00
    ...
    module load R/3.5
    R CMD BATCH Rcode.R
    ```
The above example will submit a single job which can use up to 16 cores.

Be sure the requested core count matches what your script actually uses, or
performance will suffer. For example, when using the
[parallel](http://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf)
package function mclapply:
!!! note "parallel.R"
    ```r
    library("parallel")
    ...
    mclapply(rep(4, 5), rnorm, mc.cores=16)
    ...
    ```
---
### Multinode R submission with Rmpi
Submitting a multinode MPI R job to SLURM is very similar to
[Submitting an MPI Job](../../submitting_an_mpi_job/),
since both are running multicore jobs on multiple nodes.
Below is an example of running Rmpi on Swan on 2 nodes and 32 cores:
!!! note "Rmpi.submit"
    ```bash
    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=00:30:00
    ...
    module load compiler/gcc/4.9 openmpi/1.10 R/3.5
    export OMPI_MCA_mtl=^psm
    mpirun -n 1 R CMD BATCH Rmpi.R
    ```
When you run an Rmpi job on Swan, please use the line `export
OMPI_MCA_mtl=^psm` in your submit script. Regardless of how many cores your job uses, the Rmpi package should
always be run with `mpirun -n 1` because it spawns additional
processes dynamically.
Please find below an example of an Rmpi R script provided by
[The University of Chicago Research Computing Center](https://rcc.uchicago.edu/docs/software/environments/R/index.html#rmpi):
!!! note "Rmpi.R"
    ```r
    library(Rmpi)

    # initialize an Rmpi environment
    ...
    mpi.remote.exec(paste("I am", id, "of", ns, "running on", host))

    # close down the Rmpi environment
    mpi.close.Rslaves(dellog = FALSE)
    mpi.exit()
    ```
---
### Adding packages
There are two options to install packages. The first is to run R
interactively on the login node and install packages from within R. The second is to
use the `R CMD INSTALL` command.
!!! info
    All R packages must be installed from the login node. R libraries are
    stored in users' home directories, which are not writable from the worker
    nodes.
#### Installing packages interactively
1. Load the R module with the command `module load R`
- Note that each version of R uses its own user libraries. To
install packages under a specific version of R, specify which
version by using the module load command followed by the version
number. For example, to load R version 3.5, you would use the
command `module load R/3.5`
2. Run R interactively using the command `R`
3. From within R, use the `install.packages()` command to install
desired packages. For example, to install the package `ggplot2`
use the command `install.packages("ggplot2")`, as in the transcript below.
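A minimal interactive install session on the login node might look like the following sketch (`ggplot2` is just the example package from above):

```bash
$ module load R/3.5
$ R
# at the R prompt:
> install.packages("ggplot2")   # choose a CRAN mirror when prompted
> quit()
```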
Some R packages require external compilers or additional libraries. If
you see an error when installing your package, you might need to load
additional modules to make these compilers or libraries available. For
more information about this, refer to the package documentation.
#### Installing packages using R CMD INSTALL
To install packages using `R CMD INSTALL` the zipped package must
already be downloaded to the cluster. You can download package source
using `wget`. Then the `R CMD INSTALL` command can be used when
pointed to the full path of the source tar file. For example, to install
ggplot2 the following commands are used:
```bash
# Download the package source:
wget https://cran.r-project.org/src/contrib/ggplot2_2.2.1.tar.gz
# Install the package:
R CMD INSTALL ./ggplot2_2.2.1.tar.gz
```
Additional information on using the `R CMD INSTALL` command can be
found in the R documentation which can be seen by typing `?INSTALL`
within the R console.
---
title: Creating an Interactive Job
summary: "How to run an interactive job on HCC resources."
weight: 20
---
!!! note
    The `/home` directories are not intended for active job I/O.
    Output from your processing should be directed to either `/work` or `/common`.
Submitting an interactive job is done with the command `srun`.
```bash
$ srun --pty $SHELL
```
This command will allocate the **default resources of 1GB of RAM, 1 hour of running time, and a single CPU core**. Often these resources are not enough. If the job is terminated, there is a high chance it exceeded the requested resources, so please make sure you set the memory and time requirements appropriately.
Submitting an interactive job to allocate 4 CPU cores per node for 3 hours with 1GB of RAM per core on the general `batch` partition:
```bash
$ srun --nodes=1 --ntasks-per-node=4 --mem-per-cpu=1024 --time=3:00:00 --pty $SHELL
```
Submitting an interactive job is useful if you require extra resources
to run some processing by hand. It is also very useful for debugging
your processing.
An interactive job is scheduled onto a worker node just like a regular
job. You can provide options to the interactive job just as you would a
regular SLURM job. The default job runtime is 1 hour, and can be
increased by including the `--time` argument.
### Interactive job for Apptainer
Running Apptainer via an interactive job requires at least 4GB of RAM:
```bash
$ srun --mem=4gb --nodes=1 --ntasks-per-node=4 --pty $SHELL
```
If you get any memory-related errors, continue to increase the requested memory amount.
### Priority for short jobs
To run short jobs for testing and development work, a job can specify a
different quality of service (QoS). The *short* QoS increases a job's
priority so it will run as soon as possible.
| SLURM Specification |
|---------------------|
| `--qos=short` |
!!! warning "Limits per user for 'short' QoS"
    - 6 hour job run time
    - 2 jobs of 16 CPUs or fewer
    - No more than 256 CPUs in use for *short* jobs from all users
!!! note "Using the short QoS"
    ```bash
    srun --qos=short --nodes=1 --ntasks-per-node=1 --mem-per-cpu=1024 --pty $SHELL
    ```
---
title: HCC Acknowledgment Credit
summary: "Details on the Acknowledgment Credit system."
weight: 90
---
!!! note
    To submit an acknowledgement and receive the credit, please use the form here: https://hcc.unl.edu/acknowledgement-submission.

!!! note
    The following text provides a detailed description of how the Acknowledgment Credit works.
    As a quickstart, add the line
    `#SBATCH --qos=ac_<group>`
    to your submit script, replacing `<group>` with your group name. Run the `hcc-ac` program to check the remaining balance.
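As a minimal sketch of that quickstart (the group name and resource values are illustrative):

```bash
#!/bin/bash
#SBATCH --qos=ac_demo      # replace 'demo' with your own group name
#SBATCH --ntasks=1
#SBATCH --time=1:00:00

# ... your job commands here ...
```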
### What is HCC Acknowledgment Credit?
**Why this ratio?**

All nodes in the Swan batch partition can meet this CPU to memory
ratio.
**Why have this ratio?**

Column description of the `hcc-ac` utility:

| per-CPU AvgMEM | The per-CPU average memory size available for the CPU time remaining in the qos. If CPU time is consumed faster than memory time, this value will increase. If memory time is consumed faster than CPU time, this value will decrease. |
### Example of how to use the awarded time for the 'demo' group
The awarded time is reduced down to 10 minutes to show how consumption
changes with differing job resource requirements.

All times are in days-hours:minutes:seconds, as used in Slurm's `--time=`
argument.
!!! note "Default output"
    ```bash
    [demo01@login.hcc_cluster ~]$ hcc-ac
    +-----------+--------------+--------------+----------------+
    | Slurm qos |  CPUx1 time  | MEMx4GB time | per-CPU AvgMEM |
    +-----------+--------------+--------------+----------------+
    |  ac_demo  |  0-00:10:00  |  0-00:10:00  |     4.0GB      |
    +-----------+--------------+--------------+----------------+
    ```
Use the Slurm quality of service argument `--qos` to gain access to the
awarded time with increased priority:
!!! note "--qos=ac_demo"
    ```bash
    [demo01@login.hcc_cluster ~]$ srun --qos=ac_demo --ntasks=1 --mem=8g --time=1:00 /bin/sleep 60
    ```
**job runs for 60 seconds**
!!! note "After 60 second job"
    ```bash
    [demo01@login.hcc_cluster ~]$ hcc-ac
    +-----------+--------------+--------------+----------------+
    | Slurm qos |  CPUx1 time  | MEMx4GB time | per-CPU AvgMEM |
    +-----------+--------------+--------------+----------------+
    |  ac_demo  |  0-00:09:00  |  0-00:08:00  |    3.556GB     |
    +-----------+--------------+--------------+----------------+
    ```
1 CPU minute and 2 4GB-memory minutes were consumed by the prior srun
job.

The per-CPU average memory is the remaining memory time weighted
against 4GB:

i.e., 9 * 3.556 ~= 8 * 4
!!! note "--ntasks=4"
    ```bash
    [demo01@login.hcc_cluster ~]$ srun --qos=ac_demo --ntasks=4 --mem-per-cpu=2G --time=1:00 /bin/sleep 60
    ```
**job runs for 60 seconds**
!!! note "After 60 second job"
    ```bash
    [demo01@login.hcc_cluster ~]$ hcc-ac
    +-----------+--------------+--------------+----------------+
    | Slurm qos |  CPUx1 time  | MEMx4GB time | per-CPU AvgMEM |
    +-----------+--------------+--------------+----------------+
    |  ac_demo  |  0-00:05:00  |  0-00:06:00  |     4.8GB      |
    +-----------+--------------+--------------+----------------+
    ```
4 CPU minutes and 2 4GB minutes were consumed by the prior srun job.

The remaining per-CPU average memory follows the same weighting:

6 / 5 * 4 == 4.8
!!! note "Insufficient Time"
    ```bash
    [demo01@login.hcc_cluster ~]$ srun --qos=ac_demo --ntasks=5 --mem-per-cpu=5000M --time=1:00 /bin/sleep 60
    srun: error: Unable to allocate resources: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
    ```
An example of a job requesting more resources than what remains
available in the qos.
!!! note "Corrected Memory Requirement"
    ```bash
    [demo01@login.hcc_cluster ~]$ srun --qos=ac_demo --ntasks=5 --mem-per-cpu=4800M --time=1:00 /bin/sleep 60
    ```
**job runs for 60 seconds**
!!! note "Exhausted QoS"
    ```bash
    [demo01@login.hcc_cluster ~]$ hcc-ac
    +-----------+--------------+--------------+----------------+
    | Slurm qos |  CPUx1 time  | MEMx4GB time | per-CPU AvgMEM |
    +-----------+--------------+--------------+----------------+
    |  ac_demo  |  exhausted   |  exhausted   |     0.0GB      |
    +-----------+--------------+--------------+----------------+
    ```
All remaining time was used. Any further submissions to the qos will be
**denied at submission time**.

All of the above **srun** arguments work the same with **sbatch** within
the submit file header.
!!! note "Submit File Example"
    ```bash
    [demo01@login.hcc_cluster ~]$ cat submit_test.slurm
    #!/bin/bash
    #SBATCH --ntasks=4
    #SBATCH --qos=ac_demo
    #SBATCH --ntasks=5
    ...
    /bin/sleep 60
    [demo01@login.hcc_cluster ~]$ sbatch ./submit_test.slurm
    ```
CPU and memory time in the qos are only consumed when jobs run against
the qos. Therefore it is possible for more jobs to be submitted than the
remaining time can cover. Use srun's **--test-only**
argument to size the job against what time remains in the qos.

For example, with the same 10 minute limit:
!!! note "--test-only job to see if it fits within qos time limits"
    ```bash
    [demo01@login.hcc_cluster ~]$ hcc-ac
    +-----------+--------------+--------------+----------------+
    ...
    allocation failure: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
    [demo01@login.hcc_cluster ~]$ srun --test-only --qos=ac_demo --ntasks=1 --time=3:00 --mem-per-cpu=12G
    srun: Job <number> to start at YYYY-MM-DDTHH:MM:SS using 1 processors on compute_node
    ```