Maximum memory usage for job 25745709 is: 3.27 MBs
When `cgget` and `mem_report` are used as part of the submit script, the respective output
is printed in the generated SLURM log files, unless otherwise specified.
### Monitoring queued Jobs:
The queue on our HCC uses a fair-share policy, which means a job's priority depends on how long it has been waiting in the queue, your past usage of the cluster, the job's size, and the memory and time requested. Priority is also affected by how many jobs are waiting in the queue and how many resources are currently available on the cluster. In general, the more jobs you have recently submitted and run, the lower the priority of your next jobs will be.
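If it is available on your cluster, SLURM's `sprio` utility shows how these factors combine into a pending job's priority (the exact columns shown depend on the site's priority configuration, so this is illustrative only):

```shell
# Show the weighted priority factors (age, fair-share, job size, ...)
# for your pending jobs; requires access to a SLURM cluster.
sprio -u <user_id>
```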
You can check when your jobs are expected to start on the cluster using the command:
{{<highlight bash>}}
sacct -u <user_id> --format=start
{{</highlight>}}
To check the expected start time of a specific job, use the following command:
{{<highlight bash>}}
sacct -u <user_id> --job=<job_id> --format=start
{{</highlight>}}
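Because `Start` is the only column requested, the value can also be captured in a script. A minimal sketch, assuming a SLURM cluster; `<job_id>` is a placeholder for a real job id, and `--noheader` suppresses the column header so only the value is printed:

```shell
# Capture the expected start time of one job into a variable.
start_time=$(sacct -u "$USER" --job=<job_id> --format=Start --noheader | head -n 1)
echo "Expected start: ${start_time}"
```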
Finally, you can check your fair-share score by running the following command:
{{<highlight bash>}}
sshare --account=<group_name> -a
{{</highlight>}}
After running the above command you will see your fair-share score, which can be interpreted as follows:

- **1.0 (unused):** your account has not run any jobs recently.
- **0.5 (average utilization):** on average, the account is using exactly its granted share.
- **between 0 and 0.5 (over-utilization):** the account has used more than its granted share.
- **0 (no share left):** the account has vastly overused its granted share.

A low score only lowers priority; if there is no contention for resources, your jobs will still start.
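The ranges above can be sketched as a small helper that labels a score. This is a hypothetical convenience function, not part of SLURM; the "under-utilization" label for scores between 0.5 and 1.0 is our assumption:

```shell
# Hypothetical helper: classify a fair-share score per the ranges above.
# awk does the floating-point comparisons that plain sh lacks.
fairshare_status() {
  awk -v s="$1" 'BEGIN {
    if (s >= 1.0)      print "unused";
    else if (s > 0.5)  print "under-utilization";   # assumed label for 0.5 < s < 1.0
    else if (s == 0.5) print "average utilization";
    else if (s > 0)    print "over-utilization";
    else               print "no share left";
  }'
}

fairshare_status 0.25   # prints "over-utilization"
```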
Another way to get your jobs to start faster is to have [Priority Access](https://hcc.unl.edu/priority-access-pricing).