+++
title = "Reusing SSH connections in Linux/Mac"
description = "Reusing connections makes it easier to use multiple terminals"
weight = "37"
+++
To make it more convenient for users who work in multiple terminal sessions
simultaneously, SSH can reuse an existing connection when connecting from
Linux or Mac. After the initial login, subsequent terminals reuse
that connection, eliminating the need to enter your username and password
for every new session. To enable this feature, add the
following lines to your `~/.ssh/config` file:
{{% panel header="`~/.ssh/config`"%}}
{{< highlight bash >}}
Host *
ControlMaster auto
ControlPath /tmp/%r@%h:%p
ControlPersist 2h
{{< /highlight >}}
{{% /panel %}}
{{% notice info %}}
You may not have an existing `~/.ssh/config` file. If not, simply create the
file and set the permissions appropriately first:
`touch ~/.ssh/config && chmod 600 ~/.ssh/config`
{{% /notice %}}
This will enable connection reuse when connecting to any host via SSH or
SCP.
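Once a master connection exists, OpenSSH's control commands let you inspect or close it. A minimal sketch, using `crane.unl.edu` as a stand-in for whichever host you connect to:

{{< highlight bash >}}
# Check whether a shared master connection is currently active
$ ssh -O check crane.unl.edu
Master running (pid=12345)

# Ask the master connection to close (e.g. before the 2h ControlPersist expires)
$ ssh -O exit crane.unl.edu
Exit request sent.
{{< /highlight >}}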
+++
title = "Submitting Jobs"
description = "How to submit jobs to HCC resources"
weight = "10"
+++
Crane and Rhino are managed by
the [SLURM](https://slurm.schedmd.com) resource manager.
To run a job on Crane, you
must create a SLURM script that describes your processing. After
you submit the job, SLURM will schedule it on an available
worker node.
Before writing a submit file, you may need to
[compile your application]({{< relref "/guides/running_applications/compiling_source_code" >}}).
- [Ensure proper working directory for job output](#ensure-proper-working-directory-for-job-output)
- [Creating a SLURM Submit File](#creating-a-slurm-submit-file)
- [Submitting the job](#submitting-the-job)
- [Checking Job Status](#checking-job-status)
- [Checking Job Start](#checking-job-start)
- [Next Steps](#next-steps)
### Ensure proper working directory for job output
{{% notice info %}}
Because the /home directories are not writable from the worker nodes, all SLURM job output should be directed to your /work path.
{{% /notice %}}
{{% panel theme="info" header="Manual specification of /work path" %}}
{{< highlight bash >}}
$ cd /work/[groupname]/[username]
{{< /highlight >}}
{{% /panel %}}
The environment variable `$WORK` can also be used.
{{% panel theme="info" header="Using environment variable for /work path" %}}
{{< highlight bash >}}
$ cd $WORK
$ pwd
/work/[groupname]/[username]
{{< /highlight >}}
{{% /panel %}}
Review how /work differs from /home [here]({{< relref "/guides/handling_data/_index.md" >}}).
### Creating a SLURM Submit File
{{% notice info %}}
The below example is for a serial job. For submitting MPI jobs, please
look at the [MPI Submission Guide.]({{< relref "submitting_an_mpi_job" >}})
{{% /notice %}}
A SLURM submit file is broken into two sections: the job description and
the processing commands. SLURM job description lines are prefixed with `#SBATCH` in
the submit file.
**SLURM Submit File**
{{< highlight batch >}}
#!/bin/sh
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=1024 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=hello-world
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load example/test
hostname
sleep 60
{{< /highlight >}}
- **time**
Maximum walltime the job can run. After this time has expired, the
job will be stopped.
- **mem-per-cpu**
Memory that is allocated per core for the job. If you exceed this
memory limit, your job will be stopped.
- **mem**
Specify the real memory required per node in megabytes. If you
exceed this limit, your job will be stopped. Note that you
should ask for less memory than each node actually has. For Crane, the
max is 500GB.
- **job-name**
The name of the job. Will be reported in the job listing.
- **partition**
The partition the job should run in. Partitions determine the job's
priority and the nodes on which the job can run. See the
[Partitions]({{< relref "/guides/submitting_jobs/partitions/_index.md" >}}) page for a list of possible partitions.
- **error**
Location where the job's stderr will be written. `[groupname]`
and `[username]` should be replaced with your group name and username.
Your username can be retrieved with the command `id -un` and your
group with `id -ng`.
- **output**
Location where the job's stdout will be written.
More advanced submit commands can be found on the [SLURM Docs](https://slurm.schedmd.com/sbatch.html).
You can also find an example of an MPI submission on [Submitting an MPI Job]({{< relref "submitting_an_mpi_job" >}}).
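For illustration, the directives described above can be combined into a fuller, hypothetical submit file; the partition name, resource values, and job name here are placeholders, not recommendations:

{{< highlight batch >}}
#!/bin/sh
#SBATCH --time=01:00:00              # Run time in hh:mm:ss
#SBATCH --mem=4096                   # Total real memory per node (in megabytes)
#SBATCH --partition=batch            # Placeholder partition name
#SBATCH --job-name=my-analysis
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load example/test
hostname
{{< /highlight >}}

As with the first example, replace `[groupname]` and `[username]` with the values reported by `id -ng` and `id -un`.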
### Submitting the job
The SLURM job is submitted with the `sbatch` command. SLURM reads
the submit file and schedules the job according to the description in
the submit file.
To submit the job described above:
{{% panel theme="info" header="SLURM Submission" %}}
{{< highlight batch >}}
$ sbatch example.slurm
Submitted batch job 24603
{{< /highlight >}}
{{% /panel %}}
The job was successfully submitted.
### Checking Job Status
Job status is found with the command `squeue`. It will provide
information such as:
- The state of the job:
  - **R** - Running
  - **PD** - Pending - Job is awaiting resource allocation.
  - Additional codes are available on the [squeue](http://slurm.schedmd.com/squeue.html) page.
- Job Name
- Run Time
- Nodes running the job
The easiest way to check the status of your jobs is to filter by your username,
using the `-u` option to `squeue`.
{{< highlight batch >}}
$ squeue -u <username>
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
24605 batch hello-wo <username> R 0:56 1 b01
{{< /highlight >}}
Additionally, if you want to see the status of a specific partition, for
example if you are part of a [partition]({{< relref "/guides/submitting_jobs/partitions/_index.md" >}}),
you can use the `-p` option to `squeue`:
{{< highlight batch >}}
$ squeue -p esquared
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
73435 esquared MyRandom tingting R 10:35:20 1 ri19n10
73436 esquared MyRandom tingting R 10:35:20 1 ri19n12
73735 esquared SW2_driv hroehr R 10:14:11 1 ri20n07
73736 esquared SW2_driv hroehr R 10:14:11 1 ri20n07
{{< /highlight >}}
#### Checking Job Start
You may view the start time of your job with the
command `squeue --start`. The output of the command will show the
expected start time of the jobs.
{{< highlight batch >}}
$ squeue --start --user lypeng
JOBID PARTITION NAME USER ST START_TIME NODES NODELIST(REASON)
5822 batch Starace lypeng PD 2013-06-08T00:05:09 3 (Priority)
5823 batch Starace lypeng PD 2013-06-08T00:07:39 3 (Priority)
5824 batch Starace lypeng PD 2013-06-08T00:09:09 3 (Priority)
5825 batch Starace lypeng PD 2013-06-08T00:12:09 3 (Priority)
5826 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5827 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5828 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5829 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority)
5830 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority)
5831 batch Starace lypeng PD 2013-06-08T00:14:09 3 (Priority)
5832 batch Starace lypeng PD N/A 3 (Priority)
{{< /highlight >}}
The output shows the expected start time of the jobs, as well as the
reason that the jobs are currently idle (in this case, low priority of
the user due to running numerous jobs already).
#### Removing the Job
Removing the job is done with the `scancel` command. The only argument
to the `scancel` command is the job id. For the job above, the command
is:
{{< highlight batch >}}
$ scancel 24605
{{< /highlight >}}
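`scancel` also accepts selectors other than a job id; for example, the standard `-u` option cancels all of your own queued and running jobs at once:

{{< highlight batch >}}
$ scancel -u <username>
{{< /highlight >}}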
### Next Steps
{{% children %}}
---
title: "Redirector"
---
<script>
// Redirector for hcc-docs links
// Search for URL parameter 'q' and redirect to top match
var lunrIndex;
var pagesIndex;
function getQueryVariable(variable) {
    var query = window.location.search.substring(1);
    var vars = query.split('&');
    for (var i = 0; i < vars.length; i++) {
        var pair = vars[i].split('=');
        if (pair[0] === variable) {
            return decodeURIComponent(pair[1].replace(/\+/g, '%20'));
        }
    }
}
// Initialize lunrjs using our generated index file
function initLunr() {
    // First retrieve the index file
    return $.getJSON(baseurl + "/index.json")
        .done(function(index) {
            pagesIndex = index;
            // Set up lunrjs by declaring the fields we use
            // Also provide their boost level for the ranking
            lunrIndex = new lunr.Index();
            lunrIndex.ref("uri");
            lunrIndex.field('title', {
                boost: 15
            });
            lunrIndex.field('tags', {
                boost: 10
            });
            lunrIndex.field("content", {
                boost: 5
            });
            // Feed lunr with each file and let lunr actually index them
            pagesIndex.forEach(function(page) {
                lunrIndex.add(page);
            });
            lunrIndex.pipeline.remove(lunrIndex.stemmer);
        })
        .fail(function(jqxhr, textStatus, error) {
            var err = textStatus + ", " + error;
            console.error("Error getting Hugo index file:", err);
        });
}
function search(query) {
    // Find the item in our index corresponding to the lunr one to have more info
    return lunrIndex.search(query).map(function(result) {
        return pagesIndex.filter(function(page) {
            return page.uri === result.ref;
        })[0];
    });
}
initLunr().then(function() {
    var searchTerm = getQueryVariable('q');
    if (!searchTerm) {
        // No 'q' parameter present; fall back to the docs home page
        window.location = baseurl;
        return;
    }
    // Replace non-word chars with space. lunr doesn't like quotes.
    searchTerm = searchTerm.replace(/[\W_]+/g, " ");
    var results = search(searchTerm);
    if (!results.length) {
        window.location = baseurl;
    } else {
        window.location = results[0].uri;
    }
});
</script>
ignore: "facilities.md"
hide: true
---
title: "2012"
summary: "Historical listing of various HCC events for the year 2012."
---
Historical listing of HCC Events
----------
{{ children('Events/2012') }}
+++
title = "Nebraska Supercomputing Symposium '12"
description = "Nebraska Supercomputing Symposium '12."
+++
{{< figure src="/images/2012-11-07.jpg" width="300" class="img-border">}}
{{< figure src="/images/2012-11-07hdr.jpg" width="300" class="img-border">}}
---
title: Nebraska Supercomputing Symposium '12
summary: "Nebraska Supercomputing Symposium '12."
---
<img src="/images/2012-11-07.jpg" width="300" class="img-border">
<img src="/images/2012-11-07hdr.jpg" width="300" class="img-border">
{{< figure src="/images/PIVOT_Logo.png" width="100" class="img-border">}}
<img src="/images/PIVOT_Logo.png" width="100" class="img-border">
Talks
-----
- [Intro slides](https://unl.box.com/s/4gctct2jgu2e78y7efld7w39xzbd1j4d)
......
+++
title = "Supercomputing Mini Workshop 2012"
description = "Supercomputing Mini Workshop 2012."
+++
---
title: Supercomputing Mini Workshop 2012
summary: "Supercomputing Mini Workshop 2012."
---
Presentation
============
......
---
title: "2013"
summary: "Historical listing of various HCC events for the year 2013."
---
Historical listing of HCC Events
----------
{{ children('Events/2013') }}
+++
title = "HCC OSG Workshop, June 2013"
description = "HCC OSG Workshop, June 2013."
+++
---
title: HCC OSG Workshop, June 2013
summary: "HCC OSG Workshop, June 2013."
---
Location: Unity Room/212 in the Jackie Gaughan Multicultural Center
......
+++
title = "Supercomputing Mini Workshop 2013"
description = "Supercomputing Mini Workshop 2013."
+++
---
title: Supercomputing Mini Workshop 2013
summary: "Supercomputing Mini Workshop 2013."
---
Supercomputing Mini Workshop - February 27, 2013
================================================
{{% notice info %}}
The materials found on this page were applicable at the time of the event. When referencing these,
please check current documentation to ensure the resources are still available. A list of currently
available resources can be found on the
[Submitting Jobs page](https://hcc.unl.edu/docs/#resource-capabilities).
{{% /notice %}}
In this hour-long mini workshop, you will obtain hands-on experience
performing a simple calculation (summing from 1 to 16) with a
......@@ -22,7 +22,7 @@ state-of-the-art supercomputing resources. 
**Logging In**
``` syntaxhighlighter-pre
ssh tusker.unl.edu -l demoXXXX
ssh crane.unl.edu -l demoXXXX
```
**[Cygwin Link](http://cygwin.com/install.html)**
......@@ -49,7 +49,7 @@ two folders, `serial\_f90` and `parallel\_f90`, in this folder
``` syntaxhighlighter-pre
$ ls
$ scp -r ./demo_code <username>@tusker.unl.edu:/work/demo/<username>
$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username>
<enter password>
```
......@@ -59,7 +59,7 @@ Serial Job
First, you need to login to the cluster
``` syntaxhighlighter-pre
$ ssh <username>@tusker.unl.edu
$ ssh <username>@crane.unl.edu
<enter password>
```
......@@ -133,14 +133,14 @@ code.  It uses MPI for communication between the parallel processes.
$ mpif90 fortran_mpi.f90 -o fortran_mpi.x
```
Next, we will submit the MPI application to the Tusker cluster scheduler
Next, we will submit the MPI application to the cluster scheduler
using the file `submit_tusker.mpi`.
``` syntaxhighlighter-pre
$ qsub submit_tusker.mpi
```
The Tusker cluster scheduler will pick machines (possibly several,
The cluster scheduler will pick machines (possibly several,
depending on availability) to run the parallel MPI application. You can
check the status of the job the same way you did with the Serial job:
......
+++
title = "August 2014 UNO Workshop"
description = "August 2014 UNO Workshop."
+++
---
title: August 2014 UNO Workshop
summary: "August 2014 UNO Workshop."
---
When: August 29, 2014
......
---
title: "2014"
summary: "Historical listing of various HCC events for the year 2014."
---
Historical listing of HCC Events
----------
{{ children('Events/2014') }}
+++
title = "July 2014 Bioinformatics Workshop"
description = "July 2014 Bioinformatics Workshop"
+++
---
title: July 2014 Bioinformatics Workshop
summary: "July 2014 Bioinformatics Workshop"
---
Monday - Wednesday, July 28 -30 (lunch will be provided)
......
+++
title = "HCC Bioinformatics Workshop, June 2013"
description = "HCC Bioinformatics Workshop, June 2013."
+++
---
title: HCC Bioinformatics Workshop, June 2013
summary: "HCC Bioinformatics Workshop, June 2013."
---
Thursday, June 26 | 10am-4pm (lunch will be provided)
......@@ -23,16 +23,16 @@ Filley Hall, Room 302
[Click to download ppt of
presentation.](https://unl.box.com/s/3raqcvv00x0m4qbn7ubukitu8b7q9e5r)
presentation.](https://uofnelincoln.sharepoint.com/:p:/s/UNL-HollandComputingCenter/EQ8ETtcUhFtAsyk9XT40QQ0BAWTj1m-WAnN2F4r9H_bzRQ?e=8wyeDr)
[Click to download PDF of
presentation.](https://unl.box.com/s/bkozojpnt8hcwnnsc6eww1muy4fv9wb3)
presentation.](https://uofnelincoln.sharepoint.com/:b:/s/UNL-HollandComputingCenter/EfMi5y35e7tDtRiWMM_vQWABRyoub4njO7AnE0qupD8k6Q?e=fNu2Gs)
[Click to download BLAST presentation](https://unl.box.com/s/e8juelxpeomhehhm2zg57102s9aeqhuy)
[Click to download BLAST presentation](https://uofnelincoln.sharepoint.com/:b:/s/UNL-HollandComputingCenter/EYV5vp-DdLtKjLMqjTpwjxIBtfRe2QAvpiJU-LzPhnriLw?e=LPzhST)
[Click to download example query
file](https://unl.box.com/s/nk0bybekzqon368pxbfd77xykbuk1ywb)
file](https://uofnelincoln.sharepoint.com/:u:/s/UNL-HollandComputingCenter/EeazfOASF8VFsmjappZeqv8BBpxP0GuSdCVB51jBEbkZrw?e=4C8aL5)
+++
title = "October 2014 HCC Workshop - CoE"
description = "October 2014 HCC Workshop - CoE."
+++
---
title: October 2014 HCC Workshop - CoE
summary: "October 2014 HCC Workshop - CoE."
---
When: October 10th, 2014 3:00 - 5:00
......