+++
title = "Submitting HTCondor Jobs"
description = "How to submit HTCondor Jobs on HCC resources."
+++
If you require features of [HTCondor](http://research.cs.wisc.edu/htcondor/),
such as DAGMan or Pegasus, you can
submit jobs through HTCondor's PBS integration. This can
be done by adding `grid_resource = pbs` to the submit file. An example
submission script is below:
{{% panel theme="info" header="submit.condor" %}}
{{< highlight batch >}}
universe = grid
grid_resource = pbs
executable = test.sh
output = stuff.out
error = stuff.err
log = stuff.log
batch_queue = guest
queue
{{< /highlight >}}
{{% /panel %}}
The above script will translate the condor submit file into a SLURM
submit file, and execute the `test.sh` executable on a worker node.
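The contents of `test.sh` are not shown here; a minimal sketch of such an executable (the name and contents are placeholders for your own workload) could be:
{{% panel theme="info" header="test.sh (illustrative sketch)" %}}
{{< highlight bash >}}
#!/bin/bash
# Report which worker node the job landed on
hostname
# Your actual processing would go here
{{< /highlight >}}
{{% /panel %}}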
{{% notice warning %}}
The `/home` directories are read-only on the worker nodes. You
have to submit your jobs from the `/work` directory just as you would in
SLURM.
{{% /notice %}}
### Using Pegasus
If you are using [Pegasus](http://pegasus.isi.edu),
instructions on using the *glite* interface (as shown above) are
available in the
[User Guide](http://pegasus.isi.edu/wms/docs/latest/execution_environments.php#glite).
+++
title = "The Open Science Grid"
description = "How to utilize the Open Science Grid (OSG)."
weight = "40"
+++
If you find that you are not getting access to the volume of computing
resources needed for your research through HCC, you might also consider
submitting your jobs to the Open Science Grid (OSG).
### What is the Open Science Grid?
The [Open Science Grid](http://opensciencegrid.org) advances
science through open distributed computing. The OSG is a
multi-disciplinary partnership to federate local, regional, community
and national cyber infrastructures to meet the needs of research and
academic communities at all scales. HCC participates in the OSG as a
resource provider and a resource user. We provide HCC users with a
gateway to running jobs on the OSG.
The map below shows the Open Science Grid sites located across the U.S.
{{< figure src="/images/17044917.png" >}}
This help document is divided into four sections, namely:
- [Characteristics of an OSG friendly job]({{< relref "characteristics_of_an_osg_friendly_job" >}})
- [How to submit an OSG Job with HTCondor]({{< relref "how_to_submit_an_osg_job_with_htcondor" >}})
- [A simple example of submitting an HTCondor job]({{< relref "a_simple_example_of_submitting_an_htcondor_job" >}})
- [Using Distributed Environment Modules on OSG]({{< relref "using_distributed_environment_modules_on_osg" >}})
+++
title = "A simple example of submitting an HTCondor job"
description = "A simple example of submitting an HTCondor job."
+++
This page describes a complete example of submitting an HTCondor job.
1. SSH to Tusker or Crane
{{% panel theme="info" header="ssh command" %}}
[apple@localhost]$ ssh apple@crane.unl.edu
{{% /panel %}}
{{% panel theme="info" header="output" %}}
[apple@login.crane~]$
{{% /panel %}}
2. Write a simple Python program in a file "hello.py" that we wish to
run using HTCondor
{{% panel theme="info" header="edit a python code named 'hello.py'" %}}
[apple@login.crane ~]$ vim hello.py
{{% /panel %}}
Then in the edit window, please input the code below:
{{% panel theme="info" header="hello.py" %}}
#!/usr/bin/env python
import sys
import time
i = 1
while i <= 6:
    print i
    i += 1
    time.sleep(1)
print 2**8
print "hello world received argument = " + sys.argv[1]
{{% /panel %}}
This program will print 1 through 6 on stdout, then print the number
256, and finally print `hello world received argument = <command line
argument passed to hello.py>`.
3. Write an HTCondor submit script named "hello.submit"
{{% panel theme="info" header="hello.submit" %}}
Universe = vanilla
Executable = hello.py
Output = OUTPUT/hello.out.$(Cluster).$(Process).txt
Error = OUTPUT/hello.error.$(Cluster).$(Process).txt
Log = OUTPUT/hello.log.$(Cluster).$(Process).txt
notification = Never
Arguments = $(Process)
PeriodicRelease = ((JobStatus==5) && (CurrentTime - EnteredCurrentStatus) > 30)
OnExitRemove = (ExitStatus == 0)
Queue 4
{{% /panel %}}
4. Create an OUTPUT directory to receive all output files
generated by your job (the OUTPUT folder is used in the submit script
above)
{{% panel theme="info" header="create output directory" %}}
[apple@login.crane ~]$ mkdir OUTPUT
{{% /panel %}}
5. Submit your job
{{% panel theme="info" header="condor_submit" %}}
[apple@login.crane ~]$ condor_submit hello.submit
{{% /panel %}}
{{% panel theme="info" header="Output of submit" %}}
Submitting job(s)
....
4 job(s) submitted to cluster 1013054.
{{% /panel %}}
6. Check the status of your jobs with `condor_q`
{{% panel theme="info" header="condor_q" %}}
[apple@login.crane ~]$ condor_q
{{% /panel %}}
{{% panel theme="info" header="Output of `condor_q`" %}}
-- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?...
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
720587.0 logan 12/15 10:48 33+14:41:17 H 0 0.0 continuous.cron 20
720588.0 logan 12/15 10:48 200+02:40:08 H 0 0.0 checkprogress.cron
1012864.0 jthiltge 2/15 16:48 0+00:00:00 H 0 0.0 test.sh
1013054.0 jennyshao 4/3 17:58 0+00:00:00 R 0 0.0 hello.py 0
1013054.1 jennyshao 4/3 17:58 0+00:00:00 R 0 0.0 hello.py 1
1013054.2 jennyshao 4/3 17:58 0+00:00:00 I 0 0.0 hello.py 2
1013054.3 jennyshao 4/3 17:58 0+00:00:00 I 0 0.0 hello.py 3
7 jobs; 0 completed, 0 removed, 0 idle, 4 running, 3 held, 0 suspended
{{% /panel %}}
Listed below are the three job statuses observed in the
above output
| Symbol | Representation |
|--------|------------------|
| H | Held |
| R | Running |
| I | Idle and waiting |
7. Explanation of `$(Cluster)` and `$(Process)` in the HTCondor script
`$(Cluster)` and `$(Process)` are variables available in the
variable namespace of the HTCondor script. `$(Cluster)` is the
prefix of your job ID, and `$(Process)` ranges from `0` to one less
than the number of jobs specified with `Queue`. If your submission is
a single job, then `$(Cluster)` = `<job ID>`; otherwise, each job ID is
the combination of `$(Cluster)` and `$(Process)`.
In this example, `$(Cluster)`="1013054" and `$(Process)` varies from "0"
to "3" for the above HTCondor script.
In the majority of cases these variables are used to modify
the behavior of each individual task of the HTCondor submission, for
example by varying the input/output files or parameters of the
program. In this example we simply pass `$(Process)` as the
argument, received as `sys.argv[1]` in `hello.py`.
The lines of interest for this discussion from the HTCondor submit
script "hello.submit" are listed below in the code section:
{{% panel theme="info" header="for `$(Process)`" %}}
Output = hello.out.$(Cluster).$(Process).txt
Arguments = $(Process)
Queue 4
{{% /panel %}}
The line of interest for this discussion from "hello.py" is
listed in the code section below:
{{% panel theme="info" header="for `$(Process)`" %}}
print "hello world received argument = " +sys.argv[1]
{{% /panel %}}
8. Viewing the results of your job
After your job has completed, you may use a Linux command such as `cat`
or `vim` to view the job output.
For example, in the file `hello.out.1013054.2.txt`, "1013054" is
`$(Cluster)` and "2" is `$(Process)`. The output looks like:
{{% panel theme="info" header="example of one output file `hello.out.1013054.2.txt`" %}}
1
2
3
4
5
6
256
hello world received argument = 2
{{% /panel %}}
9. Please see the link below for one more example:
http://research.cs.wisc.edu/htcondor/tutorials/intl-grid-school-3/submit_first.html
Next: [Using Distributed Environment Modules on OSG]({{< relref "using_distributed_environment_modules_on_osg" >}})
+++
title = "How to submit an OSG job with HTCondor"
description = "How to submit an OSG job with HTCondor"
+++
{{% notice info%}}Jobs can be submitted to the OSG from Crane or Tusker, so
there is no need to log on to a different submit host or get a grid
certificate!
{{% /notice %}}
### What is HTCondor?
The [HTCondor](http://research.cs.wisc.edu/htcondor)
project provides software to schedule individual applications and
workflows, and for sites to manage resources. It is designed to enable
High Throughput Computing (HTC) on large collections of distributed
resources for users and serves as the job scheduler used on the OSG.
Jobs are submitted from either the Crane or Tusker login nodes to the
OSG using an HTCondor submission script. For those who are used to
submitting jobs with SLURM, there are a few key differences to be aware
of:
### When using HTCondor
- All files (scripts, code, executables, libraries, etc) that are
needed by the job are transferred to the remote compute site when
the job is scheduled. Therefore, all of the files required by the
job must be specified in the HTCondor submit script. Paths can be
absolute or relative to the local directory from which the job is
submitted. The main executable (specified on the `Executable` line
of the submit script) is transferred automatically with the job.
All other files need to be listed on the `transfer_input_files`
line (see example below).
- All files that are created by
the job on the remote host will be transferred automatically back to
the submit host when the job has completed. This includes
temporary/scratch and intermediate files that are not removed by
your job. If you do not want to keep these files, clean up the work
space on the remote host by removing them before the job
exits (this can be done using a wrapper script, for example; see the
sketch after this list).
Specific output file names can be specified with the
`transfer_output_files` option. If these files do
not exist on the remote
host when the job exits, then the job will not complete successfully
(it will be placed in the *held* state).
- HTCondor scripts can queue
(submit) as many jobs as you like. All jobs queued from a single
submit script will be identical except for the `Arguments` used.
For example, a single submit script can queue 5 jobs with one set of
specified arguments, and 1 more job with a second set of
arguments. By default, a `Queue` statement that is not followed by a number
will submit 1 job.
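As a minimal sketch of the cleanup approach mentioned above (the scratch file name is hypothetical), a wrapper script can delete temporary files before the job exits:
{{% panel theme="info" header="Example cleanup wrapper (sketch)" %}}
{{< highlight bash >}}
#!/bin/bash
# Run the real application; assume it writes scratch.tmp as a side effect
./a.out "$@"
status=$?
# Remove scratch files so they are not transferred back to the submit host
rm -f scratch.tmp
exit $status
{{< /highlight >}}
{{% /panel %}}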
For more information and advanced usage, see the
[HTCondor Manual](http://research.cs.wisc.edu/htcondor/manual/v8.3/index.html).
### Creating an HTCondor Script
HTCondor, much like SLURM, needs a script that tells it how to do what the
user wants. The example below is a basic script, in a file named
'applejob.txt', that can be used to handle most jobs submitted to
HTCondor.
{{% panel theme="info" header="Example of a HTCondor script" %}}
{{< highlight batch >}}
#with executable, stdin, stderr and log
Universe = vanilla
Executable = a.out
Arguments = file_name 12
Output = a.out.out
Error = a.out.err
Log = a.out.log
Queue
{{< /highlight >}}
{{% /panel %}}
The table below explains the various attributes/keywords used in the above script.
| Attribute/Keyword | Explanation |
| ----------------- | ----------------------------------------------------------------------------------------- |
| # | Lines starting with '#' are considered comments by HTCondor. |
| Universe | is the way HTCondor manages different ways it can run, or what is called in the HTCondor documentation a runtime environment. The vanilla universe is where most jobs should be run. |
| Executable | is the name of the executable you want to run on HTCondor. |
| Arguments | are the command line arguments for your program. For example, if one were to run `ls -l /` on HTCondor, the Executable would be `ls` and the Arguments would be `-l /`. |
| Output | is the file where the information printed to stdout will be sent. |
| Error | is the file where the information printed to stderr will be sent. |
| Log | is the file where information about your HTCondor job will be sent. Information like if the job is running, if it was halted or, if running in the standard universe, if the file was check-pointed or moved. |
| Queue | is the command to send the job to HTCondor's scheduler. |
Suppose you would like to submit a job, e.g. a Monte-Carlo simulation,
where the same program needs to be run several times with the same
parameters. The script above can be used with the following modification:
give the `Queue` command the number of times the job must
be run (and hence queued in HTCondor). Thus if the `Queue` command is
changed to `Queue 5`, a.out will be run 5 times with the exact same
parameters, as shown in the sketch below.
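A minimal sketch of that modification, reusing the script above (`$(Process)` is added to the file names here so the 5 jobs do not overwrite each other's output):
{{% panel theme="info" header="applejob.txt with `Queue 5` (sketch)" %}}
{{< highlight batch >}}
# Queue the same job 5 times with identical arguments
Universe = vanilla
Executable = a.out
Arguments = file_name 12
Output = a.out.$(Process).out
Error = a.out.$(Process).err
Log = a.out.$(Process).log
Queue 5
{{< /highlight >}}
{{% /panel %}}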
In another scenario, if you would like to submit the same job but with
different parameters, HTCondor accepts submit files with multiple `Queue`
statements. Only the parameters that need to change should be
changed in the HTCondor script before each `Queue` statement.
Please see "A simple example" in the next chapter for details on using
`$(Process)`.
{{% panel theme="info" header="Another Example of a HTCondor script" %}}
{{< highlight batch >}}
#with executable, stdin, stderr and log
#and multiple Argument parameters
Universe = vanilla
Executable = a.out
Arguments = file_name 10
Output = a.out.$(Process).out
Error = a.out.$(Process).err
Log = a.out.$(Process).log
Queue
Arguments = file_name 20
Queue
Arguments = file_name 30
Queue
{{< /highlight >}}
{{% /panel %}}
### How to Submit and View Your job
The steps below describe how to submit a job and other important job
management tasks that you may need in order to monitor and/or control
the submitted job:
1. How to submit a job to OSG - assuming that you saved your HTCondor
script in a file named applejob.txt
{{< highlight bash >}}[apple@login.crane ~] $ condor_submit applejob.txt{{< /highlight >}}
You will see the following output after submitting the job
{{% panel theme="info" header="Example of condor_submit" %}}
Submitting job(s)
......
6 job(s) submitted to cluster 1013038
{{% /panel %}}
2. How to view your job status - to view the job status of your
submitted jobs use the following shell command
*Please note that by providing a user name as an argument to the
`condor_q` command you can limit the list of submitted jobs to the
ones that are owned by the named user*
{{< highlight bash >}}[apple@login.crane ~] $ condor_q apple{{< /highlight >}}
The code section below shows a typical output. You may notice that
the column ST represents the status of the job (H: Held and I: Idle
or waiting)
{{% panel theme="info" header="Example of condor_q" %}}
-- Schedd: login.crane.hcc.unl.edu : <129.93.227.113:9619?...
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
1013034.4 apple 3/26 16:34 0+00:21:00 H 0 0.0 sjrun.py INPUT/INP
1013038.0 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP
1013038.1 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP
1013038.2 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP
1013038.3 apple 4/3 11:34 0+00:00:00 I 0 0.0 sjrun.py INPUT/INP
...
16 jobs; 0 completed, 0 removed, 12 idle, 0 running, 4 held, 0 suspended
{{% /panel %}}
3. How to release a job - in a few cases a job may get held for
reasons such as authentication failure or other non-fatal errors. In
those cases you may use the shell command below to release the job
from the held status so that it can be rescheduled by HTCondor.
*Release one job:*
{{< highlight bash >}}[apple@login.crane ~] $ condor_release 1013034.4{{< /highlight >}}
*Release all jobs of a user apple:*
{{< highlight bash >}}[apple@login.crane ~] $ condor_release apple{{< /highlight >}}
4. How to delete a submitted job - if you want to delete a submitted
job you may use the shell commands listed below
*Delete one job:*
{{< highlight bash >}}[apple@login.crane ~] $ condor_rm 1013034.4{{< /highlight >}}
*Delete all jobs of a user apple:*
{{< highlight bash >}}[apple@login.crane ~] $ condor_rm apple{{< /highlight >}}
5. How to get help for an HTCondor command
You can use `man` to get a detailed explanation of an HTCondor command
{{% panel theme="info" header="Example of help of condor_q" %}}
[apple@glidein ~]man condor_q
{{% /panel %}}
{{% panel theme="info" header="Output of `man condor_q`" %}}
just-man-pages/condor_q(1) just-man-pages/condor_q(1)
Name
condor_q Display information about jobs in queue
Synopsis
condor_q [ -help ]
condor_q [ -debug ] [ -global ] [ -submitter submitter ] [ -name name ] [ -pool centralmanagerhost-
name[:portnumber] ] [ -analyze ] [ -run ] [ -hold ] [ -globus ] [ -goodput ] [ -io ] [ -dag ] [ -long ]
[ -xml ] [ -attributes Attr1 [,Attr2 ... ] ] [ -format fmt attr ] [ -autoformat[:tn,lVh] attr1 [attr2
...] ] [ -cputime ] [ -currentrun ] [ -avgqueuetime ] [ -jobads file ] [ -machineads file ] [ -stream-
results ] [ -wide ] [ {cluster | cluster.process | owner | -constraint expression ... } ]
Description
condor_q displays information about jobs in the Condor job queue. By default, condor_q queries the local
job queue but this behavior may be modified by specifying:
* the -global option, which queries all job queues in the pool
* a schedd name with the -name option, which causes the queue of the named schedd to be queried
{{% /panel %}}
Next: [A simple example of submitting an HTCondor job]({{< relref "a_simple_example_of_submitting_an_htcondor_job" >}})
+++
title = "Using Distributed Environment Modules on OSG"
description = "Using Distributed Environment Modules on OSG"
+++
Many commonly used software packages and libraries are provided on the
OSG through the `module` command. OSG modules are made available
through the OSG Application Software Installation Service (OASIS). The
set of modules provided on OSG can differ from those on the HCC
clusters. To switch to the OSG modules environment on an HCC machine:
{{< highlight bash >}}
[apple@login.crane~]$ source osg_oasis_init
{{< /highlight >}}
Use the `module avail` command to see what software and libraries are
available:
{{< highlight bash >}}
[apple@login.crane~]$ module avail
------------------- /cvmfs/oasis.opensciencegrid.org/osg/modules/modulefiles/Core --------------------
abyss/2.0.2 gnome_libs/1.0 pegasus/4.7.1
ant/1.9.4 gnuplot/4.6.5 pegasus/4.7.3
ANTS/1.9.4 graphviz/2.38.0 pegasus/4.7.4 (D)
ANTS/2.1.0 (D) grass/6.4.4 phenix/1.10
apr/1.5.1 gromacs/4.6.5 poppler/0.24.1 (D)
aprutil/1.5.3 gromacs/5.0.0 (D) poppler/0.32
arc-lite/2015 gromacs/5.0.5.cuda povray/3.7
atlas/3.10.1 gromacs/5.0.5 proj/4.9.1
atlas/3.10.2 (D) gromacs/5.1.2-cuda proot/2014
autodock/4.2.6 gsl/1.16 protobuf/2.5
{{< /highlight >}}
Loading modules is done with the `module load` command:
{{< highlight bash >}}
[apple@login.crane~]$ module load python/2.7
{{< /highlight >}}
There are two things required in order to use modules in your HTCondor
job.
1. Create a *wrapper script* for the job. This script will be the
executable for your job and will load the module before running the
main application.
2. Include the following requirements in the HTCondor submission
script:
{{< highlight batch >}}Requirements = (HAS_MODULES =?= TRUE){{< /highlight >}}
or
{{< highlight batch >}}Requirements = [Other requirements ] && (HAS_MODULES =?= TRUE){{< /highlight >}}
### A simple example using modules on OSG
The following example will demonstrate how to use modules on OSG with an
R script that implements a Monte-Carlo estimation of Pi (`mcpi.R`).
First, create a file called `mcpi.R`:
{{% panel theme="info" header="mcpi.R" %}}{{< highlight R >}}
montecarloPi <- function(trials) {
    count = 0
    for (i in 1:trials) {
        if ((runif(1,0,1)^2 + runif(1,0,1)^2) < 1) {
            count = count + 1
        }
    }
    return((count*4)/trials)
}
montecarloPi(1000)
{{< /highlight >}}{{% /panel %}}
Next, create a wrapper script called `R-wrapper.sh` to load the required
modules (`R` and `libgfortran`), and execute the R script:
{{% panel theme="info" header="R-wrapper.sh" %}}{{< highlight bash >}}
#!/bin/bash
EXPECTED_ARGS=1
if [ $# -ne $EXPECTED_ARGS ]; then
    echo "Usage: R-wrapper.sh file.R"
    exit 1
else
    module load R
    module load libgfortran
    Rscript $1
fi
{{< /highlight >}}{{% /panel %}}
This script takes the name of the R script (`mcpi.R`) as its argument
and executes it in batch mode (using the `Rscript` command) after
loading the `R` and `libgfortran` modules.
Make the script executable:
{{< highlight bash >}}[apple@login.crane~]$ chmod a+x R-wrapper.sh{{< /highlight >}}
Finally, create the HTCondor submit script, `R.submit`:
{{% panel theme="info" header="R.submit" %}}{{< highlight batch >}}
universe = vanilla
log = mcpi.log.$(Cluster).$(Process)
error = mcpi.err.$(Cluster).$(Process)
output = mcpi.out.$(Cluster).$(Process)
executable = R-wrapper.sh
transfer_input_files = mcpi.R
arguments = mcpi.R
Requirements = (HAS_MODULES =?= TRUE)
queue 100
{{< /highlight >}}{{% /panel %}}
This script will queue 100 identical jobs to estimate the value of Pi.
Notice that the wrapper script is transferred automatically with the
job because it is listed as the executable. However, the R script
(`mcpi.R`) must be listed after `transfer_input_files` in order to be
transferred with the job.
Submit the jobs with the `condor_submit` command:
{{< highlight bash >}}[apple@login.crane~]$ condor_submit R.submit{{< /highlight >}}
Check on the status of your jobs with `condor_q`:
{{< highlight bash >}}[apple@login.crane~]$ condor_q{{< /highlight >}}
When your jobs have completed, find the average estimate for Pi from all
100 jobs:
{{< highlight bash >}}
[apple@login.crane~]$ grep "\[1\]" mcpi.out.* | awk '{sum += $2} END { print "Average =", sum/NR}'
Average = 3.13821
{{< /highlight >}}
+++
title = "Quickstarts"
weight = "10"
+++
The quick start guides require that you already have an HCC account. You
can get an HCC account by applying on the
[HCC website](http://hcc.unl.edu/newusers/).
{{% children %}}
+++
title = "How to Connect"
description = "What is a cluster and what is HPC"
weight = "9"
+++
High-Performance Computing is the use of groups of computers to solve computations a user or group would not be able to solve in a reasonable time-frame on their own desktop or laptop. This is often achieved by splitting one large job amongst numerous cores or 'workers'. This is similar to how a skyscraper is built by numerous individuals rather than a single person. Many fields take advantage of HPC including bioinformatics, chemistry, materials engineering, and newer fields such as educational psychology and philosophy.
{{< figure src="/images/cluster.png" height="450" >}}
HPC clusters consist of four primary parts, the login node, management node, workers, and a central storage array. All of these parts are bound together with a scheduler such as HTCondor or SLURM.
<br/><br/>
#### Login Node:
Users will automatically land on the login node when they log in to the clusters. You will [submit jobs]({{< ref "/guides/submitting_jobs" >}}) using one of the schedulers and pull the results of your jobs. Jobs run directly on the login node will be stopped so others can use the login node to submit jobs.
<br/><br/>
#### Management Node:
The management node does as its name suggests: it manages the cluster and provides a central point for managing the rest of the systems.
<br/><br/>
#### Worker Nodes:
The worker nodes are what run and process the jobs submitted through the schedulers. The schedulers make efficient use of the cluster by packing in as many jobs as the requested resources allow across the nodes. They also provide fair-use computing by ensuring that no single user or group takes over the entire cluster at once, allowing others to use the clusters.
<br/><br/>
#### Central Storage Array:
The central storage array allows all of the nodes within the cluster to have access to the same files without needing to transfer them around. HCC has three arrays mounted on the clusters with more details [here]({{< ref "/guides/handling_data" >}}).
+++
title = "Basic Linux commands"
description = "Simple commands you'll want to know"
weight = "32"
+++
Basic commands
--------------
###### [[Jump to the Video Tutorial]](#tutorial-video)
Holland clusters all run on the Linux operating system, similarly to how
your personal computer might run Windows or Mac OS. However, unlike
Windows or Mac OS, our systems do not utilize a graphical user interface
where you can use a mouse to navigate and initiate commands. Instead, we
use a command line interface, where the user types in commands which are
then processed and text output is displayed on the screen. The default
shell used is Bash. Bash may seem complicated to learn at first, but
with just a small handful of commands, you can do anything that you
would usually do with a mouse. **In fact, once people become proficient
in Bash, many of them prefer it over graphical interfaces due to its
versatility and performance.**
Below, we have compiled a list of common commands and usage examples.
For more information, check out one of these references:
- [Software Carpentry’s "Introduction to the Bash Shell" Lesson](https://eharstad.github.io/shell-novice) -
a great walkthrough of the basics of Bash designed for novice users
- [Linux Users Guide](http://www.comptechdoc.org/os/linux/usersguide) -
detailed information about the Linux command line and how to utilize
it
- [Linux Command Line Cheat Sheet](https://www.cheatography.com/davechild/cheat-sheets/linux-command-line) -
a quick reference for Linux commands. Offers a PDF version that you
can print out.
Linux Commands Reference List:
------------------------------
<table>
<tbody>
<tr class="odd">
<td>ls</td>
<td>list: Lists the files and directories located in the current directory</td>
<td><ul>
<li>ls</li>
<li>ls -a
<ul>
<li>shows all the files in the directory, including hidden ones</li>
</ul></li>
<li>ls -l
<ul>
<li>shows contents in a list format including information such as file size, file permissions and date the file was modified</li>
</ul></li>
<li>ls *.txt
<ul>
<li>shows all files in the current directory which end with .txt</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>cd</td>
<td>change directory: this allows users to navigate in or out of file directories</td>
<td><ul>
<li>cd &lt;folder path&gt;</li>
<li>cd folder_name
<ul>
<li>navigates into directory &quot;folder_name&quot; located in the current directory</li>
</ul></li>
<li>cd ..
<ul>
<li>navigates out of a directory and into the parent directory</li>
</ul></li>
<li>cd $HOME (or $WORK)
<ul>
<li>navigates to a user's home (or work) directory</li>
</ul></li>
</ul></td>
</tr>
<tr class="odd">
<td>mv</td>
<td>move: used to move a file or directory to another location</td>
<td><ul>
<li>mv &lt;current file(s)&gt; &lt;target file(s)&gt;</li>
<li>mv * ../
<ul>
<li>moves all files from the current directory into the parent directory</li>
</ul></li>
<li>mv old_filename new_filename
<ul>
<li>renames the file &quot;old_filename&quot; to &quot;new_filename&quot;</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>cp</td>
<td>copy: used to copy a file or directory to another location</td>
<td><ul>
<li>cp &lt;current file(s)&gt; &lt;target file(s)&gt;</li>
<li>cp * ../
<ul>
<li>copies all files in the current directory and puts the copies into the parent directory</li>
</ul></li>
<li>cp -r ./orig_folder ./new_folder<br />
<ul>
<li>copies all files and directories within orig_folder into new_folder (-r indicates this is a recursive copy, so all sub-directories and files within orig_folder will be included in new_folder)</li>
</ul></li>
</ul></td>
</tr>
<tr class="odd">
<td>man</td>
<td><p>manual: displays documentation for commands</p>
<p><strong>Note:</strong> Use up and down arrows to scroll through the text. To exit the manual display, press 'q'</p></td>
<td><ul>
<li>man &lt;command name&gt;</li>
<li>man ls
<ul>
<li>displays documentation for the ls command</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>mkdir</td>
<td>make directory: creates a directory with the specified name</td>
<td><ul>
<li>mkdir &lt;new_folder&gt;
<ul>
<li>creates the directory &quot;new_folder&quot; within the current directory</li>
</ul></li>
</ul></td>
</tr>
<tr class="odd">
<td>rmdir</td>
<td><p>remove directory: deletes a directory with the specified name</p>
<p><strong>Note:</strong> rmdir only works on empty directories</p></td>
<td><ul>
<li>rmdir &lt;folder_name&gt;
<ul>
<li>removes the directory &quot;folder_name&quot; if the directory is empty</li>
</ul></li>
<li>rmdir *
<ul>
<li>removes all empty directories within the current directory</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>rm</td>
<td>remove: deletes file or files with the specified name(s)</td>
<td><ul>
<li>rm &lt;file_name&gt;
<ul>
<li>deletes the file &quot;file_name&quot;</li>
</ul></li>
<li>rm *
<ul>
<li>deletes all files in the current directory</li>
</ul></li>
</ul></td>
</tr>
<tr class="odd">
<td><p>nano</p></td>
<td><p>nano text editor: opens the nano text editor</p>
<p><strong>Note:</strong> To access the menu options, ^ indicates the control (CTRL) key.</p></td>
<td><ul>
<li>nano
<ul>
<li>opens the text editor in a blank file</li>
</ul></li>
<li>nano &lt;file_name&gt;
<ul>
<li>opens the text editor with &quot;file_name&quot; open. If &quot;file_name&quot; does not exist, it will be created if the file is saved</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>clear</td>
<td>clear: clears the screen of all input/output</td>
<td><ul>
<li>clear</li>
</ul></td>
</tr>
<tr class="odd">
<td>less</td>
<td><p>less: opens an extended view of a file</p>
<p><strong>Note:</strong> Use up and down arrows to scroll through the text. To exit the extended view, press 'q'</p></td>
<td><ul>
<li>less &lt;file_name&gt;
<ul>
<li>opens an extended view of the file &quot;file_name&quot;</li>
</ul></li>
</ul></td>
</tr>
<tr class="even">
<td>cat</td>
<td>concatenate: sends file contents to standard output - used frequently with pipes</td>
<td><ul>
<li>cat &lt;file_name&gt;
<ul>
<li>prints the contents of the file &quot;file_name&quot;</li>
</ul></li>
<li>cat *.txt
<ul>
<li>prints the contents of all files in the current directory that end in &quot;.txt&quot;</li>
</ul></li>
</ul></td>
</tr>
</tbody>
</table>
Tutorial Video
--------------
{{< youtube B0VdKiHNjU4 >}}
+++
title = "For Mac/Linux Users"
description = "Quickstart Guide for Mac/Linux Users"
weight = "22"
+++
##### Use of Duo two-factor authentication is **required** to access HCC resources.
##### Please see [Setting up and Using Duo]({{< relref "setting_up_and_using_duo" >}}).
---
- [Access to HCC Supercomputers](#access-to-hcc-supercomputers)
- [File Transferring with HCC Supercomputers](#file-transferring-with-hcc-supercomputers)
  - [SCP](#using-the-scp-command)
  - [CyberDuck](#using-cyberduck)
- [Mac Tutorial Video](#mac-tutorial-video)
- [Linux Tutorial Video](#linux-tutorial-video)
This quick start will help you configure your personal computer to work
with the HCC supercomputers.
If you are running Windows, please use the quickstart [For Windows
Users]({{< relref "for_windows_users" >}}).
Access to HCC Supercomputers
-------------------------------
For Mac/Linux users, use the system program Terminal to access the
HCC supercomputers. In the Terminal prompt,
type `ssh <username>@tusker.unl.edu` and the corresponding password
to get access to the HCC cluster **Tusker**. Note that `<username>`
should be replaced by your HCC account username. If you do not have an
HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
or go to http://hcc.unl.edu/newusers.
To use the **Crane** cluster, replace tusker.unl.edu with crane.unl.edu.
{{< highlight bash >}}
$ ssh <username>@tusker.unl.edu
$ <password>
{{< /highlight >}}
File Transferring with HCC Supercomputers
-----------------------------------------
### Using the SCP command
For Mac/Linux users, file transferring between your personal computer
and the HCC supercomputers can be achieved through the command `scp`.
Here we use **Tusker** as an example. **The following commands should be
executed from your computer.**
**Uploading from local to remote**
{{< highlight bash >}}
$ scp -r ./<folder name> <username>@tusker.unl.edu:/work/<group name>/<username>
{{< /highlight >}}
The above command line transfers a folder from the current directory
(`./`) of your computer to the `$WORK` directory of the HCC
supercomputer, Tusker. Note that you need to replace `<group name>`
and `<username>` with your HCC group name and username.
**Downloading from remote to local**
{{< highlight bash >}}
$ scp -r <username>@tusker.unl.edu:/work/<group name>/<username>/<folder name> ./
{{< /highlight >}}
The above command line transfers a folder from the `$WORK` directory of
the HCC supercomputer, Tusker, to the current directory (`./`) of your
computer.
### Using Cyberduck
If you wish to use a GUI, be aware that not all programs will function
correctly with Duo two-factor authentication. Mac users are recommended
to use [Cyberduck](http://cyberduck.io). It is compatible with Duo, but a
few settings need to be changed.
Under **Preferences - General**, change the default protocol to SFTP:
{{< figure src="/images/7274497.png" height="450" >}}
Under **Preferences - Transfers**, reuse the browser connection for file
transfers. This will avoid the need to reenter your password for every
file transfer:
{{< figure src="/images/7274498.png" height="450" >}}
Finally, under **Preferences - SFTP**, set the file transfer method to
SCP:
{{< figure src="/images/7274499.png" height="450" >}}
To add an HCC machine, in the bookmarks pane click the "+" icon:
{{< figure src="/images/7274500.png" height="450" >}}
Ensure the type of connection is SFTP. Enter the hostname of the machine
you wish to connect to (tusker.unl.edu, crane.unl.edu) in the **Server**
field, and your HCC username in the **Username** field. The
**Nickname** field is arbitrary, so enter whatever you prefer.
{{< figure src="/images/7274501.png" height="450" >}}
After you add the bookmark, double-click it to connect.
{{< figure src="/images/7274505.png" height="450" >}}
Enter your HCC username and password in the dialog box that will appear
and click *Login*.
{{< figure src="/images/7274508.png" height="450" >}}
A second login dialogue will now appear. Notice the text has changed to
say Duo two-factor.
{{< figure src="/images/7274510.png" height="450" >}}
Clear the **Password** field in the dialogue. If you are using the Duo
Mobile app, enter '1' to have a push notification sent to your phone or
tablet. If you are using a Yubikey, ensure the cursor is active in the
**Password** field, and press the button on the Yubikey.
{{< figure src="/images/7274509.png" height="450" >}}
The login should complete and you can simply drag and drop files to or
from the window.
{{< figure src="/images/7274511.png" height="450" >}}
Mac Tutorial Video
------------------
{{< youtube ulfcmRGfqxU >}}
Linux Tutorial Video
--------------------
{{< youtube K0i3swpwtdc >}}
+++
title = "For Windows Users"
description = "Quickstart for Windows Users"
weight = "20"
+++
##### Use of Duo two-factor authentication is **required** to access HCC resources.
##### Please see [Setting up and Using Duo]({{< relref "setting_up_and_using_duo" >}}).
---
- [Access to HCC Supercomputers](#access-to-hcc-supercomputers)
  - [For Windows 10 Users](#windows-10)
  - [For Windows 7 and 8.1 Users](#windows-7-and-8-1)
- [File Transferring with HCC Supercomputers](#file-transferring-with-hcc-supercomputers)
  - [SCP - Command Line](#scp)
  - [WinSCP - GUI](#winscp)
- [Tutorial Video](#tutorial-video)
Access to HCC Supercomputers
-------------------------------
{{% notice info %}}
If you are on a Mac, please use the quickstart for [For Mac/Linux
Users]({{< relref "for_maclinux_users" >}}).
{{% /notice %}}
### Windows 10
For Windows 10 users, use the Command Prompt, accessed by entering `cmd` in the start menu, to access the
HCC supercomputers. In the Command Prompt,
type `ssh <username>@tusker.unl.edu` and the corresponding password
to get access to the HCC cluster **Tusker**. Note that `<username>`
should be replaced by your HCC account username. If you do not have an
HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
or go to http://hcc.unl.edu/newusers.
To use the **Crane** cluster, replace tusker.unl.edu with crane.unl.edu.
{{< highlight bash >}}
C:\> ssh <username>@tusker.unl.edu
C:\> <password>
{{< /highlight >}}
### Windows 7 and 8.1
This quick start will help you configure your personal computer to work
with the HCC supercomputers. Here we use two third-party applications,
**PuTTY** and **WinSCP**, for demonstration.
PuTTY: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
or [Direct Link](https://the.earth.li/~sgtatham/putty/latest/w32/putty.exe)
Here we use the HCC cluster **Tusker** for demonstration. To use the
**Crane** cluster, replace `tusker.unl.edu` with `crane.unl.edu`.
1. On the first screen, type `tusker.unl.edu` for Host Name, then click
**Open**.
{{< figure src="/images/3178523.png" height="450" >}}
2. On the second screen, click on **Yes**.
{{< figure src="/images/3178524.png" height="300" >}}
3. On the third screen, enter your HCC account **username**. If you do
not have an HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
or go to http://hcc.unl.edu/newusers.
{{% notice info %}}Replace `jingchao` with your username.{{% /notice %}}
{{< figure src="/images/8127261.png" height="450" >}}
4. On the next screen, enter your HCC account **password**.
{{% notice info %}}**Note that PuTTY will not show the characters as you type for security reasons.**{{% /notice %}}
{{< figure src="/images/8127262.png" height="450" >}}
5. After you input the correct
password, you will be asked to choose a Duo authentication
method.
6. If you have a Yubikey set up by HCC, please hold the Yubikey for \~1
second. Then you will be brought to your home directory, similar to
the screenshot below.
{{< figure src="/images/8127266.png" height="450" >}}
7. If you set up Duo via a smart phone, please type "1" in your
terminal and press "Enter". (Duo Push is the most cost-effective way
to authenticate with Duo; we recommend all users choose this option
where applicable.)
8. Check your smart phone for the Duo login request. Press "Approve" if
you can verify the request. If you find any Duo login request that
was not initiated by you, deny it and report the incident
immediately to {{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu).
{{< figure src="/images/8127263.png" height="450" >}}
9. After you approve the Duo login request, you will be brought to your
home directory, similar to the screenshot below.
{{< figure src="/images/8127264.png" height="450" >}}
File Transferring with HCC Supercomputers
-----------------------------------------
{{% notice info%}}
For best results when transferring data to and from the clusters, refer to [Handling Data]({{< ref "/guides/handling_data" >}})
{{%/notice%}}
### SCP
For Windows users, file transferring between your personal computer
and the HCC supercomputers can be achieved through the command `scp`.
Here we use **Tusker** as an example. **The following commands should be
executed from your computer.**
**Uploading from local to remote**
{{< highlight bash >}}
C:\> scp -r .\<folder name> <username>@tusker.unl.edu:/work/<group name>/<username>
{{< /highlight >}}
The above command line transfers a folder from the current directory
(`.\`) of your computer to the `$WORK` directory of the HCC
supercomputer, Tusker. Note that you need to replace `<group name>`
and `<username>` with your HCC group name and username.
**Downloading from remote to local**
{{< highlight bash >}}
C:\> scp -r <username>@tusker.unl.edu:/work/<group name>/<username>/<folder name> .\
{{< /highlight >}}
The above command line transfers a folder from the `$WORK` directory of
the HCC supercomputer, Tusker, to the current directory (`.\`) of your
computer.
### WinSCP
WinSCP: http://winscp.net/eng/download.php
It is often convenient to upload and download files between your personal computer
and the HCC supercomputers through a Graphical User Interface (GUI).
Download and install the third-party application **WinSCP**
to transfer files between your personal computer and the HCC supercomputers.
Below is a step-by-step installation guide. Here we use the HCC cluster **Tusker**
for demonstration. To use the **Crane** cluster, replace `tusker.unl.edu`
with `crane.unl.edu`.
1. On the first screen, type `tusker.unl.edu` for Host name, enter your
HCC account username and password for User name and Password. Then
click on **Login**.
{{< figure src="/images/3178530.png" height="450" >}}
2. On the second screen, click on **Yes**.
{{< figure src="/images/3178531.png" >}}
3. Choose option "1" and press "Enter". Or simply press your Yubikey if
you have one.
{{< figure src="/images/8127268.png" >}}
4. On the third screen, click on **Remote**. Under Remote, choose Go To
and Open Directory/Bookmark. Alternatively, you can use the keyboard
shortcut "Ctrl + O".
{{< figure src="/images/3178532.png" height="450" >}}
5. On the final screen, type `/work/<group name>/<username>` for Open
directory. Use your HCC group name and username to replace
`<group name>` and `<username>`. Then click on **OK**.
{{< figure src="/images/3178533.png" height="450" >}}
6. Now you can drag and drop files between your personal computer
and the HCC supercomputers.
{{< figure src="/images/3178539.png" height="450" >}}
Tutorial Video
--------------
{{< youtube -Vh7SyC-3mA >}}
+++
title = "Reusing SSH connections in Linux/Mac"
description = "Reusing connections makes it easier to use multiple terminals"
weight = "37"
+++
To make it more convenient for users who use multiple terminal sessions
simultaneously, SSH can reuse an existing connection if connecting from
Linux or Mac. After the initial login, subsequent terminals can use
that connection, eliminating the need to enter the username and password
each time for every connection. To enable this feature, add the
following lines to your `~/.ssh/config` file:
{{% panel header="`~/.ssh/config`"%}}
{{< highlight bash >}}
Host *
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p
    ControlPersist 2h
{{< /highlight >}}
{{% /panel %}}
{{% notice info%}}
You may not have an existing `~/.ssh/config`. If not, simply create the
file and set the permissions appropriately first:
`touch ~/.ssh/config && chmod 600 ~/.ssh/config`
{{% /notice %}}
This will enable connection reuse when connecting to any host via SSH or
SCP.
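For example, here is a sketch of the expected behavior (the prompt and host names are illustrative): after the first login authenticates normally, additional terminals reuse the existing connection without prompting again.
{{< highlight bash >}}
# First terminal: authenticates as usual (password and Duo)
[apple@localhost]$ ssh apple@crane.unl.edu
# Second terminal, opened while the first stays connected:
# reuses the master connection, so no password prompt appears
[apple@localhost]$ ssh apple@crane.unl.edu
{{< /highlight >}}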
+++
title = "Submitting Jobs"
description = "How to submit jobs to HCC resources"
weight = "10"
+++
Crane and Tusker are managed by
the [SLURM](https://slurm.schedmd.com) resource manager.
In order to run processing on Crane or Tusker, you
must create a SLURM script that will run your processing. After
submitting the job, SLURM will schedule your processing on an available
worker node.
Before writing a submit file, you may need to
[compile your application]({{< relref "/guides/running_applications/compiling_source_code" >}}).
- [Ensure proper working directory for job output](#ensure-proper-working-directory-for-job-output)
- [Creating a SLURM Submit File](#creating-a-slurm-submit-file)
- [Submitting the job](#submitting-the-job)
- [Checking Job Status](#checking-job-status)
- [Checking Job Start](#checking-job-start)
- [Next Steps](#next-steps)
### Ensure proper working directory for job output
{{% notice info %}}
Because the /home directories are not writable from the worker nodes, all SLURM job output should be directed to your /work path.
{{% /notice %}}
{{% panel theme="info" header="Manual specification of /work path" %}}
{{< highlight bash >}}
$ cd /work/[groupname]/[username]
{{< /highlight >}}
{{% /panel %}}
The environment variable `$WORK` can also be used.
{{% panel theme="info" header="Using environment variable for /work path" %}}
{{< highlight bash >}}
$ cd $WORK
$ pwd
/work/[groupname]/[username]
{{< /highlight >}}
{{% /panel %}}
Review how /work differs from /home [here.]({{< relref "/guides/handling_data/_index.md" >}})
### Creating a SLURM Submit File
{{% notice info %}}
The below example is for a serial job. For submitting MPI jobs, please
look at the [MPI Submission Guide.]({{< relref "submitting_an_mpi_job" >}})
{{% /notice %}}
A SLURM submit file is broken into 2 sections: the job description and
the processing. SLURM job description lines are prefixed with `#SBATCH` in
the submit file.
**SLURM Submit File**
{{< highlight batch >}}
#!/bin/sh
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=1024 # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=hello-world
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load example/test
hostname
sleep 60
{{< /highlight >}}
- **time**
Maximum walltime the job can run. After this time has expired, the
job will be stopped.
- **mem-per-cpu**
Memory that is allocated per core for the job. If you exceed this
memory limit, your job will be stopped.
- **mem**
Specify the real memory required per node in megabytes. If you
exceed this limit, your job will be stopped. Note that you
should ask for less memory than each node actually has. For
instance, Tusker has 1TB, 512GB and 256GB of RAM per node. You may
only request 1000GB of RAM for the 1TB node, 500GB of RAM for the
512GB nodes, and 250GB of RAM for the 256GB nodes. For Crane, the
max is 500GB.
- **job-name**
The name of the job. Will be reported in the job listing.
- **partition**
The partition the job should run in. Partitions determine the job's
priority and on what nodes the partition can run on. See the
[Partitions]({{< relref "/guides/submitting_jobs/partitions/_index.md" >}}) page for a list of possible partitions.
- **error**
Location where the job's stderr will be written. `[groupname]`
and `[username]` should be replaced with your group name and username.
Your username can be retrieved with the command `id -un` and your
group with `id -ng`.
- **output**
Location where the job's stdout will be written.
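For instance, to use some of the options described above that do not appear in the example file (the values here are illustrative; see the Partitions page for valid partition names), the description section could include:
{{< highlight batch >}}
#SBATCH --mem=4096           # Total memory required per node (in megabytes)
#SBATCH --partition=batch    # Partition to run the job in
{{< /highlight >}}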
More advanced submit commands can be found on the [SLURM Docs](https://slurm.schedmd.com/sbatch.html).
You can also find an example of an MPI submission on [Submitting an MPI Job]({{< relref "submitting_an_mpi_job" >}}).
### Submitting the job
Submitting the SLURM job is done with the command `sbatch`. SLURM will read
the submit file and schedule the job according to the description in
the submit file.
To submit the job described above:
{{% panel theme="info" header="SLURM Submission" %}}
{{< highlight batch >}}
$ sbatch example.slurm
Submitted batch job 24603
{{< /highlight >}}
{{% /panel %}}
The job was successfully submitted.
### Checking Job Status
Job status is found with the command `squeue`. It will provide
information such as:
- The State of the job:
- **R** - Running
- **PD** - Pending - Job is awaiting resource allocation.
- Additional codes are available
on the [squeue](http://slurm.schedmd.com/squeue.html)
page.
- Job Name
- Run Time
- Nodes running the job
Checking the status of the job is easiest by filtering by your username,
using the `-u` option to squeue.
{{< highlight batch >}}
$ squeue -u <username>
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
24605 batch hello-wo <username> R 0:56 1 b01
{{< /highlight >}}
Additionally, if you want to see the status of a specific partition, for
example if you are part of a [partition]({{< relref "/guides/submitting_jobs/partitions/_index.md" >}}),
you can use the `-p` option to `squeue`:
{{< highlight batch >}}
$ squeue -p esquared
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
73435 esquared MyRandom tingting R 10:35:20 1 ri19n10
73436 esquared MyRandom tingting R 10:35:20 1 ri19n12
73735 esquared SW2_driv hroehr R 10:14:11 1 ri20n07
73736 esquared SW2_driv hroehr R 10:14:11 1 ri20n07
{{< /highlight >}}
#### Checking Job Start
You may view the start time of your job with the
command `squeue --start`. The output of the command will show the
expected start time of the jobs.
{{< highlight batch >}}
$ squeue --start --user lypeng
JOBID PARTITION NAME USER ST START_TIME NODES NODELIST(REASON)
5822 batch Starace lypeng PD 2013-06-08T00:05:09 3 (Priority)
5823 batch Starace lypeng PD 2013-06-08T00:07:39 3 (Priority)
5824 batch Starace lypeng PD 2013-06-08T00:09:09 3 (Priority)
5825 batch Starace lypeng PD 2013-06-08T00:12:09 3 (Priority)
5826 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5827 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5828 batch Starace lypeng PD 2013-06-08T00:12:39 3 (Priority)
5829 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority)
5830 batch Starace lypeng PD 2013-06-08T00:13:09 3 (Priority)
5831 batch Starace lypeng PD 2013-06-08T00:14:09 3 (Priority)
5832 batch Starace lypeng PD N/A 3 (Priority)
{{< /highlight >}}
The output shows the expected start time of the jobs, as well as the
reason that the jobs are currently idle (in this case, low priority of
the user due to running numerous jobs already).
#### Removing the Job
Removing the job is done with the `scancel` command. The only argument
to the `scancel` command is the job id. For the job above, the command
is:
{{< highlight batch >}}
$ scancel 24605
{{< /highlight >}}
### Next Steps
{{% children %}}
---
title: "Redirector"
---
<script>
// Redirector for hcc-docs links
// Search for URL parameter 'q' and redirect to top match
var lunrIndex;
function getQueryVariable(variable) {
    var query = window.location.search.substring(1);
    var vars = query.split('&');
    for (var i = 0; i < vars.length; i++) {
        var pair = vars[i].split('=');
        if (pair[0] === variable) {
            return decodeURIComponent(pair[1].replace(/\+/g, '%20'));
        }
    }
}
// Initialize lunrjs using our generated index file
function initLunr() {
    // First retrieve the index file
    return $.getJSON(baseurl + "/index.json")
        .done(function(index) {
            pagesIndex = index;
            // Set up lunrjs by declaring the fields we use
            // Also provide their boost level for the ranking
            lunrIndex = new lunr.Index();
            lunrIndex.ref("uri");
            lunrIndex.field('title', { boost: 15 });
            lunrIndex.field('tags', { boost: 10 });
            lunrIndex.field("content", { boost: 5 });
            // Feed lunr with each file and let lunr actually index them
            pagesIndex.forEach(function(page) {
                lunrIndex.add(page);
            });
            lunrIndex.pipeline.remove(lunrIndex.stemmer);
        })
        .fail(function(jqxhr, textStatus, error) {
            var err = textStatus + ", " + error;
            console.error("Error getting Hugo index file:", err);
        });
}
function search(query) {
    // Find the item in our index corresponding to the lunr one to have more info
    return lunrIndex.search(query).map(function(result) {
        return pagesIndex.filter(function(page) {
            return page.uri === result.ref;
        })[0];
    });
}
initLunr().then(function() {
    var searchTerm = getQueryVariable('q');
    // Replace non-word chars with space. lunr doesn't like quotes.
    searchTerm = searchTerm.replace(/[\W_]+/g, " ");
    var results = search(searchTerm);
    if (!results.length) {
        window.location = baseurl;
    } else {
        window.location = results[0].uri;
    }
});
</script>
---
title: "2012"
summary: "Historical listing of various HCC events for the year 2012."
---
Historical listing of HCC Events
----------
{{ children('Events/2012') }}
---
title: "Nebraska Supercomputing Symposium '12"
summary: "Nebraska Supercomputing Symposium '12."
---
<img src="/images/2012-11-07.jpg" width="300" class="img-border">
<img src="/images/2012-11-07hdr.jpg" width="300" class="img-border">
<img src="/images/PIVOT_Logo.png" width="100" class="img-border">

Talks
-----

- [Intro slides](https://unl.box.com/s/4gctct2jgu2e78y7efld7w39xzbd1j4d)
......
---
title: "Supercomputing Mini Workshop 2012"
summary: "Supercomputing Mini Workshop 2012."
---
Presentation
============
......