+++
title = "Running OLAM at HCC"
description = "How to run the OLAM (Ocean Land Atmosphere Model) on HCC resources."
+++
### OLAM compilation
##### pgi/11 compilation with mpi and openmp enabled
1. Load modules:
{{< highlight bash >}}
module load compiler/pgi/11 openmpi/1.6 szip/2.1 zlib/1.2 NCL/6.1dist
{{< /highlight >}}
2. Edit the `include.mk` file.
{{% panel theme="info" header="include.mk" %}}
{{< highlight batch >}}
#----------------- LINUX Intel Fortran ifort/gcc ---------------
F_COMP=mpif90
# If the compiler supports (and the user wants to use)
# the module IEEE_ARITHMETIC, uncomment below
IEEE_ARITHMETIC=yes
# If using MPI libraries:
OLAM_MPI=yes
# If parallel hdf5 is supported, uncomment the next line
OLAM_PARALLEL_HDF5=yes
# If you use the ED2 model, uncomment the next line
#USE_ED2=yes
MPI_PATH=/util/opt/openmpi/1.6/pgi/11
PAR_INCS=-I$(MPI_PATH)/include:$(MPI_PATH)/lib
PAR_LIBS=-L$(MPI_PATH)/lib -lmpi
# OPTIMIZED:
F_OPTS=-O3 -traceback -mp
#F_OPTS=-xHost -O3 -fno-alias -ip -openmp -traceback
#F_OPTS=-g -O3 -xHost -traceback
# DEBUG:
#F_OPTS=-g -fp-model precise -check bounds -traceback \
# -debug extended -check uninit -ftrapuv
# FORTRAN FLAGS FOR BIG FILES WHICH WOULD HAVE EXCESSIVE COMPILATION TIME
#SLOW_FFLAGS=-O1 -g -no-ip -traceback
C_COMP=mpicc
#C_COMP=mpicc
C_OPTS=-DUNDERSCORE -DLITTLE
NCARG_DIR=/util/src/ncl_ncarg/ncl_ncarg-6.1.2/lib
LIBNCARG=-L$(NCARG_DIR) -lncarg -lncarg_gks -lncarg_c \
-L/usr/lib64 -lX11 -ldl -lpthread -lgfortran -lcairo
HDF5_LIBS=-L/util/opt/hdf5/1.8.13/openmpi/1.6/pgi/11/lib -lhdf5_fortran -lhdf5 -lz -lm
HDF5_INCS=-I/util/opt/hdf5/1.8.13/openmpi/1.6/pgi/11/include
NETCDF_LIBS=-L/util/opt/netcdf/4.2/pgi/11/lib -lnetcdf
NETCDF_INCS=-I/util/opt/netcdf/4.2/pgi/11/include
LOADER=$(F_COMP)
LOADER_OPTS=-mp
#LOADER_OPTS=-static-intel $(F_OPTS)
# For Apple OSX: the stack size needs to be increased at link time
# LOADER_OPTS=-static-intel $(F_OPTS) -Wl,-stack_size -Wl,0x10000000
# to allow ifort compiler to link with pg-compiled ncar graphics:
# LIBS=-z muldefs -L/opt/pgi/linux86-64/5.2/lib -lpgftnrtl -lpgc
## IMPORTANT: Need to specify this flag in ED2
#USE_HDF5=1
{{< /highlight >}}
{{% /panel %}}
3. Command: `make clean`
4. Command: `make -j 8`
##### intel/12 compilation with mpi and openmp enabled
1. Load modules:
{{< highlight bash >}}
module load compiler/intel/12 openmpi/1.6 szip/2.1 zlib/1.2
{{< /highlight >}}
2. Edit the `include.mk` file.
{{% panel theme="info" header="include.mk" %}}
{{< highlight batch >}}
#----------------- LINUX Intel Fortran ifort/gcc ---------------
F_COMP=mpif90
# If the compiler supports (and the user wants to use)
# the module IEEE_ARITHMETIC, uncomment below
IEEE_ARITHMETIC=yes
# If using MPI libraries:
OLAM_MPI=yes
# If parallel hdf5 is supported, uncomment the next line
OLAM_PARALLEL_HDF5=yes
# If you use the ED2 model, uncomment the next line
#USE_ED2=yes
MPI_PATH=/util/opt/openmpi/1.6/intel/12
PAR_INCS=-I$(MPI_PATH)/include:$(MPI_PATH)/lib
PAR_LIBS=-L$(MPI_PATH)/lib -lmpi
# OPTIMIZED:
F_OPTS=-O3 -traceback -openmp
#F_OPTS=-xHost -O3 -fno-alias -ip -openmp -traceback
#F_OPTS=-g -O3 -xHost -traceback
# DEBUG:
#F_OPTS=-g -fp-model precise -check bounds -traceback \
# -debug extended -check uninit -ftrapuv
# FORTRAN FLAGS FOR BIG FILES WHICH WOULD HAVE EXCESSIVE COMPILATION TIME
#SLOW_FFLAGS=-O1 -g -no-ip -traceback
C_COMP=mpicc
#C_COMP=mpicc
C_OPTS=-DUNDERSCORE -DLITTLE
NCARG_DIR=/util/src/ncl_ncarg/ncl_ncarg-6.1.2/lib
LIBNCARG=-L$(NCARG_DIR) -lncarg -lncarg_gks -lncarg_c \
-L/usr/lib64 -lX11 -ldl -lpthread -lgfortran -lcairo
HDF5_LIBS=-L/util/opt/hdf5/1.8.13/openmpi/1.6/intel/12/lib -lhdf5_fortran -lhdf5 -lz -lm
HDF5_INCS=-I/util/opt/hdf5/1.8.13/openmpi/1.6/intel/12/include
NETCDF_LIBS=-L/util/opt/netcdf/4.2/intel/12/lib -lnetcdf
NETCDF_INCS=-I/util/opt/netcdf/4.2/intel/12/include
LOADER=$(F_COMP)
LOADER_OPTS=-openmp
#LOADER_OPTS=-static-intel $(F_OPTS)
# For Apple OSX: the stack size needs to be increased at link time
# LOADER_OPTS=-static-intel $(F_OPTS) -Wl,-stack_size -Wl,0x10000000
# to allow ifort compiler to link with pg-compiled ncar graphics:
# LIBS=-z muldefs -L/opt/pgi/linux86-64/5.2/lib -lpgftnrtl -lpgc
## IMPORTANT: Need to specify this flag in ED2
#USE_HDF5=1
{{< /highlight >}}
{{% /panel %}}
3. Command: `make clean`
4. Command: `make -j 8`
### OLAM compilation on Crane
##### Intel/15 compiler with OpenMPI/1.10
1. Load modules:
{{< highlight bash >}}
module load compiler/intel/15 openmpi/1.10 NCL/6.1 netcdf/4.4 phdf5/1.8 szip/2.1 zlib/1.2
{{< /highlight >}}
2. Edit the `include.mk` file:
{{% panel theme="info" header="include.mk" %}}
{{< highlight batch >}}
#----------------- LINUX Intel Fortran ifort/gcc ---------------
F_COMP=/util/opt/hdf5/1.8/openmpi/1.10/intel/15/bin/h5pfc
# If the compiler supports (and the user wants to use)
# the module IEEE_ARITHMETIC, uncomment below
IEEE_ARITHMETIC=yes
# If using MPI libraries:
OLAM_MPI=yes
# If parallel hdf5 is supported, uncomment the next line
OLAM_PARALLEL_HDF5=yes
# If you use the ED2 model, uncomment the next line
#USE_ED2=yes
#MPI_PATH=/usr/local/mpich
PAR_INCS=-I/util/opt/openmpi/1.10/intel/15/include
PAR_LIBS=-L/util/opt/openmpi/1.10/intel/15/lib
# OPTIMIZED:
F_OPTS=-xHost -O3 -fno-alias -ip -openmp -traceback
#F_OPTS=-g -O3 -xHost -traceback
# DEBUG:
#F_OPTS=-g -fp-model precise -check bounds -traceback \
# -debug extended -check uninit -ftrapuv
# EXTRA OPTIONS FOR FIXED-SOURCE CODE
FIXED_SRC_FLAGS=-fixed -132
# FORTRAN FLAGS FOR BIG FILES WHICH WOULD HAVE EXCESSIVE COMPILATION TIME
SLOW_FFLAGS=-O1 -g -no-ip -traceback
#C_COMP=icc
C_COMP=mpicc
C_OPTS=-O3 -DUNDERSCORE -DLITTLE
NCARG_DIR=/util/opt/NCL/6.1/lib
LIBNCARG=-L$(NCARG_DIR) -lncarg -lncarg_gks -lncarg_c \
-L/usr/lib64 -lX11 -ldl -lpng -lpthread -lgfortran -lcairo
HDF5_LIBS=-L/util/opt/hdf5/1.8/openmpi/1.10/intel/15/lib
HDF5_INCS=-I/util/opt/hdf5/1.8/openmpi/1.10/intel/15/include
NETCDF_LIBS=-L/util/opt/netcdf/4.4/intel/15/lib -lnetcdf
NETCDF_INCS=-I/util/opt/netcdf/4.4/intel/15/include
LOADER=$(F_COMP)
LOADER_OPTS=-static-intel $(F_OPTS)
# For Apple OSX: the stack size needs to be increased at link time
# LOADER_OPTS=-static-intel $(F_OPTS) -Wl,-stack_size -Wl,0x10000000
# to allow ifort compiler to link with pg-compiled ncar graphics:
# LIBS=-z muldefs -L/opt/pgi/linux86-64/5.2/lib -lpgftnrtl -lpgc
## IMPORTANT: Need to specify this flag in ED2
USE_HDF5=1
{{< /highlight >}}
{{% /panel %}}
3. Command: `make clean`
4. Command: `make -j 8`
### Sample SLURM submit scripts
##### PGI compiler:
{{% panel theme="info" header="Sample submit script for PGI compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load compiler/pgi/11 openmpi/1.6 szip/2.1 zlib/1.2
mpirun /path/to/olam-4.2c-mpi
{{< /highlight >}}
{{% /panel %}}
##### Intel compiler:
{{% panel theme="info" header="Sample submit script for Intel compiler" %}}
{{< highlight batch >}}
#!/bin/sh
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load compiler/intel/12 openmpi/1.6 szip/2.1 zlib/1.2
mpirun /path/to/olam-4.2c-mpi
{{< /highlight >}}
{{% /panel %}}
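##### Intel/15 compiler on Crane:
The script below is a minimal sketch for the Crane build above, assuming the same module set used at compile time and a hypothetical binary path; adjust the task count, memory, and run time for your own case.
{{% panel theme="info" header="Sample submit script for the Crane Intel/15 build (sketch)" %}}
{{< highlight batch >}}
#!/bin/sh
#SBATCH --ntasks=8 # 8 cores
#SBATCH --mem-per-cpu=1024 # Minimum memory required per CPU (in megabytes)
#SBATCH --time=03:15:00 # Run time in hh:mm:ss
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
module load compiler/intel/15 openmpi/1.10 NCL/6.1 netcdf/4.4 phdf5/1.8 szip/2.1 zlib/1.2
mpirun /path/to/olam-4.2c-mpi
{{< /highlight >}}
{{% /panel %}}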
+++
title = "Running Theano"
description = "How to run the Theano on HCC resources."
+++
Theano is available on HCC resources via the modules system. Both CPU and GPU
versions are available on Crane. Additionally, installs for Python
2.7, 3.5, and 3.6 are provided.
### Initial Setup
Theano attempts to write to a `~/.theano` directory in some
circumstances, which can cause errors as the `/home` filesystem is
read-only on HCC machines. As a workaround, create the directory on
`/work` and make a symlink from `/home`:
{{% panel theme="info" header="Create & symlink .theano directory" %}}
{{< highlight bash >}}
mkdir -p $WORK/.theano
ln -s $WORK/.theano $HOME/.theano
{{< /highlight >}}
{{% /panel %}}
This only needs to be done once on each HCC machine.
### Running the CPU version
To use the CPU version, simply load the module and run your Python code.
You can choose between the Python 2.7, 3.5 or 3.6 environments:
{{% panel theme="info" header="Python 2.7 version" %}}
{{< highlight bash >}}
module load theano/py27/1.0
python my_python2_script.py
{{< /highlight >}}
{{% /panel %}}
or
{{% panel theme="info" header="Python 3.5 version" %}}
{{< highlight bash >}}
module load theano/py35/1.0
python my_python3_script.py
{{< /highlight >}}
{{% /panel %}}
or
{{% panel theme="info" header="Python 3.6 version" %}}
{{< highlight bash >}}
module load theano/py36/1.0
python my_python3_script.py
{{< /highlight >}}
{{% /panel %}}
### Running the GPU version
To use the GPU version, first create a `~/.theanorc` file with the
following contents (or append to an existing file as needed):
{{% panel theme="info" header="~/.theanorc" %}}
{{< highlight batch >}}
[global]
device = cuda
{{< /highlight >}}
{{% /panel %}}
Next, load the theano module:
{{% panel theme="info" header="Load the theano module" %}}
{{< highlight bash >}}
module load theano/py27/0.9
{{< /highlight >}}
{{% /panel %}}
To test the GPU support, start an interactive job on a GPU node and
import the theano module within the Python interpreter. You should see
output similar to the following:
{{% panel theme="info" header="GPU support test" %}}
{{< highlight python >}}
Python 2.7.15 | packaged by conda-forge | (default, May 8 2018, 14:46:53)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import theano
Using cuDNN version 7005 on context None
Mapped name None to device cuda: Tesla K20m (0000:03:00.0)
{{< /highlight >}}
{{% /panel %}}
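For reference, the interactive GPU session used for this test can be requested with `srun`. The command below is a minimal sketch assuming Crane's `gpu` partition and a one-hour session; adjust the partition, memory, and time limits to match your allocation.
{{% panel theme="info" header="Request an interactive GPU job (sketch)" %}}
{{< highlight bash >}}
# Request one GPU on a GPU node and start an interactive shell
srun --partition=gpu --gres=gpu --mem=4096 --time=01:00:00 --pty bash
# On the GPU node, load the module and start Python
module load theano/py27/0.9
python
{{< /highlight >}}
{{% /panel %}}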
+++
title = "Using Preinstalled Software"
description = "How to use the module utility on HCC resources."
weight=10
+++
HCC offers many popular software packages already installed. Unlike a traditional
laptop or desktop, HCC resources use a module system for managing installed software. Users can load and
use pre-installed software by using the `module` command.
To request additional software installs, please complete a [software installation request](https://hcc.unl.edu/software-installation-request).
The `module` command lets each user compile and link their source code
against any library available on the cluster by modifying their
`PATH` and `LD_LIBRARY_PATH` environment variables.
{{% notice info %}}
Please note that if you compile your application using a particular
module, you must include the appropriate module load statement in your
submit script.
{{% /notice %}}
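For example, a minimal sketch of a submit script that reloads the module used at compile time (here `compiler/pgi/11`, with a hypothetical application path):
{{% panel theme="info" header="Sketch: reloading a module in a submit script" %}}
{{< highlight bash >}}
#!/bin/sh
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
# Reload the same module that was loaded when the application was compiled
module load compiler/pgi/11
/path/to/my_application
{{< /highlight >}}
{{% /panel %}}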
### List Modules Loaded
{{% panel theme="info" header="Example Usage: module list" %}}
{{< highlight bash >}}
module list
No Modulefiles Currently Loaded.
echo $PATH
/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
{{< /highlight >}}
{{% /panel %}}
### List Modules Available
{{% panel theme="info" header="Example Usage: Listing Available Modules" %}}
{{< highlight bash >}}
module avail
---------------------------------------------- /util/opt/Modules/modulefiles ----------------------------------------------
NCL/6.0 bowtie/2.0.0-beta6 compiler/pgi/12 hdfeos5/1.14 mplus/7.0 szip/2.1
NCL/6.0dist compiler/gcc/4.6 cufflinks/2.0.2 hugeseq/1.0 netcdf/4.1 tophat/2.0.5
NCO/4.1 compiler/gcc/4.7 deprecated intel-mkl/11 netcdf/4.2 udunits/2.1
R/2.15 compiler/intel/11 hdf4/4.2 intel-mkl/12 openmpi/1.5 zlib/1.2
WRF/WRF compiler/intel/12 hdf5/1.8 lsdyna/5.1.1 openmpi/1.6
acml/5.1 compiler/open64/4.5 hdf5/1.8.6 lsdyna/6.0.0 samtools/0.1
bowtie/0.12.8 compiler/pgi/11 hdfeos2/2.18 mplus/6.12 sas/9.3
{{< /highlight >}}
{{% /panel %}}
#### module load \<module-name\>
Places the binaries and libraries for \<module-name\> into your `PATH` and `LD_LIBRARY_PATH`.
{{% panel theme="info" header="Example Usage: Loading Desired Module" %}}
{{< highlight bash >}}
module load compiler/pgi/11
module list
Currently Loaded Modulefiles:
1) compiler/pgi/11
echo $PATH
/util/comp/pgi/linux86-64/11/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
{{< /highlight >}}
{{% /panel %}}
#### module unload \<module-name\>
Removes the binaries and libraries associated with \<module-name\> from your `PATH` and `LD_LIBRARY_PATH`.
{{% panel theme="info" header="Example Usage: module unload" %}}
{{< highlight bash >}}
module unload compiler/pgi/11
module list
No Modulefiles Currently Loaded.
echo $PATH
/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
{{< /highlight >}}
{{% /panel %}}
#### module purge
**Purges** all previously **loaded** module libraries and binaries from
your `PATH` and `LD_LIBRARY_PATH`.
{{% panel theme="info" header="Example Usage: module purge" %}}
{{< highlight bash >}}
module load compiler/open64
module load zlib/1.2
module list
Currently Loaded Modulefiles:
1) zlib/1.2 2) compiler/open64/4.5
module purge
module list
No Modulefiles Currently Loaded.
{{< /highlight >}}
{{% /panel %}}
#### module help
Displays a complete list of `module` commands and options.
{{% panel theme="info" header="Example Usage: module help" %}}
{{< highlight bash >}}
Usage: module [options] sub-command [args ...]
Options:
-h -? -H --help This help message
-s availStyle --style=availStyle Site controlled avail style: system en_grouped (default: en_grouped)
--regression_testing Lmod regression testing
-D Program tracing written to stderr
--debug=dbglvl Program tracing written to stderr
--pin_versions=pinVersions When doing a restore use specified version, do not follow defaults
-d --default List default modules only when used with avail
-q --quiet Do not print out warnings
--expert Expert mode
-t --terse Write out in machine readable format for commands: list, avail, spider, savelist
--initial_load loading Lmod for first time in a user shell
--latest Load latest (ignore default)
--ignore_cache Treat the cache file(s) as out-of-date
--novice Turn off expert and quiet flag
--raw Print modulefile in raw output when used with show
-w twidth --width=twidth Use this as max term width
-v --version Print version info and quit
-r --regexp use regular expression match
--gitversion Dump git version in a machine readable way and quit
--dumpversion Dump version in a machine readable way and quit
--check_syntax --checkSyntax Checking module command syntax: do not load
--config Report Lmod Configuration
--config_json Report Lmod Configuration in json format
--mt Report Module Table State
--timer report run times
--force force removal of a sticky module or save an empty collection
--redirect Send the output of list, avail, spider to stdout (not stderr)
--no_redirect Force output of list, avail and spider to stderr
--show_hidden Avail and spider will report hidden modules
--spider_timeout=timeout a timeout for spider
-T --trace
module [options] sub-command [args ...]
Help sub-commands:
------------------
help prints this message
help module [...] print help message from module(s)
Loading/Unloading sub-commands:
-------------------------------
load | add module [...] load module(s)
try-load | try-add module [...] Add module(s), do not complain if not found
del | unload module [...] Remove module(s), do not complain if not found
swap | sw | switch m1 m2 unload m1 and load m2
purge unload all modules
refresh reload aliases from current list of modules.
update reload all currently loaded modules.
Listing / Searching sub-commands:
---------------------------------
list List loaded modules
list s1 s2 ... List loaded modules that match the pattern
avail | av List available modules
avail | av string List available modules that contain "string".
spider List all possible modules
spider module List all possible version of that module file
spider string List all module that contain the "string".
spider name/version Detailed information about that version of the module.
whatis module Print whatis information about module
keyword | key string Search all name and whatis that contain "string".
Searching with Lmod:
--------------------
All searching (spider, list, avail, keyword) support regular expressions:
spider -r '^p' Finds all the modules that start with `p' or `P'
spider -r mpi Finds all modules that have "mpi" in their name.
spider -r 'mpi$' Finds all modules that end with "mpi" in their name.
Handling a collection of modules:
--------------------------------
save | s Save the current list of modules to a user defined "default" collection.
save | s name Save the current list of modules to "name" collection.
reset The same as "restore system"
restore | r Restore modules from the user's "default" or system default.
restore | r name Restore modules from "name" collection.
restore system Restore module state to system defaults.
savelist List of saved collections.
describe | mcc name Describe the contents of a module collection.
Deprecated commands:
--------------------
getdefault [name] load name collection of modules or user's "default" if no name given.
===> Use "restore" instead <====
setdefault [name] Save current list of modules to name if given, otherwise save as the default list for you the user.
===> Use "save" instead. <====
Miscellaneous sub-commands:
---------------------------
show modulefile show the commands in the module file.
use [-a] path Prepend or Append path to MODULEPATH.
unuse path remove path from MODULEPATH.
tablelist output list of active modules as a lua table.
Important Environment Variables:
--------------------------------
LMOD_COLORIZE If defined to be "YES" then Lmod prints properties and warning in color.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Lmod Web Sites
Documentation: http://lmod.readthedocs.org
Github: https://github.com/TACC/Lmod
Sourceforge: https://lmod.sf.net
TACC Homepage: https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
To report a bug please read http://lmod.readthedocs.io/en/latest/075_bug_reporting.html
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Modules based on Lua: Version 7.4.16 2017-05-23 11:10 -05:00
by Robert McLay mclay@tacc.utexas.edu
{{< /highlight >}}
{{% /panel %}}
+++
title = "Available Software for Crane"
description = "List of available software for crane.unl.edu."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
+++
{{% notice tip %}}
HCC provides some software packages via the Singularity container
software. If you do not see a desired package in the module list below,
please check the [Using Singularity]({{< relref "using_singularity" >}})
page for the software list there.
{{% /notice %}}
{{% panel theme="warning" header="Module prerequisites" %}}
If a module lists one or more prerequisites, the prerequisite module(s)
must be loaded before, or along with, that module.
For example, the `cdo/2.1` module requires `compiler/pgi/13`. To load
the cdo module, either
`module load compiler/pgi/13`
`module load cdo/2.1`
or
`module load compiler/pgi/13 cdo/2.1` (note that the prerequisite module
**must** come first)
is acceptable.
{{% /panel %}}
{{% panel theme="info" header="Multiple versions" %}}
Some packages list multiple compilers for prerequisites. This means that
the package has been built with each version of the compilers listed.
{{% /panel %}}
{{% panel theme="warning" header="Custom GPU Anaconda Environment" %}}
If you are using a custom GPU Anaconda environment, the only module you need to load is `anaconda`:
`module load anaconda`
{{% /panel %}}
{{< table url="http://crane-head.unl.edu:8192/lmod/spider/json" >}}
+++
title = "Available Software for Rhino"
description = "List of available software for rhino.unl.edu."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
+++
{{% notice tip %}}
HCC provides some software packages via the Singularity container
software. If you do not see a desired package in the module list below,
please check the [Using Singularity]({{< relref "using_singularity" >}})
page for the software list there.
{{% /notice %}}
{{% panel theme="warning" header="Module prerequisites" %}}
If a module lists one or more prerequisites, the prerequisite module(s)
must be loaded before, or along with, that module.
For example, the `cdo/2.1` module requires `compiler/pgi/13`. To load
the cdo module, either
`module load compiler/pgi/13`
`module load cdo/2.1`
or
`module load compiler/pgi/13 cdo/2.1` (note that the prerequisite module
**must** come first)
is acceptable.
{{% /panel %}}
{{% panel theme="info" header="Multiple versions" %}}
Some packages list multiple compilers for prerequisites. This means that
the package has been built with each version of the compilers listed.
{{% /panel %}}
{{% panel theme="warning" header="Custom GPU Anaconda Environment" %}}
If you are using a custom GPU Anaconda environment, the only module you need to load is `anaconda`:
`module load anaconda`
{{% /panel %}}
{{< table url="http://rhino-head.unl.edu:8192/lmod/spider/json" >}}
+++
title = "Reusing SSH connections in Linux/Mac"
description = "Reusing connections makes it easier to use multiple terminals"
weight = "37"
+++
To make it more convenient for users who use multiple terminal sessions
simultaneously, SSH can reuse an existing connection if connecting from
Linux or Mac. After the initial login, subsequent terminals can use
that connection, eliminating the need to enter the username and password
each time for every connection. To enable this feature, add the
following lines to your `~/.ssh/config` file:
{{% panel header="`~/.ssh/config`"%}}
{{< highlight bash >}}
Host *
ControlMaster auto
ControlPath /tmp/%r@%h:%p
ControlPersist 2h
{{< /highlight >}}
{{% /panel %}}
{{% notice info%}}
You may not have an existing `~/.ssh/config` file. If not, simply create the
file and set the permissions appropriately first:
`touch ~/.ssh/config && chmod 600 ~/.ssh/config`
{{% /notice %}}
This will enable connection reuse when connecting to any host via SSH or
SCP.
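To verify that connection sharing is active for a given host, you can query the control master; a quick sketch (replace the username and hostname with your own):
{{% panel header="Check for an active master connection"%}}
{{< highlight bash >}}
# Prints "Master running" with the master's PID if the connection is shared
ssh -O check <username>@crane.unl.edu
{{< /highlight >}}
{{% /panel %}}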
---
title: "Facilities of the Holland Computing Center"
---
This document details the equipment resident in the Holland Computing Center (HCC) as of October 2019.
HCC has two primary locations directly interconnected by a 100 Gbps primary link with a 10 Gbps backup. The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. Dell S4248FB-ON edge switches and Z9264F-ON core switches provide high WAN bandwidth and Software Defined Networking (SDN) capability for both locations. The Schorr and PKI machine rooms both have 100 Gbps paths to the University of Nebraska, Internet2, and ESnet as well as backup 10 Gbps paths. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
HCC's resources at UNL include two distinct offerings: Rhino and Red. Rhino is a linux cluster dedicated to general campus usage with 7,040 compute cores interconnected by low-latency Mellanox QDR InfiniBand networking. 360 TB of BeeGFS storage is complemented by 50 TB of NFS storage and 1.5 TB of local scratch per node. Each compute node is a Dell R815 server with at least 192 GB RAM and 4 Opteron 6272 / 6376 (2.1 / 2.3 GHz) processors.
The largest machine on the Lincoln campus is Red, with 14,160 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 11 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
Crane debuted at 474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and a 100 Gbps Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 43 GPU nodes with 110 NVIDIA GPUs in total, enabling state-of-the-art research from drug discovery to deep learning.
Anvil is an OpenStack cloud environment consisting of 1,520 cores and 400TB of CEPH storage all connected by 10 Gbps networking. The Anvil cloud exists to address the needs of NU researchers that cannot be served by traditional scheduler-based HPC environments, such as GUI applications, Windows-based software, test environments, and persistent services. In addition, a project to expand Ceph storage by 1.1 PB is in progress.
Attic and Silo form a near line archive with 1.0 PB of usable storage. Attic is located at PKI in Omaha, while Silo acts as an online backup located in Lincoln. Both Attic and Silo are connected with 10 Gbps network connections.
In addition to the cluster specific Lustre storage, a shared common storage space exists between all HCC resources with 1.9PB capacity.
These resources are detailed further below.
# 1. HCC at UNL Resources
## 1.1 Rhino
* 107 4-socket Opteron 6272 / 6376 (16-core, 2.1 / 2.3 GHz) with 192 or 256 GB RAM
* 2x with 512 GB RAM, 2x with 1024 GB RAM
* Mellanox QDR InfiniBand
* 1 and 10 GbE networking
* 5x Dell N3048 switches
* 50TB shared storage (NFS) -> /home
* 360TB BeeGFS storage over Infiniband -> /work
* 1.5TB local scratch
## 1.2 Red
* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
* 46 2-socket Xeon Gold 6126 (2.6GHz) (48 slots per node)
* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
* 16 2-socket Xeon E5-2640 v3 (2.6GHz) (32 slots per node)
* 40 2-socket Xeon E5-2650 v3 (2.3GHz) (40 slots per node)
* 28 2-socket Xeon E5-2650 v2 (2.6GHz) (32 slots per node)
* 48 2-socket Xeon E5-2660 v2 (2.2GHz) (32 slots per node)
* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
* 24 2-socket Xeon E5520 (2.27GHz) (16 slots per node)
* 1 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
* 40 2-socket Opteron 6128 (2.0GHz) (32 slots per node)
* 40 4-socket Opteron 6272 (2.1GHz) (64 slots per node)
* 11 PB HDFS storage
* Mix of 1, 10, and 40 GbE networking
* 1x Dell S6000-ON switch
* 3x Dell S4048-ON switch
* 5x Dell S3048-ON switches
* 2x Dell S4810 switches
* 2x Dell N3048 switches
## 1.3 Silo (backup mirror for Attic)
* 1 Mercury RM216 2U Rackmount Server 2 Xeon E5-2630 (12-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
# 2. HCC at PKI Resources
## 2.1 Crane
* 452 Relion 2840e systems from Penguin
* 452x with 64 GB RAM
* 2-socket Intel Xeon E5-2670 (8-core, 2.6GHz)
* Intel QDR InfiniBand
* 96 nodes from multiple vendors
* 59x with 256 GB RAM
* 37x with 512 GB RAM
* 2-socket Intel Xeon E5-2697 v4 (18-core, 2.3GHz)
* Intel Omni-Path
* 1 and 10 GbE networking
* 4x 10 GbE switch
* 14x 1 GbE switches
* 1500 TB Lustre storage over InfiniBand
* 3 Supermicro SYS-6016GT systems
* 48 GB RAM
* 2-socket Intel Xeon E5620 (4-core, 2.4GHz)
* 2 Nvidia M2070 GPUs
* 3 Supermicro SYS-1027GR-TSF systems
* 128 GB RAM
* 2-socket Intel Xeon E5-2630 (6-core, 2.3GHz)
* 3 Nvidia K20M GPUs
* 1 Supermicro SYS-5017GR-TF system
* 32 GB RAM
* 1-socket Intel Xeon E5-2650 v2 (8-core, 2.6GHz)
* 2 Nvidia K40C GPUs
* 5 Supermicro SYS-2027GR-TRF systems
* 64 GB RAM
* 2-socket Intel Xeon E5-2650 v2 (8-core, 2.6GHz)
* 4 Nvidia K40M GPUs
* 2 Supermicro SYS-5018GR-T systems
* 64 GB RAM
* 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz)
* 2 Nvidia P100 GPUs
* 4 Lenovo SR630 systems
* 1.5 TB RAM
* 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
* 3.84TB NVME Solid State Drive
* Intel Omni-Path
* 21 Supermicro SYS-1029GP-TR systems
* 192 GB RAM
* 2-socket Intel Xeon Gold 6248 (20-core, 2.5GHz)
* 2 Nvidia V100 GPUs
* Intel Omni-Path
## 2.2 Attic
* 1 Mercury RM216 2U Rackmount Server 2-socket Xeon E5-2630 (6-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
## 2.3 Anvil
* 76 PowerEdge R630 systems
* 76x with 256 GB RAM
* 2-socket Intel Xeon E5-2650 v3 (10-core, 2.3GHz)
* Dual 10Gb Ethernet
* 12 PowerEdge R730xd systems
* 12x with 128 GB RAM
* 2-socket Intel Xeon E5-2630L v3 (8-core, 1.8GHz)
* 12x 4TB NL SAS Hard Disks and 2x200 GB SSD
* Dual 10 Gb Ethernet
* 2 PowerEdge R320 systems
* 2x with 48 GB RAM
* 1-socket Intel E5-2403 v3 (4-core, 1.8GHz)
* Quad 10Gb Ethernet
* 10 GbE networking
* 6x Dell S4048-ON switches
## 2.4 Shared Common Storage
* Storage service providing 1.9PB usable capacity
* 6 SuperMicro 1028U-TNRTP+ systems
* 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
* 256 GB RAM
* 120x 4TB SAS Hard Disks
* 2 SuperMicro 1028U-TNRTP+ systems
* 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
* 128 GB RAM
* 6x 200 GB SSD
* Intel Omni-Path
* 10 GbE networking
+++
title = "Data Storage"
description = "How to work with and transfer data to/from HCC resources."
weight = "30"
+++
{{% panel theme="danger" header="**Sensitive and Protected Data**" %}}HCC currently has *no storage* that is suitable for **HIPAA** or other **PID** data sets. Users are not permitted to store such data on HCC machines.{{% /panel %}}
All HCC machines have three separate areas for every user to store data,
each intended for a different purpose. In addition, we have a transfer
service that utilizes [Globus Connect]({{< relref "../data_transfer/globus_connect" >}}).
{{< figure src="/images/35325560.png" >}}
---
### Home Directory
{{% notice info %}}
You can access your home directory quickly using the $HOME environmental
variable (i.e. `cd $HOME`).
{{% /notice %}}
Your home directory (i.e. `/home/[group]/[username]`) is meant for items
that take up relatively small amounts of space. For example: source
code, program binaries, configuration files, etc. This space is
quota-limited to **20GB per user**. The home directories are backed up
for the purposes of best-effort disaster recovery. This space is not
intended as an area for I/O to active jobs. **/home** is mounted
**read-only** on cluster worker nodes to enforce this policy.
---
### Common Directory
{{% notice info %}}
You can access your common directory quickly using the $COMMON
environmental variable (i.e. `cd $COMMON`).
{{% /notice %}}
The common directory operates similarly to work and is mounted with
**read and write capability to worker nodes on all HCC clusters**. This
means that any files stored in common can be accessed from Crane and Rhino, making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared
data files.
{{% notice warning %}}
Common is not designed for heavy I/O usage. Please continue to use your
work directory for active job output to ensure the best performance of
your jobs.
{{% /notice %}}
Quotas for common are **30 TB per group**, with larger quotas available
for purchase if needed. However, files stored here will **not be backed
up** and are **not subject to purge** at this time. Please continue to
back up your files to prevent irreparable data loss.
Additional information on using the common directories can be found in
the documentation on [Using the /common File System]({{< relref "using_the_common_file_system" >}})
---
### High Performance Work Directory
{{% notice info %}}
You can access your work directory quickly using the $WORK environmental
variable (i.e. `cd $WORK`).
{{% /notice %}}
{{% panel theme="danger" header="**File Loss**" %}}The `/work` directories are **not backed up**. Irreparable data loss is possible with a mis-typed command. See [Preventing File Loss]({{< relref "preventing_file_loss" >}}) for strategies to avoid this.{{% /panel %}}
Every user has a corresponding directory under /work using the same
naming convention as `/home` (i.e. `/work/[group]/[username]`). We
encourage all users to use this space for I/O to running jobs. This
directory can also be used when larger amounts of space are temporarily
needed. There is a **50TB per group quota**; space in /work is shared
among all users. It should be treated as short-term scratch space, and
**is not backed up**. **Please use the `hcc-du` command to check your
own and your group's usage, and back up and clean up your files at
reasonable intervals in $WORK.**
---
### Purge Policy
HCC has a **purge policy on /work** for files that become dormant.
After **6 months of inactivity on a file (26 weeks)**, an automated
purge process will reclaim the used space of these dormant files. HCC
provides the **`hcc-purge`** utility to list both the summary and the
actual file paths of files that have been dormant for **24 weeks**.
This list is periodically generated; the timestamp of the last search
is included in the default summary output when calling `hcc-purge` with
no arguments. No output from `hcc-purge` indicates the last scan did
not find any dormant files. `hcc-purge -l` will use the less pager to
list the matching files for the user. The candidate list can also be
accessed at the following path: `/lustre/purge/current/${USER}.list`.
This list is updated twice a week, on Mondays and Thursdays.
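For example, usage and purge candidates can be checked from a login node; the commands below are a minimal sketch using the utilities described above with no extra arguments:
{{% panel theme="info" header="Check usage and purge candidates" %}}
{{< highlight bash >}}
hcc-du        # report your own and your group's usage
hcc-purge     # summary of dormant files found by the last scan
hcc-purge -l  # page through the full list of candidate files
{{< /highlight >}}
{{% /panel %}}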
{{% notice warning %}}
`/work` is intended for recent job output and not long term storage. Evidence of circumventing the purge policy by users will result in consequences including account lockout.
{{% /notice %}}
If you have space requirements outside what is currently provided,
please
email <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> and
we will gladly discuss alternatives.
---
### [Attic]({{< relref "using_attic" >}})
Attic is a near line archive available for purchase at HCC. Attic
provides reliable large data storage that is designed to be more
reliable than `/work`, and larger than `/home`. Access to Attic is done
through [Globus Connect]({{< relref "../data_transfer/globus_connect" >}}).
More details on Attic can be found on HCC's
<a href="https://hcc.unl.edu/attic" class="external-link">Attic</a>
website.
---
### [Globus Connect]({{< relref "../data_transfer/globus_connect" >}})
For moving large amounts of data into or out of HCC resources, users are
highly encouraged to consider using [Globus
Connect]({{< relref "../data_transfer/globus_connect" >}}).
---
### Using Box
You can use your [UNL
Box.com]({{< relref "integrating_box_with_hcc" >}}) account to download and
upload files from any of the HCC clusters.
+++
title = "Data for UNMC Users Only"
description= "Data storage options for UNMC users"
weight = 60
+++
{{% panel theme="danger" header="Sensitive and Protected Data" %}} HCC currently has no storage that is suitable for HIPAA or other PID
data sets. Users are not permitted to store such data on HCC machines.
Crane has a special directory for UNMC users only. Please
note that this filesystem is still not suitable for HIPAA or other PID
data sets.
{{% /panel %}}
---
### Transferring files to this machine from UNMC.
You will need to email us
at <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> to
gain access to this machine. Once you do, you can sftp to 10.14.250.1
and upload your files. Note that sftp is your only option. You may use
different sftp utilities depending on the platform you are logging in
from. Email us if you need help with this. Once you are logged in, you
should be at `/volumes/UNMC1ZFS/[group]/[username]`, or
`/home/[group]/[username]`. Both are the same location and you will be
allowed to write files there.
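For example, a minimal sketch of an upload session using command-line `sftp` (the file name is hypothetical; replace the username and group with your own):
{{% panel theme="info" header="Upload a file via sftp" %}}
{{< highlight bash >}}
sftp <username>@10.14.250.1
sftp> cd /volumes/UNMC1ZFS/<group>/<username>
sftp> put mydata.tar.gz
sftp> quit
{{< /highlight >}}
{{% /panel %}}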
For Windows, learn more about logging in and uploading files
[here](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users).
### Using your uploaded files on Crane
Using your uploaded files is easy. Just go to
`/shared/unmc1/[group]/[username]` and your files will be in the same
place. You may notice that the directory is not visible at times. This
is because the unmc1 directory is automounted: it is mounted on demand
as soon as you access it. Simply `cd` to
`/shared/unmc1/[group]/[username]` and all of the files will be
there.
If you have space requirements outside what is currently provided,
please
email <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> and
we will gladly discuss alternatives.
+++
title = "Integrating Box with HCC"
description = "How to integrate Box with HCC"
weight = 50
+++
NU has come to an arrangement
with <a href="https://www.box.com/" class="external-link">Box.com</a> to
provide unlimited cloud storage to every student, staff, and faculty
member. This can be useful when used with jobs to automatically upload
results when the job has completed. Combined with
<a href="https://sites.box.com/sync4/" class="external-link">Box Sync</a>,
the uploaded files can be sync'd to your laptop or desktop upon job
completion. The upload and download speed of Box is about 20 to 30 MB/s
in good network traffic conditions. Users can use a tool called lftp to transfer files between HCC clusters and their Box accounts.
---
### Step-by-step guide for Lftp
1. Login to your [UNK Box.com](https://unk.account.box.com/), [UNL Box.com](https://unl.account.box.com/), or [UNO Box.com](https://unomaha.account.box.com/) account.
2. Since we are going to be using [webdav](https://en.wikipedia.org/wiki/WebDAV) protocol to access your [Box.com](https://www.box.com/) storage, you need to create an **External Password**. In the Box.com interface, you can create it at **Account Settings > Account > Authentication > Create Password.**
{{< figure src="/images/box_create_external_password.png" class="img-border" >}}
3. After logging into the cluster of your choice, load the `lftp` module by entering the command below at the prompt:
{{% panel theme="info" header="Load the lftp module" %}}
{{< highlight bash >}}
module load lftp
{{< /highlight >}}
{{% /panel %}}
4. Connect to Box using your full email as the username and external password you created:
{{% panel theme="info" header="Connect to Box" %}}
{{< highlight bash >}}
lftp -u <username> ftps://ftp.box.com
Password: <password>
{{< /highlight >}}
{{% /panel %}}
5. Test the connection by running the `ls` command. You should see a listing of your Box files. Assuming it works, add a bookmark named "box" to use when connecting later. Optionally run `set bmk:save-passwords yes` first if you want lftp to remember the password:
{{% panel theme="info" header="Add lftp bookmark" %}}
{{< highlight bash >}}
lftp demo2@unl.edu@ftp.box.com:/> set bmk:save-passwords yes
lftp demo2@unl.edu@ftp.box.com:/> bookmark add box
{{< /highlight >}}
{{% /panel %}}
6. Exit `lftp` by typing `quit`. To reconnect later, use the bookmark name:
{{% panel theme="info" header="Connect using bookmark name" %}}
{{< highlight bash >}}
lftp box
{{< /highlight >}}
{{% /panel %}}
7. To upload or download files, use `get` and `put` commands. For example:
{{% panel theme="info" header="Transferring files" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> put myfile.txt
lftp demo2@unl.edu@ftp.box.com:/> get my_other_file.txt
{{< /highlight >}}
{{% /panel %}}
8. To download directories, use the `mirror` command. To upload directories, use the `mirror` command with the `-R` option. For example, to download a directory named `my_box_dir` to your current directory:
{{% panel theme="info" header="Download a directory from Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> mirror my_box_dir
{{< /highlight >}}
{{% /panel %}}
To upload a directory named `my_hcc_dir` to Box, use `mirror` with the `-R` option:
{{% panel theme="info" header="Upload a directory to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> mirror -R my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
9. Lftp also supports using scripts to transfer files. This can be used to automatically download or upload files during jobs. For example, create a file called "transfer.sh" with the following lines:
{{% panel theme="info" header="transfer.sh" %}}
{{< highlight bash >}}
open box
get some_input_file.tar.gz
put my_output_file.tar.gz
{{< /highlight >}}
{{% /panel %}}
To run this script, do:
{{% panel theme="info" header="Run transfer.sh" %}}
{{< highlight bash >}}
module load lftp
lftp -f transfer.sh
{{< /highlight >}}
{{% /panel %}}
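As one possible pattern, a job can stage its results to Box as a final step. The submit script below is a minimal sketch assuming a hypothetical analysis command and the `transfer.sh` script shown above:
{{% panel theme="info" header="Sketch: uploading job output with lftp" %}}
{{< highlight bash >}}
#!/bin/sh
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
# Hypothetical analysis step that produces my_output_file.tar.gz
./my_analysis
# Fetch inputs / upload results according to transfer.sh
module load lftp
lftp -f transfer.sh
{{< /highlight >}}
{{% /panel %}}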
+++
title = "Data Transfer"
description = "How to transfer data to/from HCC resources."
weight = "30"
+++
### [Globus Connect]({{< relref "../data_transfer/globus_connect/" >}})
For moving large amounts of data into or out of HCC resources, users are
highly encouraged to consider using [Globus
Connect]({{< relref "../data_transfer/globus_connect/" >}}).
---
### Using Box
You can use your [UNL
Box.com]({{< relref "integrating_box_with_hcc" >}}) account to download and
upload files from any of the HCC clusters.
+++
title = "File Transfer with CyberDuck"
description = "Transfering data to and from HCC clusters with the Cyberduck SCP Client"
weight = "30"
+++
---
## Using Cyberduck
If you wish to use a GUI for data transfer, be aware that not all programs will function
correctly with Duo two-factor authentication. Mac users are recommended
to use [Cyberduck](https://cyberduck.io). It is compatible with Duo, but a
few settings need to be changed.
Under **Preferences - General**, change the default protocol to SFTP:
{{< figure src="/images/7274497.png" height="450" >}}
Under **Preferences - Transfers**, reuse the browser connection for file
transfers. This will avoid the need to reenter your password for every
file transfer:
{{< figure src="/images/7274498.png" height="450" >}}
Finally, under **Preferences - SFTP**, set the file transfer method to
SCP:
{{< figure src="/images/7274499.png" height="450" >}}
To add an HCC machine, in the bookmarks pane click the "+" icon:
{{< figure src="/images/7274500.png" height="450" >}}
Ensure the type of connection is SFTP. Enter the hostname of the machine
you wish to connect to (crane.unl.edu, rhino.unl.edu) in the **Server**
field, and your HCC username in the **Username** field. The
**Nickname** field is arbitrary, so enter whatever you prefer.
{{< figure src="/images/7274501.png" height="450" >}}
After you add the bookmark, double-click it to connect.
{{< figure src="/images/7274505.png" height="450" >}}
Enter your HCC username and password in the dialog box that will appear
and click *Login*.
{{< figure src="/images/7274508.png" height="450" >}}
A second login dialogue will now appear. Notice the text has changed to
say Duo two-factor.
{{< figure src="/images/7274510.png" height="450" >}}
Clear the **Password** field in the dialogue. If you are using the Duo
Mobile app, enter '1' to have a push notification sent to your phone or
tablet. If you are using a Yubikey, ensure the cursor is active in the
**Password** field, and press the button on the Yubikey.
{{< figure src="/images/7274509.png" height="450" >}}
The login should complete and you can simply drag and drop files to or
from the window.
{{< figure src="/images/7274511.png" height="450" >}}
If you run into issues with two-factor authentication, try the command below for a quick fix:
{{< highlight bash >}}
$ rm -rf ~/Library/"Application Support"/Cyberduck
{{< /highlight >}}
+++
title = "Activating HCC Cluster Endpoints"
description = "How to activate HCC endpoints on Globus"
weight = 20
+++
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`), Rhino (`hcc#rhino`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
1. [Sign in](https://www.globus.org/SignIn) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
2. Find the endpoint you want by entering '`hcc#crane`', '`hcc#rhino`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
3. You will be redirected to the HCC Globus Endpoint Activation page. Enter your *HCC* username and password (the password you usually use to log into the HCC clusters).
{{< figure src="/images/hccEndpoint.png" >}}
4. Next you will be prompted to
provide your *Duo* credentials. If you use the Duo Mobile app on
your smartphone or tablet, select 'Duo Push'. Once you approve the notification that is sent to your phone,
the activation will be complete. If you use a Yubikey for
authentication, select the 'Passcode' option and then press your
Yubikey to complete the activation. Upon successful activation, you
will be redirected to your Globus *Manage Endpoints* page.
{{< figure src="/images/EndpointPush.png" >}}
{{< figure src="/images/endpointComplete.png" >}}
The endpoint should now be ready
and will not have to be activated again for the next 7 days.
To transfer files between any two HCC clusters, you will need to
activate both endpoints individually.
Next, learn how to [make file transfers between HCC endpoints]({{< relref "/handling_data/data_transfer/globus_connect/file_transfers_between_endpoints" >}}) or how to [transfer between HCC endpoints and a personal computer]({{< relref "/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations" >}}).
---
+++
title = "File Sharing"
description = "How to share files using Globus"
weight = 50
+++
If you would like another colleague or researcher to have access to your
data, you may create a shared endpoint on Crane, Rhino, or Attic. You can personally manage access to this endpoint and
give access to anybody with a Globus account (whether or not
they have an HCC account). *Please use this feature responsibly by
sharing only what is necessary and granting access only to trusted
users.*
{{% notice info %}}
Shared endpoints created in your `home` directory on HCC servers (with
the exception of Attic) are *read-only*. You may create readable and
writable shared endpoints in your `work` directory (or `/shared`).
{{% /notice %}}
1. Sign in to your Globus account, click on the 'Endpoints' tab
and search for the endpoint that you will use to host your shared
endpoint. For example, if you would like to share data in your
Crane `work` directory, search for the `hcc#crane` endpoint. Once
you have found the endpoint, it will need to be activated if it has
not been already (see [endpoint activation instructions
here]({{< relref "activating_hcc_cluster_endpoints" >}})).
If it is already activated, select the endpoint by clicking on the
name. Then select the 'share' button on the right sidebar.
{{< figure src="/images/sharedEndpoint.png" >}}
{{< figure src="/images/shareButton.png" >}}
2. In the 'Path' box, enter the full path to the directory you
would like to share. Only files under this directory will be shared
to the endpoint users you grant access. Enter a descriptive endpoint
name and provide a
short description of the endpoint if you wish. Finally, click 'Create Share'.
{{< figure src="/images/createShare.png" >}}
3. Type the Globus ID (or group name) of the user (or group) to whom you would like to grant
access to this endpoint. Next enter the *relative path* of the
directory that this user should be able to access. For example, if
the source path of your shared endpoint
is `/work/<groupid>/<userid>/share` but you would like your
colleague to only have access
to `/work/<groupid>/<userid>/share/dataX`, then the 'Path' should be
entered as simply `/dataX`. Finally, click the blue 'Add Permission' button.
You should see the user or group added to the list.
{{< figure src="/images/addPermission.png" >}}
{{< figure src="/images/sharedGroup.png" >}}
---
+++
title = "File Transfers Between Endpoints"
description = "How to transfer files between HCC clusters using Globus"
weight = 30
+++
To transfer files between HCC clusters, you will first need to
[activate]({{< relref "/handling_data/data_transfer/globus_connect/activating_hcc_cluster_endpoints" >}}) the
two endpoints you would like to use (the available endpoints
are: `hcc#crane`, `hcc#rhino`, and `hcc#attic`). Once
that has been completed, follow the steps below to begin transferring
files. (Note: You can also transfer files between an HCC endpoint and
any other Globus endpoint for which you have authorized access. That
may include a [personal
endpoint]({{< relref "/handling_data/data_transfer/globus_connect/file_transfers_to_and_from_personal_workstations" >}}),
a [shared
endpoint]({{< relref "/handling_data/data_transfer/globus_connect/file_sharing" >}}),
or an endpoint on another computing resource or cluster. Once the
endpoints have been activated, the file transfer process is generally
the same regardless of the type of endpoints you use. For demonstration
purposes we use two HCC endpoints.)
1. Once both endpoints for the desired file transfer have been
activated, [sign in](https://www.globus.org/SignIn) to
your Globus account (if you are not already) and select
"Transfer or Sync to.." from the right sidebar. If you have
a small screen, you may have to click the menu icon
first.
{{< figure src="/images/Transfer.png">}}
2. Enter the names of the two endpoints you would like to use, or
select from the drop-down menus (for
example, `hcc#attic` and `hcc#crane`). Enter the
directory paths for both the source and destination (the 'from' and
'to' paths on the respective endpoints). Press 'Enter' to view files
under these directories. Select the files or directories you would
like to transfer (press *shift* or *control* to make multiple
selections) and click the blue highlighted arrow to start the
transfer.
{{< figure src="/images/startTransfer.png" >}}
3. Globus will display a message when your transfer has completed
(or in the unlikely event that it was unsuccessful), and you will
also receive an email. Select the 'refresh' icon to see your file
in the destination folder.
{{< figure src="/images/transferComplete.png" >}}
---
+++
title = "File Transfer with scp"
description = "Transfering data to and from HCC clusters with the scp command"
weight = "10"
+++
## Using the SCP command
For macOS, Linux, and newer Windows users, file transfer between your personal computer
and the HCC supercomputers can be achieved through the `scp` command, which stands for secure copy.
This method is ideal for quick transfers of smaller files. For large volume transfers,
we recommend using Globus Connect or an SCP client such as [WinSCP for Windows]({{< relref "winscp">}}) or
[CyberDuck for Mac/Linux]({{< relref "cyberduck">}}).
Just like the `cp` copy command, the `scp` command requires two arguments,
the path to source file(s) and the path to the target location.
Since one or more of these locations are remote, you will need to specify the username and host for those.
{{< highlight bash >}}
$ scp <username>@<host>:<path_to_files> <username>@<host>:<path_to_files>
{{< /highlight >}}
For the local location, you do not need to specify the username or host. **When transferring to and from
your local computer, the `scp` command should be run on your computer, NOT from HCC clusters.**
### Uploading a file to Crane
Here is an example of file transfer to and from the Crane cluster.
To upload the file `data.csv` in your current directory to your `$WORK` directory
on the Crane cluster, you would use the command:
{{< highlight bash >}}
$ scp ./data.csv <user_name>@crane.unl.edu:/work/<group_name>/<user_name>
{{< /highlight >}}
where `<user_name>` and `<group_name>` are replaced with your user name and your group name.
### Downloading a file from Crane
To download the file `data.csv` from your `$WORK` directory
on the Crane cluster to your current directory, you would use the command:
{{< highlight bash >}}
$ scp <user_name>@crane.unl.edu:/work/<group_name>/<user_name>/data.csv ./
{{< /highlight >}}
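To transfer an entire directory, add the `-r` (recursive) flag. For example, to upload a hypothetical directory named `results` to your `$WORK` directory on Crane:
{{< highlight bash >}}
$ scp -r ./results <user_name>@crane.unl.edu:/work/<group_name>/<user_name>
{{< /highlight >}}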
+++
title = "Using Rclone for File Transfer"
description = "How to use Rclone with HCC"
weight =60
+++
Rclone is an open source file transfer tool that simplifies moving files between your local machine and various cloud resources, such as Box, Amazon S3, Microsoft OneDrive, and Google Cloud Storage. Guides on how to set up a variety of resources to transfer to and from can be found at [rclone's webpage](https://rclone.org/).
This tool can be used to transfer files between HCC clusters and outside cloud providers, such as Box.
---
### Setup RClone
1. You need to create your UNL [Box.com](https://www.box.com/) account [here](https://box.unl.edu/).
2. Because the clusters are remote machines, Rclone will need to be installed on your [local machine](https://rclone.org/downloads/) in order to authorize Box. Some services, such as Google Drive, do not require Rclone to be installed on your local machine.
3. After logging into the cluster of your choice, load the `rclone` module by entering the command below at the prompt:
{{% panel theme="info" header="Load the Rclone module" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ module load rclone
{{< /highlight >}}
{{% /panel %}}
4. We will need to start the basic configuration for box. To do this run `rclone config`:
{{% panel theme="info" header="Load the rclone config" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone config
{{< /highlight >}}
{{% /panel %}}
5. In a new configuration, you will see "No remotes found". Enter `n` to make a new remote and give it a name you will recognize. In our example, we will use "UNLBox". Select Box by entering the corresponding number, in our case `6`. Press Enter to leave the client_id and client_secret blank, then enter `y` to edit the advanced config. Due to Box's file size limit, set the upload_cutoff to `15G` and press Enter, leaving commit_retries at the default by pressing Enter. When you are prompted for auto config, enter `n` and switch to a terminal on your local machine:
{{% panel theme="info" header="Configure box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> UNLBox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
6 / Box
\ "box"
Storage> 6
Box App Client Id.
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Box App Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Cutoff for switching to multipart upload (>= 50MB).
Enter a size with suffix k,M,G,T. Press Enter for the default ("50M").
upload_cutoff> 15G
Max number of times to try committing a multipart file.
Enter a signed integer. Press Enter for the default ("100").
commit_retries>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> n
For this to work, you will need Rclone available on a machine that has a web browser available.
Execute the following on your machine:
rclone authorize "box"
Then paste the result below:
result>
{{< /highlight >}}
{{% /panel %}}
6. Run `rclone authorize "box"` on your local machine. If a browser doesn't open automatically, you will be prompted to go to a 127.0.0.1 address in your web browser. Select `Use Single Sign On (SSO)` at the bottom and then enter your UNL e-mail address. You will be taken to sign into UNL's Box using your **Canvas** credentials. Select `Grant access to Box`. Finally, copy the resulting token from your local machine, paste it at the `result>` prompt on the cluster, and confirm that the config is correct.
{{% panel theme="info" header="List contents of Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone authorize "box"
{{< /highlight >}}
{{% /panel %}}
{{< figure src="/images/BoxSSO.png" height="500" class="img-border">}}
{{% panel theme="info" header="Local Config" %}}
{{< highlight bash >}}
[demo2@local.machine ~]$ rclone authorize "box"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize Rclone for access
Waiting for code...
{{< /highlight >}}
{{% /panel %}}
For other services, please refer to the [rclone documentation](https://rclone.org/).
7. Test the connection by running the `ls` command. You should see a listing of your Box files.
{{% panel theme="info" header="List contents of Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone ls UNLBox:/
{{< /highlight >}}
{{% /panel %}}
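If you only want to see the top-level folders rather than every file, rclone's `lsd` subcommand lists directories; a small sketch using the same remote name:
{{% panel theme="info" header="List top-level folders in Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone lsd UNLBox:/
{{< /highlight >}}
{{% /panel %}}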
8. To upload or download files, use the `rclone copy` command. For example:
{{% panel theme="info" header="Transferring files" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy UNLBox:/SomeFile.txt ./
[demo2@login.crane ~]$ rclone copy ./SomeFile.txt UNLBox:/
{{< /highlight >}}
{{% /panel %}}
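For larger transfers it can be helpful to watch the transfer as it runs; rclone's `--progress` (or `-P`) flag prints live transfer statistics. A small sketch, reusing the example file name from above:
{{% panel theme="info" header="Transferring files with progress output" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy ./SomeFile.txt UNLBox:/ --progress
{{< /highlight >}}
{{% /panel %}}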
9. To download directories, use the `rclone copy` command with directory names instead of file names. This copies the contents of the folder, so you need to specify a destination folder.
{{% panel theme="info" header="Download a directory from Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy UNLBox:/my_hcc_dir ./my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
10. To upload a directory named `my_hcc_dir` to Box, use `rclone copy` in the same way.
{{% panel theme="info" header="Upload a directory to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy ./my_hcc_dir UNLBox:/my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
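Rclone can also limit a copy to files matching a pattern with its filter flags. A minimal sketch, assuming you only want to upload the `.txt` files from `my_hcc_dir` (the pattern is just an example):
{{% panel theme="info" header="Upload only matching files to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy ./my_hcc_dir UNLBox:/my_hcc_dir --include "*.txt"
{{< /highlight >}}
{{% /panel %}}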
11. Rclone also supports `sync` to transfer files, similar to rsync. The syntax is similar to `rclone copy`, but only files that have changed are transferred. Note that `rclone sync` makes the destination match the source, so files in the destination that are not in the source will be deleted. The example below would sync the files of the local directory to the remote directory on Box.
{{% panel theme="info" header="Sync a directory to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone sync ./my_hcc_dir UNLBox:/my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
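Because `rclone sync` can delete files from the destination, it is worth previewing a sync first; rclone's `--dry-run` flag reports what would change without making any changes. A minimal sketch:
{{% panel theme="info" header="Preview a sync without changing anything" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone sync ./my_hcc_dir UNLBox:/my_hcc_dir --dry-run
{{< /highlight >}}
{{% /panel %}}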
---
title: "Redirector"
---
<script>
// Redirector for hcc-docs links
// Search for URL parameter 'q' and redirect to top match
var lunrIndex, pagesIndex;
function getQueryVariable(variable) {
var query = window.location.search.substring(1);
var vars = query.split('&');
for (var i = 0; i < vars.length; i++) {
var pair = vars[i].split('=');
if (pair[0] === variable) {
return decodeURIComponent(pair[1].replace(/\+/g, '%20'));
}
}
}
// Initialize lunrjs using our generated index file
function initLunr() {
// First retrieve the index file
return $.getJSON(baseurl + "/index.json")
.done(function(index) {
pagesIndex = index;
// Set up lunrjs by declaring the fields we use
// Also provide their boost level for the ranking
lunrIndex = new lunr.Index;
lunrIndex.ref("uri");
lunrIndex.field('title', {
boost: 15
});
lunrIndex.field('tags', {
boost: 10
});
lunrIndex.field("content", {
boost: 5
});
// Feed lunr with each file and let lunr actually index them
pagesIndex.forEach(function(page) {
lunrIndex.add(page);
});
lunrIndex.pipeline.remove(lunr.stemmer);
})
.fail(function(jqxhr, textStatus, error) {
var err = textStatus + ", " + error;
console.error("Error getting Hugo index file:", err);
});
}
function search(query) {
// Find the item in our index corresponding to the lunr one to have more info
return lunrIndex.search(query).map(function(result) {
return pagesIndex.filter(function(page) {
return page.uri === result.ref;
})[0];
});
}
initLunr().then(function() {
var searchTerm = getQueryVariable('q');
// If no 'q' parameter was given, fall back to the site root.
if (!searchTerm) {
window.location = baseurl;
return;
}
// Replace non-word chars with space. lunr doesn't like quotes.
searchTerm = searchTerm.replace(/[\W_]+/g," ");
var results = search(searchTerm);
if (!results.length) {
window.location = baseurl;
} else {
window.location = results[0].uri;
}
});
</script>
+++
title = "Application Specific Guides"
weight = "100"
+++
In-depth guides for running applications on HCC resources
--------------------------------------
{{% children description="true" %}}
+++
title = "Available Partitions"
description = "Listing of partitions on Crane and Rhino."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
weight=70
+++
Partitions are used on Crane and Rhino to distinguish different
resources. You can view the partitions with the command `sinfo`.
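For example, the following asks `sinfo` to print each partition with its availability, time limit, and node count (this particular format string is just one option):
{{% panel theme="info" header="Viewing partitions with sinfo" %}}
{{< highlight bash >}}
sinfo --format="%P %a %l %D"
{{< /highlight >}}
{{% /panel %}}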
### Crane:
[Full list for Crane]({{< relref "crane_available_partitions" >}})
### Rhino:
[Full list for Rhino]({{< relref "rhino_available_partitions" >}})
#### Priority for short jobs
To run short jobs for testing and development work, a job can specify a
different quality of service (QoS). The *short* QoS increases a job's
priority so it will run as soon as possible.
| SLURM Specification |
|----------------------- |
| `#SBATCH --qos=short` |
{{% panel theme="warning" header="Limits per user for 'short' QoS" %}}
- 6 hour job run time
- 2 jobs of 16 CPUs or fewer
- No more than 256 CPUs in use for *short* jobs from all users
{{% /panel %}}
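As a sketch, a submit script for a small test job might combine the short QoS with modest resource requests like the following (the program name and resource numbers are only examples and must stay within the limits above):
{{% panel theme="info" header="Example submit script using the short QoS" %}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --qos=short
#SBATCH --time=01:00:00
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=1024
#SBATCH --job-name=short-test
./my_test_program
{{< /highlight >}}
{{% /panel %}}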
### Limitations of Jobs
Overall limits on maximum job wall time, CPUs, etc. are set for
all jobs with the default setting (when the "--qos=" option is omitted)
and for "short" jobs (described above) on Crane and Rhino.
The limits are shown in the following table.
| | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User |
| ------- | -------------------- | ---------------- | ----------------- | ----------------- |
| Default | Leave blank | 7 days | 2000 | 1000 |
| Short | `#SBATCH --qos=short` | 6 hours | 16 | 2 |
Please also note that the memory and
local hard drive limits are subject to the physical limitations of the
nodes, described in the resources capabilities section of the
[HCC Documentation]({{< relref "/#resource-capabilities" >}})
and the partition sections above.
### Owned Partitions
Partitions marked as owned by a group only accept jobs from specific groups.
Groups are manually added to the list of those allowed to submit jobs to the
partition. If you are unable to submit jobs to a partition and you feel that
you should be able to, please
contact {{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu).
### Guest Partition
The `guest` partition can be used by users and groups that do not own
dedicated resources on Crane or Rhino. Jobs running in the `guest` partition
will run on the owned resources with Intel OPA interconnect. The jobs
are preempted when the resources are needed by the resource owners and
are restarted on another node.
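A job can target it with the standard SLURM partition option; for example:
{{% panel theme="info" header="SLURM Specification: guest partition" %}}
{{< highlight bash >}}
#SBATCH --partition=guest
{{< /highlight >}}
{{% /panel %}}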
### tmp_anvil Partition
We have put Anvil nodes which are not running OpenStack in this
partition. They have two Intel Xeon E5-2650 v3 2.30GHz CPUs (20 cores) and
256GB of memory per node. However, they don't have Infiniband or OPA
interconnect, so they are suitable for serial or single-node parallel jobs.
The nodes in this partition may be drained and moved to our
OpenStack cloud without advance notice when more cloud resources are
needed.
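Similarly, a single-node job can request these nodes by naming the partition; a minimal sketch (the core count matches the 20 cores per node described above):
{{% panel theme="info" header="SLURM Specification: tmp_anvil partition" %}}
{{< highlight bash >}}
#SBATCH --partition=tmp_anvil
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=20
{{< /highlight >}}
{{% /panel %}}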
### Use of Infiniband or OPA
Crane nodes use either Infiniband or Intel Omni-Path interconnects in
the batch partition. Most users don't need to worry about which one to
choose; the scheduler will automatically place jobs on either of them.
However, if you want to use one of the interconnects
exclusively, the SLURM constraint keyword is available. Here are some
examples:
{{% panel theme="info" header="SLURM Specification: Omni-Path" %}}
{{< highlight bash >}}
#SBATCH --constraint=opa
{{< /highlight >}}
{{% /panel %}}
{{% panel theme="info" header="SLURM Specification: Infiniband" %}}
{{< highlight bash >}}
#SBATCH --constraint=ib
{{< /highlight >}}
{{% /panel %}}