+++
title = "2017"
description = "Historical listing of various HCC events for the year 2017."
weight = "20"
+++
Historical listing of HCC Events
----------
{{% children %}}
+++
title = "UNK Linear Algebra, Feb. 28, 2017"
description = "UNK Linear Algebra, Feb. 28, 2017."
+++
**If at any time today you have difficulties or become lost, please
place the <span style="color: rgb(255,0,0);">red</span> sticky note on
top of your monitor and a helper will be around to assist you.**
For these instructions, any commands to be typed into the terminal will be formatted `like this`.
**What is a cluster:**
----------------------
![cluster image](/images/cluster_small.png)
(picture courtesy of:
[http://training.h3abionet.org/technical_workshop_2015/?page_id=403](http://training.h3abionet.org/technical_workshop_2015/?page_id=403))
**To connect to the Crane cluster:**
------------------------------------
- Insert the Yubikey into a USB port on the computer. There should be a
small green light in the middle of the Yubikey to indicate it is
inserted correctly.
- Open your preferred web browser and navigate to
[http://go.unl.edu/cranessh](http://go.unl.edu/cranessh)
- Click "Start SSH session to crane.unl.edu"
{{% notice info %}}
The link above is no longer available. If you wish to use a terminal in your browser, Sandstone is an option [https://hcc.unl.edu/docs/guides/sandstone/](https://hcc.unl.edu/docs/guides/sandstone/)
{{% /notice %}}
![](/images/ssh.png)
- Click the "Terminal: SSH" icon to begin the SSH session
![](/images/terminalSSH.png)
- Type in the provided Username and Password. Note that the password
will not be displayed on screen; even though nothing appears, your
password is being entered as you type.
- At the "Passcode:" prompt, press your finger to the gold circle in
the middle of the Yubikey until a string of characters appears on
screen.
![](/images/yubikey.png)
- If you logged in successfully, your screen should look similar to
the one below
![](/images/crane_login.png)
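If you prefer a local terminal (Mac, Linux, or a Windows SSH client) over the browser-based session, a standard SSH connection to the cluster works as well; replace `<username>` with your HCC username:
{{< highlight bash >}}
ssh <username>@crane.unl.edu
{{< /highlight >}}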
**Linux Commands Reference List:**
----------------------------------
[https://hcc.unl.edu/docs/quickstarts/connecting/basic_linux_commands/](https://hcc.unl.edu/docs/quickstarts/connecting/basic_linux_commands/)
**To run MATLAB interactively:**
--------------------------------
- After logging into the cluster, navigate to your $WORK directory:
- `cd $WORK`
- Request an interactive job:
- `srun --reservation=unk --mem=4096 --pty $SHELL`
- Load the MATLAB module:
- `module load matlab`
- Run MATLAB:
- `matlab`
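Putting these steps together, a typical interactive session looks like the sketch below (the `unk` reservation was specific to this workshop; omit `--reservation=unk` outside the workshop):
{{< highlight bash >}}
cd $WORK                                         # move to your work directory
srun --reservation=unk --mem=4096 --pty $SHELL   # request an interactive job
module load matlab                               # load the MATLAB module
matlab                                           # start MATLAB
{{< /highlight >}}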
**To access the MATLAB Tutorial:**
----------------------------------
- Navigate to your $WORK directory:
- `cd $WORK`
- Clone the GitHub repo containing the tutorial files:
- `git clone https://github.com/unlhcc/HCCWorkshops.git`
+++
title = "R for Biologists, March 8, 2017"
description = "R for Biologists, March 8, 2017."
+++
**We will be utilizing <span style="color: rgb(255,0,0);">red</span> and
<span style="color: rgb(51,153,102);">green</span> sticky notes today.
If you run into problems or have questions,**
**please place the <span style="color: rgb(255,0,0);">red</span> sticky
note on the back of your computer screen and a helper will assist you.**
If you have not already requested an HCC account under the rcourse998
group, please do so
[here](https://hcc.unl.edu/new-user-request)
If you already have an HCC account and need to be added to the
rcourse998 group, please let us know.
If you have not previously set up Duo Authentication, please ask for
assistance.
**Set up Instructions:**
**Windows:**
For Windows, we will use two third-party
applications, **PuTTY** and **WinSCP**, for demonstration.
PuTTY:
[http://www.putty.org/](http://www.putty.org/)
WinSCP:
[http://winscp.net/eng/download.php](http://winscp.net/eng/download.php)
**Mac/Linux:**
Mac and Linux users will need to download and install **Cyberduck**.
Detailed information for downloading and setting up Cyberduck can be
found here: [For Mac/Linux Users](https://cyberduck.io/)
**Linux Commands Reference List:**
[https://hcc.unl.edu/docs/quickstarts/connecting/basic_linux_commands/](https://hcc.unl.edu/docs/quickstarts/connecting/basic_linux_commands/)
**R core and R Studio:**
We will be writing scripts offline in RStudio and then uploading them to
execute on the cluster. This lesson assumes you have R core and
RStudio installed. If you do not, you can install them here:
R core: [https://cloud.r-project.org/](https://cloud.r-project.org/)
RStudio: [https://www.rstudio.com/products/rstudio/download/](https://www.rstudio.com/products/rstudio/download/)
**Required Packages:**
We will also be using the dplyr, ggplot2, and maps packages. If you do not
have these installed, please install them now. You can do so using the
following commands inside the RStudio console:
install.packages("dplyr")
install.packages("ggplot2")
install.packages("maps")
**What is a cluster:**
![](/images/cluster_small.png)
(picture courtesy
of: [http://training.h3abionet.org/technical_workshop_2015/?page_id=403](http://training.h3abionet.org/technical_workshop_2015/?page_id=403))
### To download the tutorial files:
- Navigate to your $WORK directory:
- `cd $WORK`
- Clone the GitHub repo containing the tutorial files:
- `git clone https://github.com/unlhcc/HCCWorkshops.git`
Take Home Exercise:
[Data Analysis in R](https://unl.box.com/s/8i647f8are21tc11la0jqk2xddlg19wy) - Please note
that at the bottom of page three, there is a missing parenthesis at
the end of the last command.
The final code chunk should read:
# Calculate flight age using birthmonth
age <- data.frame(names(acStart), acStart, stringsAsFactors=FALSE)
colnames(age) <- c("TailNum", "acStart")
flights <- left_join(flights, age, by="TailNum")
flights <- mutate(flights, Age = (flights$Year * 12) + flights$Month - flights$acStart)
+++
title = "2018"
description = "Listing of various HCC events for the year 2018."
weight = "10"
+++
Historical listing of HCC Events
----------
{{% children %}}
+++
title = "Events"
description = "Historical listing of various HCC events."
weight = "30"
+++
Historical listing of HCC Events
----------
{{% children sort="weight" description="true" %}}
---
title: "Facilities of the Holland Computing Center"
---
This document details the equipment resident in the Holland Computing Center (HCC) as of November 2018.
HCC has two primary locations directly interconnected by a pair of 10 Gbps fiber optic links (20 Gbps total). The 1800 sq. ft. HCC machine room at the Peter Kiewit Institute (PKI) in Omaha can provide up to 500 kVA in UPS and genset protected power, and 160 ton cooling. A 2200 sq. ft. second machine room in the Schorr Center at the University of Nebraska-Lincoln (UNL) can currently provide up to 100 ton cooling with up to 400 kVA of power. One Brocade MLXe router and two Dell Z9264F-ON core switches in each location provide both high WAN bandwidth and Software Defined Networking (SDN) capability. The Schorr machine room connects to campus and Internet2/ESnet at 100 Gbps while the PKI machine room connects at 10 Gbps. HCC uses multiple data transfer nodes as well as a FIONA (flash IO network appliance) to facilitate end-to-end performance for data intensive workflows.
HCC's resources at UNL include two distinct offerings: Rhino and Red. Rhino is a linux cluster dedicated to general campus usage with 7,040 compute cores interconnected by low-latency Mellanox QDR InfiniBand networking. 360 TB of BeeGFS storage is complemented by 50 TB of NFS storage and 1.5 TB of local scratch per node. Each compute node is a Dell R815 server with at least 192 GB RAM and 4 Opteron 6272 / 6376 (2.1 / 2.3 GHz) processors.
The largest machine on the Lincoln campus is Red, with 9,536 job slots interconnected by a mixture of 1, 10, and 40 Gbps ethernet. More importantly, Red serves up over 6.6 PB of storage using the Hadoop Distributed File System (HDFS). Red is integrated with the Open Science Grid (OSG), and serves as a major site for storage and analysis in the international high energy physics project known as CMS (Compact Muon Solenoid).
HCC's resources at PKI (Peter Kiewit Institute) in Omaha include Crane, Anvil, Attic, and Common storage.
Crane debuted at #474 on the Top500 list with an HPL benchmark of 121.8 TeraFLOPS. Intel Xeon chips (8-core, 2.6 GHz) provide the processing with 4 GB RAM available per core and a total of 12,236 cores. The cluster shares 1.5 PetaBytes of Lustre storage and contains HCC's GPU resources. We have since expanded the existing cluster: 96 nodes with new Intel Xeon E5-2697 v4 chips and 100 Gbps Intel Omni-Path interconnect were added to Crane. Moreover, Crane has 21 GPU nodes with 57 NVIDIA GPUs in total, enabling state-of-the-art research, from drug discovery to deep learning.
Anvil is an OpenStack cloud environment consisting of 1,520 cores and 400 TB of Ceph storage, all connected by 10 Gbps networking. The Anvil cloud exists to address the needs of NU researchers that cannot be served by traditional scheduler-based HPC environments, such as GUI applications, Windows-based software, test environments, and persistent services. In addition, a project to expand the Ceph storage by 1.1 PB is in progress.
Attic and Silo form a near line archive with 1.0 PB of usable storage. Attic is located at PKI in Omaha, while Silo acts as an online backup located in Lincoln. Both Attic and Silo are connected with 10 Gbps network connections.
In addition to the cluster specific Lustre storage, a shared common storage space exists between all HCC resources with 1.9PB capacity.
These resources are detailed further below.
# 1. HCC at UNL Resources
## 1.1 Rhino
* 107 4-socket Opteron 6172 / 6376 (16-core, 2.1 / 2.3 GHz) with 192 or 256 GB RAM
* 2x with 512 GB RAM, 2x with 1024 GB RAM
* Mellanox QDR InfiniBand
* 1 and 10 GbE networking
* 5x Dell N3048 switches
* 50TB shared storage (NFS) -> /home
* 360TB BeeGFS storage over Infiniband -> /work
* 1.5TB local scratch
## 1.2 Red
* USCMS Tier-2 resource, available opportunistically via the Open Science Grid
* 60 2-socket Xeon E5530 (2.4GHz) (16 slots per node)
* 16 2-socket Xeon E5520 (2.27 GHz) (16 slots per node)
* 36 2-socket Xeon X5650 (2.67GHz) (24 slots per node)
* 16 2-socket Xeon E5-2640 v3 (2.6GHz) (32 slots per node)
* 40 2-socket Xeon E5-2650 v3 (2.3GHz) (40 slots per node)
* 24 4-socket Opteron 6272 (2.1 GHz) (64 slots per node)
* 28 2-socket Xeon E5-2650 v2 (2.6GHz) (32 slots per node)
* 48 2-socket Xeon E5-2660 (2.2GHz) (32 slots per node)
* 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
* 2 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
* 10.8 PB HDFS storage
* Mix of 1, 10, and 40 GbE networking
* 1x Dell S6000-ON switch
* 2x Dell S4048-ON switch
* 5x Dell S3048-ON switches
* 2x Dell S4810 switches
* 2x Dell N3048 switches
## 1.3 Silo (backup mirror for Attic)
* 1 Mercury RM216 2U Rackmount Server 2 Xeon E5-2630 (12-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
# 2. HCC at PKI Resources
## 2.1 Crane
* 452 Relion 2840e systems from Penguin
* 452x with 64 GB RAM
* 2-socket Intel Xeon E5-2670 (8-core, 2.6GHz)
* Intel QDR InfiniBand
* 96 nodes from multiple vendors
* 59x with 256 GB RAM
* 37x with 512 GB RAM
* 2-socket Intel Xeon E5-2697 v4 (18-core, 2.3GHz)
* Intel Omni-Path
* 1 and 10 GbE networking
* 4x 10 GbE switch
* 14x 1 GbE switches
* 1500 TB Lustre storage over InfiniBand
* 3 Supermicro SYS-6016GT systems
* 48 GB RAM
* 2-socket Intel Xeon E5620 (4-core, 2.4GHz)
* 2 Nvidia M2070 GPUs
* 3 Supermicro SYS-1027GR-TSF systems
* 128 GB RAM
* 2-socket Intel Xeon E5-2630 (6-core, 2.3GHz)
* 3 Nvidia K20M GPUs
* 1 Supermicro SYS-5017GR-TF systems
* 32 GB RAM
* 1-socket Intel Xeon E5-2650 v2 (8-core, 2.6GHz)
* 2 Nvidia K40C GPUs
* 5 Supermicro SYS-2027GR-TRF systems
* 64 GB RAM
* 2-socket Intel Xeon E5-2650 v2 (8-core, 2.6GHz)
* 4 Nvidia K40M GPUs
* 2 Supermicro SYS-5018GR-T systems
* 64 GB RAM
* 2-socket Intel Xeon E5-2620 v4 (8-core, 2.1GHz)
* 2 Nvidia P100 GPUs
## 2.2 Attic
* 1 Mercury RM216 2U Rackmount Server 2-socket Xeon E5-2630 (6-core, 2.6GHz)
* 10 Mercury RM445J 4U Rackmount JBOD with 45x 4TB NL SAS Hard Disks
## 2.3 Anvil
* 76 PowerEdge R630 systems
* 76x with 256 GB RAM
* 2-socket Intel Xeon E5-2650 v3 (10-core, 2.3GHz)
* Dual 10Gb Ethernet
* 12 PowerEdge R730xd systems
* 12x with 128 GB RAM
* 2-socket Intel Xeon E5-2630L v3 (8-core, 1.8GHz)
* 12x 4TB NL SAS Hard Disks and 2x200 GB SSD
* Dual 10 Gb Ethernet
* 2 PowerEdge R320 systems
* 2x with 48 GB RAM
* 1-socket Intel E5-2403 v3 (4-core, 1.8GHz)
* Quad 10Gb Ethernet
* 10 GbE networking
* 6x Dell S4048-ON switches
## 2.4 Shared Common Storage
* Storage service providing 1.9PB usable capacity
* 6 SuperMicro 1028U-TNRTP+ systems
* 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
* 256 GB RAM
* 120x 4TB SAS Hard Disks
* 2 SuperMicro 1028U-TNRTP+ systems
* 2-socket Intel Xeon E5-2637 v4 (4-core, 3.5GHz)
* 128 GB RAM
* 6x 200 GB SSD
* Intel Omni-Path
* 10 GbE networking
+++
title = "FAQ"
description = "HCC Frequently Asked Questions"
weight = "20"
+++
- [I have an account, now what?](#i-have-an-account-now-what)
- [How do I change my password?](#how-do-i-change-my-password)
- [I forgot my password, how can I retrieve it?](#i-forgot-my-password-how-can-i-retrieve-it)
- [I just deleted some files and didn't mean to! Can I get them back?](#i-just-deleted-some-files-and-didn-t-mean-to-can-i-get-them-back)
- [How do I (re)activate Duo?](#how-do-i-re-activate-duo)
- [How many nodes/memory/time should I request?](#how-many-nodes-memory-time-should-i-request)
- [I am trying to run a job but nothing happens?](#i-am-trying-to-run-a-job-but-nothing-happens)
- [I keep getting the error "slurmstepd: error: Exceeded step memory limit at some point." What does this mean and how do I fix it?](#i-keep-getting-the-error-slurmstepd-error-exceeded-step-memory-limit-at-some-point-what-does-this-mean-and-how-do-i-fix-it)
- [I want to talk to a human about my problem. Can I do that?](#i-want-to-talk-to-a-human-about-my-problem-can-i-do-that)
---
#### I have an account, now what?
Congrats on getting an HCC account! Now you need to connect to a Holland
cluster. To do this, we use an SSH connection. SSH stands for Secure
Shell, and it allows you to securely connect to a remote computer and
operate it just like you would a personal machine.
Depending on your operating system, you may need to install software to
make this connection. Check out our Quick Start Guides for information on
how to install the necessary software for your operating system:
- [For Mac/Linux Users]({{< relref "for_maclinux_users" >}})
- [For Windows Users]({{< relref "for_windows_users" >}})
#### How do I change my password?
#### I forgot my password, how can I retrieve it?
Information on how to change or retrieve your password can be found on
the documentation page: [How to change your
password]({{< relref "how_to_change_your_password" >}})
All passwords must be at least 8 characters in length and must contain
at least one capital letter and one numeric digit. Passwords also cannot
contain any dictionary words. If you need help picking a good password,
consider using a (secure!) password generator such as
[this one provided by Random.org](https://www.random.org/passwords)
To preserve the security of your account, we recommend changing the
default password you were given as soon as possible.
#### I just deleted some files and didn't mean to! Can I get them back?
That depends. Where were the files you deleted?
**If the files were in your $HOME directory (/home/group/user/):** It's
possible.
$HOME directories are backed up daily and we can restore your files as
they were at the time of our last backup. Please note that any changes
made to the files between when the backup was made and when you deleted
them will not be preserved. To have these files restored, please contact
HCC Support at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
as soon as possible.
**If the files were in your $WORK directory (/work/group/user/):** No.
Unfortunately, the $WORK directories are created as a short term place
to hold job files. This storage was designed to be quickly and easily
accessed by our worker nodes and as such is not conducive to backups.
Any irreplaceable files should be backed up in a secondary location,
such as Attic, the cloud, or on your personal machine. For more
information on how to prevent file loss, check out [Preventing File
Loss]({{< relref "preventing_file_loss" >}}).
#### How do I (re)activate Duo?
**If you have not activated Duo before:**
Please stop by
[our offices](http://hcc.unl.edu/location)
along with a photo ID and we will be happy to activate it for you. If
you are not local to Omaha or Lincoln, contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
and we will help you activate Duo remotely.
**If you have activated Duo previously but now have a different phone
number:**
Stop by our offices along with a photo ID and we can help you reactivate
Duo and update your account with your new phone number.
**If you have activated Duo previously and have the same phone number:**
Email us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
from the email address your account is registered under and we will send
you a new link that you can use to activate Duo.
#### How many nodes/memory/time should I request?
**Short answer:** We don’t know.
**Long answer:** The amount of resources required is highly dependent on
the application you are using, the input file sizes and the parameters
you select. Sometimes it can help to speak with someone else who has
used the software before to see if they can give you an idea of what has
worked for them.
But ultimately, it comes down to trial and error; try different
combinations and see what works and what doesn’t. Good practice is to
check the output and utilization of each job you run. This will help you
determine what parameters you will need in the future.
For more information on how to determine how many resources a completed
job used, check out the documentation on [Monitoring Jobs]({{< relref "monitoring_jobs" >}}).
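As a quick example, SLURM's `sacct` command can summarize what a finished job actually used (the job ID below is a placeholder):
{{< highlight bash >}}
# Replace 1234567 with your job's ID
sacct -j 1234567 --format=JobID,Elapsed,MaxRSS,AllocCPUS,State
{{< /highlight >}}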
#### I am trying to run a job but nothing happens?
Where are you trying to run the job from? You can check this by typing
the command `pwd` into the terminal.
**If you are running from inside your $HOME directory
(/home/group/user/)**:
Move your files to your $WORK directory (/work/group/user) and resubmit
your job.
The worker nodes on our clusters have read-only access to the files in
$HOME directories. This means that when a job is submitted from $HOME,
the scheduler cannot write the output and error files in the directory
and the job is killed. It appears the job does nothing because no output
is produced.
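For example, a minimal sketch of moving a job out of `$HOME` and resubmitting it (the directory and script names are placeholders):
{{< highlight bash >}}
# Copy the job directory from $HOME to $WORK
cp -r $HOME/my_job_dir $WORK/my_job_dir
cd $WORK/my_job_dir
# Resubmit the job from $WORK
sbatch submit.sh
{{< /highlight >}}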
**If you are running from inside your $WORK directory:**
Contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
with your login, the name of the cluster you are running on, and the
full path to your submit script and we will be happy to help solve the
issue.
#### I keep getting the error "slurmstepd: error: Exceeded step memory limit at some point." What does this mean and how do I fix it?
This error occurs when the job you are running uses more memory than was
requested in your submit script.
If you specified `--mem` or `--mem-per-cpu` in your submit script, try
increasing this value and resubmitting your job.
If you did not specify `--mem` or `--mem-per-cpu` in your submit script,
chances are the default amount allotted is not sufficient. Add the line
{{< highlight batch >}}
#SBATCH --mem=<memory_amount>
{{< /highlight >}}
to your script with a reasonable amount of memory and try running it again. If you keep
getting this error, continue to increase the requested memory amount and
resubmit the job until it finishes successfully.
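For instance, a minimal submit script with an explicit memory request might look like the sketch below (the job name, time, memory amount, and commands are placeholders; adjust them for your application):
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=my_job       # placeholder job name
#SBATCH --time=01:00:00         # placeholder run time
#SBATCH --mem=8gb               # increase this value if the memory error persists
module load my_application      # placeholder module
./my_program input.dat          # placeholder command
{{< /highlight >}}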
For additional details on how to monitor usage on jobs, check out the
documentation on [Monitoring Jobs]({{< relref "monitoring_jobs" >}}).
If you continue to run into issues, please contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
for additional assistance.
#### I want to talk to a human about my problem. Can I do that?
Of course! We have an open door policy and invite you to stop by
[either of our offices](http://hcc.unl.edu/location)
anytime Monday through Friday between 9 am and 5 pm. One of the HCC
staff would be happy to help you with whatever problem or question you
have. Alternatively, you can drop one of us a line and we'll arrange a
time to meet: [Contact Us](https://hcc.unl.edu/contact-us).
+++
title = "Guides"
weight = "20"
+++
In-depth guides to using HCC resources
--------------------------------------
{{% children description="true" %}}
+++
title = "Available images"
description = "HCC-provided images for Anvil"
+++
HCC provides pre-configured images available to researchers. Below is a
list of available images.
| Image Name | Username to Connect | Access instructions | Description |
| ------------------------ | -------------------- | ------------------------------------ | --------------------------------------------------------------------------------- |
| Cloudera 5.12 GNOME | `cloudera` | [X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | Cloudera 5.12 QuickStart VM. *Note*: Follow the X2Go instructions, but choose GNOME for the Session type instead of Xfce.|
| CentOS 7.4 | `centos` | `ssh -l centos <ip address>` | The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL).|
| CentOS 6.9 | `centos` | `ssh -l centos <ip address>` | **HCC Standard OS**. The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL).|
| Fedora 26 Cloud | `fedora` | `ssh -l fedora <ipaddress>` | Fedora is a Linux distribution developed by the community-supported Fedora Project and sponsored by the Red Hat company.|
| Fedora 26 RStudio (Xfce) | `fedora` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | Fedora 26 with the Xfce Desktop Environment pre-installed.|
| CentOS 7.4 Xfce | `centos` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | CentOS 7.4 with the Xfce Desktop Environment pre-installed.|
| CentOS 6.9 Xfce | `centos` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | CentOS 6.9 with the Xfce Desktop Environment pre-installed.|
| Ubuntu 14.04 Xfce | `ubuntu` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | Ubuntu 14.04 with the Xfce Desktop Environment pre-installed.|
| Ubuntu 16.04 Xfce | `ubuntu` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | Ubuntu 16.04 with the Xfce Desktop Environment pre-installed.|
| Ubuntu 17.04 Xfce | `ubuntu` |[X2Go instructions]({{< relref "connecting_to_linux_instances_using_x2go" >}}) | Ubuntu 17.04 with the Xfce Desktop Environment pre-installed.|
| Windows 7 | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 7 Enterprise edition with remote desktop access.|
| Windows 10 | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 10 LTSB edition with remote desktop access.|
| Windows 7 Matlab | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 7 Enterprise with Matlab r2013b, r2014b, r2015b, r2016b, r2017a pre-installed.|
| Windows 10 Matlab | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 10 LTSB with Matlab r2013b, r2014b, r2015b, r2016b, r2017a pre-installed.|
| Windows 7 SAS | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 7 Enterprise with SAS 9.3, 9.4 pre-installed.|
| Windows 10 SAS | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 10 LTSB with SAS 9.3, 9.4 pre-installed.|
| Windows 7 Mathematica | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 7 Enterprise with Mathematica 10.4 and 11.0 pre-installed.|
| Windows 10 Mathematica | `cloud-user` |[Windows instructions]({{< relref "connecting_to_windows_instances" >}}) | Windows 10 LTSB with Mathematica 10.4 and 11.0 pre-installed.|
| Ubuntu Cloud 14.04 LTS | `ubuntu` | `ssh -l ubuntu <ipaddress>` | Ubuntu Cloud Image from the 14.04 Long Term Support release.|
| Ubuntu Cloud 16.04 LTS | `ubuntu` | `ssh -l ubuntu <ipaddress>` | Ubuntu Cloud Image from the 16.04 Long Term Support release.|
| Ubuntu Cloud 17.04 LTS | `ubuntu` | `ssh -l ubuntu <ipaddress>` | Ubuntu Cloud Image from the 17.04 Long Term Support release.|
Additional images can be produced by HCC staff by request at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu).
+++
title = "Formatting and mounting a volume in Linux"
description = "How to format and mount volume as a hard drive in Linux."
+++
{{% notice info %}}
This guide assumes you associated your SSH Key Pair with the instance
when it was created, and that you are connected to the [Anvil VPN]({{< relref "connecting_to_the_anvil_vpn" >}}).
{{% /notice %}}
Once you have [created and attached]({{< relref "creating_and_attaching_a_volume" >}})
your volume, it must be formatted and mounted in your Linux instance to be usable. This
procedure is identical to what would be done when attaching a second
hard drive to a physical machine. In this example, a 1GB volume was
created and attached to the instance. Note that the majority of this
guide is for a newly created volume.
{{% notice note %}}
**If you are attaching an existing volume with data already on it,
skip to [creating a directory and mounting the volume](#mounting-the-volume).**
{{% /notice %}}
#### Formatting the volume
Follow the relevant guide
([Windows]({{< relref "connecting_to_linux_instances_from_windows">}})
| [Mac]({{< relref "connecting_to_linux_instances_from_mac" >}})) for your
operating system to connect to your instance. Formatting and mounting
the volume requires root privileges, so first run the
command `sudo su -` to get a root shell.
{{% panel theme="danger" header="**Running commands as root**" %}}**Extreme care should be taken when running commands as `root.`** It is very easy to permanently delete data or cause irreparable damage to your instance.{{% /panel %}}
{{< figure src="/images/anvil-volumes/1-sudo.png" width="576" >}}
Next, you will need to determine what device the volume is presented as
within Linux. Typically this will be `/dev/vdb`, but it is necessary to
verify this to avoid mistakes, especially if you have more than one
volume attached to an instance. The command `lsblk` will list the
hard drive devices and partitions.
{{< figure src="/images//anvil-volumes/2-lsblk.png" width="576" >}}
Here there is a completely empty (no partitions) disk device matching
the 1GB size of the volume, so `/dev/vdb` is the correct device.
The `parted` utility will first be used to label the device and then create a partition.
{{< highlight bash >}}
parted /dev/vdb mklabel gpt
parted /dev/vdb mkpart primary 0% 100%
{{< /highlight >}}
{{< figure src="/images/anvil-volumes/3-mkpart.png" width="576" >}}
Now that a partition has been created, it can be formatted. Here, the
ext4 filesystem will be used. This is the default filesystem used by
many Linux distributions including CentOS and Ubuntu, and is a good
general choice. An alternate filesystem may be used by running a
different format command. To format the partition using ext4, run the
command `mkfs.ext4 /dev/vdb1`. You will see a progress message and then
be returned to the shell prompt.
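For reference, the format step as a single command (assuming the partition created above is `/dev/vdb1`; double-check the device name with `lsblk` before formatting):
{{< highlight bash >}}
# Format the new partition with ext4 -- this erases anything on it
mkfs.ext4 /dev/vdb1
{{< /highlight >}}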
{{< figure src="/images/anvil-volumes/4-mkfs.png" width="576" >}}
#### Mounting the volume
{{% notice note %}}
If you are attaching a pre-existing volume, start here.
{{% /notice %}}
Finally, the formatted partition must be mounted as a directory to be
used. By convention this is done under `/mnt`, but you may choose to
mount it elsewhere depending on the usage. Here, a directory
called `myvolume` will be created and the volume mounted there. Run the
following commands to make the directory and mount the volume:
{{< highlight bash >}}
mkdir /mnt/myvolume
mount /dev/vdb1 /mnt/myvolume
{{< /highlight >}}
{{< figure src="/images/anvil-volumes/5-mount.png" width="576" >}}
Running the command `df -h` should then show the new mounted empty
volume.
{{< figure src="/images/anvil-volumes/6-df.png" width="576" >}}
The volume can now be used.
+++
title = "Handling Data"
description = "How to work with and transfer data to/from HCC resources."
weight = "30"
+++
{{% panel theme="danger" header="**Sensitive and Protected Data**" %}}HCC currently has *no storage* that is suitable for **HIPAA** or other **PID** data sets. Users are not permitted to store such data on HCC machines.{{% /panel %}}
All HCC machines have three separate areas for every user to store data,
each intended for a different purpose. In addition, we have a transfer
service that utilizes [Globus Connect]({{< relref "globus_connect" >}}).
{{< figure src="/images/35325560.png" height="500" class="img-border">}}
---
### Home Directory
{{% notice info %}}
You can access your home directory quickly using the $HOME environment
variable (i.e. `cd $HOME`).
{{% /notice %}}
Your home directory (i.e. `/home/[group]/[username]`) is meant for items
that take up relatively small amounts of space. For example: source
code, program binaries, configuration files, etc. This space is
quota-limited to **20GB per user**. The home directories are backed up
for the purposes of best-effort disaster recovery. This space is not
intended as an area for I/O to active jobs. **/home** is mounted
**read-only** on cluster worker nodes to enforce this policy.
---
### Common Directory
{{% notice info %}}
You can access your common directory quickly using the $COMMON
environment variable (i.e. `cd $COMMON`).
{{% /notice %}}
The common directory operates similarly to work and is mounted with
**read and write capability to worker nodes on all HCC clusters**. This
means that any files stored in common can be accessed from Crane or Rhino,
making this directory ideal for items that need to be
accessed from multiple clusters such as reference databases and shared
data files.
{{% notice warning %}}
Common is not designed for heavy I/O usage. Please continue to use your
work directory for active job output to ensure the best performance of
your jobs.
{{% /notice %}}
Quotas for common are **30 TB per group**, with larger quotas available
for purchase if needed. However, files stored here will **not be backed
up** and are **not subject to purge** at this time. Please continue to
backup your files to prevent irreparable data loss.
Additional information on using the common directories can be found in
the documentation on [Using the /common File System]({{< relref "using_the_common_file_system" >}})
---
### High Performance Work Directory
{{% notice info %}}
You can access your work directory quickly using the $WORK environment
variable (i.e. `cd $WORK`).
{{% /notice %}}
{{% panel theme="danger" header="**File Loss**" %}}The `/work` directories are **not backed up**. Irreparable data loss is possible with a mis-typed command. See [Preventing File Loss]({{< relref "preventing_file_loss" >}}) for strategies to avoid this.{{% /panel %}}
Every user has a corresponding directory under /work using the same
naming convention as `/home` (i.e. `/work/[group]/[username]`). We
encourage all users to use this space for I/O to running jobs. This
directory can also be used when larger amounts of space are temporarily
needed. There is a **50TB per group quota**; space in /work is shared
among all users. It should be treated as short-term scratch space, and
**is not backed up**. **Please use the `hcc-du` command to check your
own and your group's usage, and back up and clean up your files at
reasonable intervals in $WORK.**
---
### Purge Policy
HCC has a **purge policy on /work** for files that become dormant.
After **6 months of inactivity on a file (26 weeks)**, an automated
purge process will reclaim the used space of these dormant files. HCC
provides the **`hcc-purge`** utility to list both the summary and the
actual file paths of files that have been dormant for **24 weeks**.
This list is periodically generated; the timestamp of the last search
is included in the default summary output when calling `hcc-purge` with
no arguments. No output from `hcc-purge` indicates the last scan did
not find any dormant files. `hcc-purge -l` will use the less pager to
list the matching files for the user. The candidate list can also be
accessed at the following path: `/lustre/purge/current/${USER}.list`.
This list is updated twice a week, on Mondays and Thursdays.
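For example, to review what the purge process has flagged (the commands and list path are those described above):
{{< highlight bash >}}
# Summary of your dormant files, including the timestamp of the last scan
hcc-purge
# Page through the full list of matching files
hcc-purge -l
# The same candidate list as a plain file, updated Mondays and Thursdays
less /lustre/purge/current/${USER}.list
{{< /highlight >}}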
{{% notice warning %}}
`/work` is intended for recent job output and not long term storage. Evidence of circumventing the purge policy by users will result in consequences including account lockout.
{{% /notice %}}
If you have space requirements outside what is currently provided,
please
email <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> and
we will gladly discuss alternatives.
---
### [Attic]({{< relref "using_attic" >}})
Attic is a near line archive available for purchase at HCC. Attic
provides reliable large data storage that is designed to be more
reliable than `/work`, and larger than `/home`. Access to Attic is done
through [Globus Connect]({{< relref "globus_connect" >}}).
More details on Attic can be found on HCC's
<a href="https://hcc.unl.edu/attic" class="external-link">Attic</a>
website.
---
### [Globus Connect]({{< relref "globus_connect" >}})
For moving large amounts of data into or out of HCC resources, users are
highly encouraged to consider using [Globus
Connect]({{< relref "globus_connect" >}}).
---
### Using Box
You can use your [UNL
Box.com]({{< relref "integrating_box_with_hcc" >}}) account to download and
upload files from any of the HCC clusters.
+++
title = "Data for UNMC Users Only"
description= "Data storage options for UNMC users"
weight = 50
+++
{{% panel theme="danger" header="Sensitive and Protected Data" %}} HCC currently has no storage that is suitable for HIPAA or other PID
data sets. Users are not permitted to store such data on HCC machines.
Crane has a special directory only for UNMC users. Please
note that this filesystem is still not suitable for HIPAA or other PID
data sets.
{{% /panel %}}
---
### Transferring files to this machine from UNMC.
You will need to email us
at <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> to
gain access to this machine. Once you do, you can sftp to 10.14.250.1
and upload your files. Note that sftp is your only option. You may use
different sftp utilities depending on the platform you are logging in
from. Email us if you need help with this. Once you are logged in, you
should be at `/volumes/UNMC1ZFS/[group]/[username]`, or
`/home/[group]/[username]`. Both are the same location and you will be
allowed to write files there.
For Windows, learn more about logging in and uploading files
[here](https://hcc-docs.unl.edu/display/HCCDOC/For+Windows+Users).
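From a Mac or Linux terminal, a transfer might look like the sketch below (the username and file name are placeholders; email us if you are unsure which credentials to use):
{{< highlight bash >}}
# Connect to the UNMC transfer host
sftp <username>@10.14.250.1
# Once connected, upload files with put
sftp> put mydata.tar.gz
{{< /highlight >}}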
Using your uploaded files on Crane.
---------------------------------------------
Using your
uploaded files is easy. Just go to
`/shared/unmc1/[group]/[username]` and your files will be in the same
place. You may notice that the directory is not available at times. This
is because the unmc1 directory is automounted: it is mounted on demand, so
it will appear as soon as you try to access it. Just `cd` to
`/shared/unmc1/[group]/[username]` and all of the files will be
there.
If you have space requirements outside what is currently provided,
please
email <a href="mailto:hcc-support@unl.edu" class="external-link">hcc-support@unl.edu</a> and
we will gladly discuss alternatives.
+++
title = "Activating HCC Cluster Endpoints"
description = "How to activate HCC endpoints on Globus"
weight = 20
+++
You will not be able to transfer files to or from an HCC endpoint using Globus Connect without first activating the endpoint. Endpoints are available for Crane (`hcc#crane`), Rhino (`hcc#rhino`), and Attic (`hcc#attic`). Follow the instructions below to activate any of these endpoints and begin making transfers.
1. [Sign in](https://www.globus.org/SignIn) to your Globus account using your campus credentials or your Globus ID (if you have one). Then click on 'Endpoints' in the left sidebar.
{{< figure src="/images/Glogin.png" >}}
{{< figure src="/images/endpoints.png" >}}
2. Find the endpoint you want by entering '`hcc#crane`', '`hcc#rhino`', or '`hcc#attic`' in the search box and hit 'enter'. Once you have found and selected the endpoint, click the green 'activate' icon. On the following page, click 'continue'.
{{< figure src="/images/activateEndpoint.png" >}}
{{< figure src="/images/EndpointContinue.png" >}}
3. You will be redirected to the HCC Globus Endpoint Activation page. Enter your *HCC* username and password (the password you usually use to log into the HCC clusters).
{{< figure src="/images/hccEndpoint.png" >}}
4. Next you will be prompted to
provide your *Duo* credentials. If you use the Duo Mobile app on
your smartphone or tablet, select 'Duo Push'. Once you approve the notification that is sent to your phone,
the activation will be complete. If you use a Yubikey for
authentication, select the 'Passcode' option and then press your
Yubikey to complete the activation. Upon successful activation, you
will be redirected to your Globus *Manage Endpoints* page.
{{< figure src="/images/EndpointPush.png" >}}
{{< figure src="/images/endpointComplete.png" >}}
The endpoint should now be ready
and will not have to be activated again for the next 7 days.
To transfer files between any two HCC clusters, you will need to
activate both endpoints individually.
Next, learn how to [make file transfers between HCC endpoints]({{< relref "file_transfers_between_endpoints" >}}) or how to [transfer between HCC endpoints and a personal computer]({{< relref "file_transfers_to_and_from_personal_workstations" >}}).
---
+++
title = "File Sharing"
description = "How to share files using Globus"
weight = 50
+++
If you would like another colleague or researcher to have access to your
data, you may create a shared endpoint on Crane, Rhino, or Attic. You can personally manage access to this endpoint and
give access to anybody with a Globus account (whether or not
they have an HCC account). *Please use this feature responsibly by
sharing only what is necessary and granting access only to trusted
users.*
{{% notice info %}}
Shared endpoints created in your `home` directory on HCC servers (with
the exception of Attic) are *read-only*. You may create readable and
writable shared endpoints in your `work` directory (or `/shared`).
{{% /notice %}}
1. Sign in to your Globus account, click on the 'Endpoints' tab
and search for the endpoint that you will use to host your shared
endpoint. For example, if you would like to share data in your
Crane `work` directory, search for the `hcc#crane` endpoint. Once
you have found the endpoint, it will need to be activated if it has
not been already (see [endpoint activation instructions
here]({{< relref "activating_hcc_cluster_endpoints" >}})).
If it is already activated, select the endpoint by clicking on the
name. Then select the 'share' button on the right sidebar.
{{< figure src="/images/sharedEndpoint.png" >}}
{{< figure src="/images/shareButton.png" >}}
2. In the 'Path' box, enter the full path to the directory you
would like to share. Only files under this directory will be shared
to the endpoint users you grant access. Enter a descriptive endpoint
name and provide a
short description of the endpoint if you wish. Finally, click 'Create Share'.
{{< figure src="/images/createShare.png" >}}
3. Type the Globus ID (or group name) of the user (or group) to whom you would like to grant
access to this endpoint. Next enter the *relative path* of the
directory that this user should be able to access. For example, if
the source path of your shared endpoint
is `/work/<groupid>/<userid>/share` but you would like your
colleague to only have access
to `/work/<groupid>/<userid>/share/dataX`, then the 'Path' should be
entered as simply `/dataX`. Finally, click the blue 'Add Permission' button.
You should see the user or group added to the list.
{{< figure src="/images/addPermission.png" >}}
{{< figure src="/images/sharedGroup.png" >}}
---
+++
title = "File Transfers Between Endpoints"
description = "How to transfer files between HCC clusters using Globus"
weight = 30
+++
To transfer files between HCC clusters, you will first need to
[activate]({{< relref "activating_hcc_cluster_endpoints" >}}) the
two endpoints you would like to use (the available endpoints
are: `hcc#crane`, `hcc#rhino`, and `hcc#attic`). Once
that has been completed, follow the steps below to begin transferring
files. (Note: You can also transfer files between an HCC endpoint and
any other Globus endpoint for which you have authorized access. That
may include a [personal
endpoint]({{< relref "file_transfers_to_and_from_personal_workstations" >}}),
a [shared
endpoint]({{< relref "file_sharing" >}}),
or an endpoint on another computing resource or cluster. Once the
endpoints have been activated, the file transfer process is generally
the same regardless of the type of endpoints you use. For demonstration
purposes we use two HCC endpoints.)
1. Once both endpoints for the desired file transfer have been
activated, [sign in](https://www.globus.org/SignIn) to
your Globus account (if you are not already) and select
"Transfer or Sync to.." from the right sidebar. If you have
a small screen, you may have to click the menu icon
first.
{{< figure src="/images/Transfer.png">}}
2. Enter the names of the two endpoints you would like to use, or
select from the drop-down menus (for
example, `hcc#attic` and `hcc#crane`). Enter the
directory paths for both the source and destination (the 'from' and
'to' paths on the respective endpoints). Press 'Enter' to view files
under these directories. Select the files or directories you would
like to transfer (press *shift* or *control* to make multiple
selections) and click the blue highlighted arrow to start the
transfer.
{{< figure src="/images/startTransfer.png" >}}
3. Globus will display a message when your transfer has completed
(or in the unlikely event that it was unsuccessful), and you will
also receive an email. Select the 'refresh' icon to see your file
in the destination folder.
{{< figure src="/images/transferComplete.png" >}}
---
+++
title = "Integrating Box with HCC"
description = "How to integrate Box with HCC"
weight = 30
+++
UNL has come to an arrangement
with <a href="https://www.box.com/" class="external-link">Box.com</a> to
provide unlimited cloud storage to every student, staff, and faculty
member. This can be useful when used with jobs to automatically upload
results when the job has completed. Combined with
<a href="https://sites.box.com/sync4/" class="external-link">Box Sync</a>,
the uploaded files can be synced to your laptop or desktop upon job
completion. The upload and download speed of Box is about 20 to 30 MB/s
in good network traffic conditions. Users can use a tool called lftp to transfer files between HCC clusters and their Box accounts.
---
### Step-by-step guide for Lftp
1. You need to create your UNL [Box.com](https://www.box.com/) account [here](https://box.unl.edu/).
2. Since we are going to be using [webdav](https://en.wikipedia.org/wiki/WebDAV) protocol to access your [Box.com](https://www.box.com/) storage, you need to create an **External Password**. In the [Box.com](https://www.box.com/) interface, you can create it at **[Account Settings](https://unl.app.box.com/settings) > Create External Password.**
{{< figure src="/images/box_create_external_password.png" class="img-border" >}}
3. After logging into the cluster of your choice, load the `lftp` module by entering the command below at the prompt:
{{% panel theme="info" header="Load the lftp module" %}}
{{< highlight bash >}}
module load lftp
{{< /highlight >}}
{{% /panel %}}
4. Connect to Box using your full email as the username and external password you created:
{{% panel theme="info" header="Connect to Box" %}}
{{< highlight bash >}}
lftp -u <username>,<password> ftps://ftp.box.com
{{< /highlight >}}
{{% /panel %}}
5. Test the connection by running the `ls` command. You should see a listing of your Box files. Assuming it works, add a bookmark named "box" to use when connecting later:
{{% panel theme="info" header="Add lftp bookmark" %}}
{{< highlight bash >}}
lftp demo2@unl.edu@ftp.box.com:/> bookmark add box
{{< /highlight >}}
{{% /panel %}}
6. Exit `lftp` by typing `quit`. To reconnect later, use the bookmark name:
{{% panel theme="info" header="Connect using bookmark name" %}}
{{< highlight bash >}}
lftp box
{{< /highlight >}}
{{% /panel %}}
7. To upload or download files, use `get` and `put` commands. For example:
{{% panel theme="info" header="Transferring files" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> put myfile.txt
lftp demo2@unl.edu@ftp.box.com:/> get my_other_file.txt
{{< /highlight >}}
{{% /panel %}}
8. To download directories, use the `mirror` command. To upload directories, use the `mirror` command with the `-R` option. For example, to download a directory named `my_box-dir` to your current directory:
{{% panel theme="info" header="Download a directory from Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> mirror my_box_dir
{{< /highlight >}}
{{% /panel %}}
To upload a directory named `my_hcc_dir` to Box, use `mirror` with the `-R` option:
{{% panel theme="info" header="Upload a directory to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ lftp box
lftp demo2@unl.edu@ftp.box.com:/> mirror -R my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
9. Lftp also supports using scripts to transfer files. This can be used to automatically download or upload files during jobs. For example, create a file called "transfer.sh" with the following lines:
{{% panel theme="info" header="transfer.sh" %}}
{{< highlight bash >}}
open box
get some_input_file.tar.gz
put my_output_file.tar.gz
{{< /highlight >}}
{{% /panel %}}
To run this script, do:
{{% panel theme="info" header="Run transfer.sh" %}}
{{< highlight bash >}}
module load lftp
lftp -f transfer.sh
{{< /highlight >}}
{{% /panel %}}
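As noted above, this works well inside a batch job to upload results automatically when the job finishes. A minimal sketch, assuming the transfer.sh script and "box" bookmark from the steps above (the resource requests and workload are placeholders):
{{< highlight batch >}}
#!/bin/bash
#SBATCH --time=01:00:00                  # placeholder run time
#SBATCH --mem=4gb                        # placeholder memory request
./my_program                             # placeholder: your actual workload
tar czf my_output_file.tar.gz results/   # placeholder: package the results
# Transfer files to/from Box using the transfer.sh script from step 9
module load lftp
lftp -f transfer.sh
{{< /highlight >}}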
+++
title = "Using Rclone for File Transfer"
description = "How to use Rclone with HCC"
weight = 9
+++
Rclone is an open source file transfer tool that simplifies transferring files between your local machine and various cloud resources such as Box, Amazon S3, Microsoft OneDrive, and Google Cloud Storage. Guides on how to set up a variety of resources to transfer to and from can be found at [rclone's webpage](https://rclone.org/).
This tool can be used to transfer files between HCC clusters and outside cloud providers, such as Box.
---
### Setup RClone
1. You need to create your UNL [Box.com](https://www.box.com/) account [here](https://box.unl.edu/).
2. Due to the clusters being remote machines, Rclone will need to be installed on your [local machine](https://rclone.org/downloads/) in order to authorize Box. Some services, such as Google Drive, do not require Rclone to be installed on your local machine.
3. After logging into the cluster of your choice, load the `rclone` module by entering the command below at the prompt:
{{% panel theme="info" header="Load the Rclone module" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ module load rclone
{{< /highlight >}}
{{% /panel %}}
4. We will need to start the basic configuration for box. To do this run `rclone config`:
{{% panel theme="info" header="Load the rclone config" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone config
{{< /highlight >}}
{{% /panel %}}
5. In a new configuration, you will see `No remotes found`. Enter `n` to make a new remote and give it a name you will recognize; in our example, we will use "UNLBox". Select Box by entering the corresponding number, in our case `6`. Press Enter for the client_id and client_secret, and enter `y` for "Edit advanced config". Due to the file size limit with Box, set the upload_cutoff to `15G`, and leave commit_retries at the default by pressing Enter. When you are prompted for auto config, select `n` and switch to a terminal on your local machine:
{{% panel theme="info" header="Configure box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> UNLBox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
6 / Box
\ "box"
Storage> 6
Box App Client Id.
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Box App Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Cutoff for switching to multipart upload (>= 50MB).
Enter a size with suffix k,M,G,T. Press Enter for the default ("50M").
upload_cutoff> 15G
Max number of times to try committing a multipart file.
Enter a signed integer. Press Enter for the default ("100").
commit_retries>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> n
For this to work, you will need Rclone available on a machine that has a web browser available.
Execute the following on your machine:
rclone authorize "box"
Then paste the result below:
result>
{{< /highlight >}}
{{% /panel %}}
6. Run `rclone authorize "box"` on the local machine. If a browser does not open automatically, you will be prompted to go to a 127.0.0.1 address in your web browser. Select `Use Single Sign On (SSO)` at the bottom and then enter your UNL e-mail address. You will be taken to sign in to UNL's Box using your **Canvas** credentials. Select `Grant access to Box`. You will be told to paste a line of code from your local machine to the cluster and then to confirm that the config is correct.
{{% panel theme="info" header="List contents of Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone authorize "box"
{{< /highlight >}}
{{% /panel %}}
{{< figure src="/images/BoxSSO.png" height="500" class="img-border">}}
{{% panel theme="info" header="Local Config" %}}
{{< highlight bash >}}
[demo2@local.machine ~]$ rclone authorize "box"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize Rclone for access
Waiting for code...
{{< /highlight >}}
{{% /panel %}}
For other services, please refer to the [rclone documentation](https://rclone.org/).
7. Test the connection by running the `ls` command. You should see a listing of your Box files.
{{% panel theme="info" header="List contents of Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone ls UNLBox:/
{{< /highlight >}}
{{% /panel %}}
8. To upload or download files, use the `copy` command. For example:
{{% panel theme="info" header="Transferring files" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy UNLBox:/SomeFile.txt ./
[demo2@login.crane ~]$ rclone copy ./SomeFile.txt UNLBox:/
{{< /highlight >}}
{{% /panel %}}
9. To download directories, use the `copy` command with directory names instead of file names. This copies the contents of the folder, so you need to specify a destination folder.
{{% panel theme="info" header="Download a directory from Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy UNLBox:/my_hcc_dir ./my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
To upload a directory named `my_hcc_dir` to Box, use `copy`.
{{% panel theme="info" header="Upload a directory to Box" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone copy ./my_hcc_dir UNLBox:/my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
10. Rclone also supports using `sync` to transfer files, similar to rsync. The syntax is similar to `rclone copy`. This only transfers files that differ by name, checksum, or modification time. The example below would sync the files of the local directory to the remote directory on Box.
{{% panel theme="info" header="transfer.sh" %}}
{{< highlight bash >}}
[demo2@login.crane ~]$ rclone sync ./my_hcc_dir UNLBox:/my_hcc_dir
{{< /highlight >}}
{{% /panel %}}
+++
title = "Jupyter Notebooks on Crane"
description = "How to access and use a Jupyter Notebook"
weight = 20
+++
- [Connecting to Crane](#connecting-to-crane)
- [Running Code](#running-code)
- [Opening a Terminal](#opening-a-terminal)
- [Using Custom Packages](#using-custom-packages)
## Connecting to Crane
Jupyter defines its notebooks ("Jupyter Notebooks") as
an open-source web application that allows you to create and share documents that contain live code,
equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation,
statistical modeling, data visualization, machine learning, and much more.
1. To open a Jupyter notebook, [Sign in](https://crane.unl.edu) to crane.unl.edu using your HCC credentials (NOT your
campus credentials).
{{< figure src="/images/jupyterLogin.png" >}}
2. Select your preferred authentication method.
{{< figure src="/images/jupyterPush.png" >}}
3. Choose a job profile. Select "Notebook via SLURM Job | Small (1 core, 4GB RAM, 8 hours)" for light tasks such as debugging or small-scale testing.
Select the other options based on your computing needs. Note that a SLURM Job will save to your "work" directory.
{{< figure src="/images/jupyterjob.png" >}}
## Running Code
1. Select the "New" dropdown menu and select the file type you want to create.
{{< figure src="/images/jupyterNew.png" >}}
2. A new tab will open, where you can enter your code. Run your code by selecting the "play" icon.
{{< figure src="/images/jupyterCode.png">}}
## Opening a Terminal
1. From your user home page, select "terminal" from the "New" drop-down menu.
{{< figure src="/images/jupyterTerminal.png">}}
2. A terminal opens in a new tab. You can enter [Linux commands]({{< relref "basic_linux_commands" >}})
at the prompt.
{{< figure src="/images/jupyterTerminal2.png">}}
## Using Custom Packages
Many popular `python` and `R` packages are already installed and available within Jupyter Notebooks.
However, it is possible to install custom packages to be used in notebooks by creating a custom Anaconda
Environment. Detailed information on how to create such an environment can be found at
[Using an Anaconda Environment in a Jupyter Notebook on Crane]({{< relref "using_anaconda_package_manager/#using-an-anaconda-environment-in-a-jupyter-notebook-on-crane" >}}).
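As a rough sketch only, creating such an environment generally follows the pattern below (the module name, environment name, and packages are examples; see the linked page for the exact steps HCC recommends):
{{< highlight bash >}}
module load anaconda                              # assumed module name
# Create an environment containing the packages your notebook needs
conda create -n my_notebook_env numpy pandas ipykernel
{{< /highlight >}}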
---
+++
title = "Running Applications"
description = "How to run various applications on HCC resources."
weight = "20"
+++
{{% children %}}
+++
title = "Allinea Profiling & Debugging Tools"
description = "How to use the Allinea suite of tools for profiling and debugging."
+++
HCC provides both the Allinea Forge suite and Performance Reports to
assist with debugging and profiling C/C++/Fortran code. These tools
support single-threaded, multi-threaded (pthreads/OpenMP), MPI, and CUDA
code. The Allinea Forge suite consists of two programs: DDT for
debugging and MAP for profiling. The Performance Reports software
provides a convenient way to profile HPC applications. It generates an
easy-to-read single-page HTML report.
For information on using each tool, see the following pages.
[Using Allinea Forge via Reverse Connect]({{< relref "using_allinea_forge_via_reverse_connect" >}})
[Allinea Performance Reports]({{< relref "allinea_performance_reports" >}})
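As an illustrative sketch only (the module name is an assumption and the program is a placeholder; see the pages above for the supported workflow), generating a Performance Reports summary for an MPI program might look like:
{{< highlight bash >}}
module load allinea                  # assumed module name; check `module avail`
# Wrap the normal launch line with perf-report to produce an HTML summary
perf-report mpirun -n 4 ./my_mpi_program
{{< /highlight >}}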