Verified Commit 0dfde4c0 authored by Adam Caprez

Move "Handling Data" section.

parent 2fc90f87
1 merge request: !167 Rework
Showing 4 additions and 4 deletions
@@ -165,7 +165,7 @@ precious or irreproducible data should not be placed or left on Anvil**.
Transferring files to or from an instance is similar to doing so
with a personal laptop or workstation. To transfer between an
instance and another HCC resource, both SCP and [Globus
-Connect]({{< relref "guides/handling_data/globus_connect" >}}) can be used. For transferring
+Connect]({{< relref "/Handling_Data/globus_connect" >}}) can be used. For transferring
between an instance and a laptop/workstation or another instance,
standard file sharing utilities such as Dropbox or Box can be used.
Globus may also be used, with one stipulation. In order to
......
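For context on the SCP option described in the hunk above, the sketch below shows one way to script such a transfer from a laptop or workstation. It is a minimal Python sketch, not HCC's documented procedure; the username, host, and destination path are placeholder assumptions.

```python
import subprocess

# Minimal sketch: push a local archive to an HCC resource over SCP.
# "demo", "cluster.example.edu", and the /work path are placeholders --
# substitute your HCC username, your cluster's login host, and your
# own $WORK directory.
subprocess.run(
    ["scp", "results.tar.gz", "demo@cluster.example.edu:/work/group/demo/"],
    check=True,  # raise CalledProcessError if scp exits nonzero
)
```

The same call pattern works in reverse (remote source, local destination) for pulling results back.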
+++
title = "Handling Data"
description = "How to work with and transfer data to/from HCC resources."
weight = "30"
weight = "50"
+++
{{% panel theme="danger" header="**Sensitive and Protected Data**" %}}HCC currently has *no storage* that is suitable for **HIPAA** or other **PID** data sets. Users are not permitted to store such data on HCC machines.{{% /panel %}}
......
@@ -8,7 +8,7 @@ High-Performance Computing is the use of groups of computers to solve computatio
HPC clusters consist of four primary parts: the login node, the management node, the worker nodes, and a central storage array. All of these parts are bound together with a scheduler such as HTCondor or SLURM.
</br></br>
#### Login Node:
-Users will automatically land on the login node when they log in to the clusters. You will [submit jobs]({{< ref "/guides/submitting_jobs" >}}) using one of the schedulers and pull the results of your jobs. Jobs run directly on the login node will be stopped so that others can use the login node to submit jobs.
+Users will automatically land on the login node when they log in to the clusters. You will [submit jobs]({{< ref "/Submitting_Jobs" >}}) using one of the schedulers and pull the results of your jobs. Jobs run directly on the login node will be stopped so that others can use the login node to submit jobs.
</br></br>
#### Management Node:
The management node does as its name suggests: it manages the cluster and provides a central point for administering the rest of the systems.
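The login-node paragraph above says jobs must go through a scheduler rather than run directly on the login node. As a hedged illustration of that workflow, the sketch below writes a minimal SLURM batch script and hands it to `sbatch`; the resource values and job name are placeholder assumptions, not values from the HCC docs.

```python
import subprocess
from pathlib import Path

# Minimal SLURM submission sketch for a login node. All #SBATCH values
# are placeholder assumptions; adjust the time and memory requests to
# your actual job.
script = """\
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --mem=1gb
#SBATCH --job-name=demo

hostname  # the actual work; this line runs on a worker node
"""
Path("demo.submit").write_text(script)

# sbatch queues the script and prints the assigned job ID on success.
subprocess.run(["sbatch", "demo.submit"], check=True)
```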
@@ -17,4 +17,4 @@ The management node does as it sounds, it manages the cluster and provides a cen
The worker nodes run the jobs submitted through the schedulers. By packing in as many jobs as the requested resources allow across the nodes, the schedulers get more work done efficiently. They also enforce fair use by ensuring that no single user or group occupies the entire cluster at once, leaving room for others to use it.
</br></br>
#### Central Storage Array:
-The central storage array allows all of the nodes within the cluster to access the same files without needing to transfer them around. HCC has three arrays mounted on the clusters, with more details [here]({{< ref "/guides/handling_data" >}}).
+The central storage array allows all of the nodes within the cluster to access the same files without needing to transfer them around. HCC has three arrays mounted on the clusters, with more details [here]({{< ref "/Handling_Data" >}}).