From ca57c3f04fb1f8ce45a514331a90e12d154bfdb8 Mon Sep 17 00:00:00 2001
From: Carrie A Brown <cbrown58@unl.edu>
Date: Fri, 13 Sep 2019 14:17:19 -0500
Subject: [PATCH] Added index file for Connecting

---
 content/connecting/_index.md | 106 +++++++++++++++++++++++++++++++++++
 1 file changed, 106 insertions(+)
 create mode 100644 content/connecting/_index.md

diff --git a/content/connecting/_index.md b/content/connecting/_index.md
new file mode 100644
index 00000000..bc6b11c4
--- /dev/null
+++ b/content/connecting/_index.md
@@ -0,0 +1,106 @@
++++
+title = "HCC Documentation"
+description = "HCC Documentation Home"
+weight = "1"
++++
+
+HCC Documentation
+============================
+
+
+The Holland Computing Center supports a diverse collection of research
+computing hardware.  Anyone in the University of Nebraska system is
+welcome to apply for an account on HCC machines.
+
+Access to these resources is by default shared with the rest of the user
+community via various job schedulers. Scheduling policies may be found on the
+pages for the individual resources. Alternatively, a user may buy into an
+existing resource, acquiring 'priority access'. Finally, several
+machines are available via Condor for opportunistic use; this allows
+users almost immediate access, but jobs are subject to preemption.
+
+#### [New Users Sign Up](http://hcc.unl.edu/new-user-request)
+
+#### [Quick Start Guides](/quickstarts)
+
+Which Cluster to Use?
+---------------------
+
+**Crane**: Crane is the newest and most powerful HCC resource. If you
+are new to using HCC resources, Crane is the recommended cluster to use
+initially.  Limitations: standard Crane nodes have only 2 CPUs/16 cores and
+64GB RAM per node; CraneOPA nodes have 2 CPUs/36 cores with a maximum of
+512GB RAM per node.
+
+**Rhino**: Rhino is intended for large memory (RAM) computing needs.
+Rhino has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or 256GB RAM per
+node in the default partition. For extremely large RAM needs, there is also
+a 'highmem' partition with 2 x 512GB and 2 x 1TB nodes.
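+
+To see how these partitions are laid out, you can query the scheduler
+once you are logged in (see User Login below). The sketch below is
+illustrative only and assumes the cluster's scheduler is Slurm; the
+`highmem` partition name is taken from the description above.
+
+{{< highlight bash >}}
+# List each node in the 'highmem' partition with its memory (MB) and core count.
+sinfo -N -p highmem -o "%N %m %c"
+{{< /highlight >}}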
+
+User Login
+----------
+
+For Windows users, please see [For Windows Users]({{< relref "for_windows_users" >}}).
+For Mac or Linux users, please see [For Mac/Linux Users]({{< relref "for_maclinux_users" >}}).
+
+**Logging into Crane or Rhino**
+
+{{< highlight bash >}}
+ssh <username>@crane.unl.edu
+{{< /highlight >}}
+
+or
+
+{{< highlight bash >}}
+ssh <username>@rhino.unl.edu
+{{< /highlight >}}
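+
+As an optional convenience, you can add a shortcut to your local SSH
+configuration so that `ssh crane` expands to the full command above.
+This is just an illustrative sketch; replace `<username>` with your HCC
+username.
+
+{{< highlight bash >}}
+# Append a host alias for Crane to your local ~/.ssh/config.
+cat >> ~/.ssh/config << 'EOF'
+Host crane
+    HostName crane.unl.edu
+    User <username>
+EOF
+{{< /highlight >}}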
+
+Duo Security
+------------
+
+Duo two-factor authentication is **required** for access to HCC
+resources. Instructions for registering and using Duo can be found in
+[Setting up and using Duo]({{< relref "setting_up_and_using_duo" >}}).
+
+**Important Notes**
+
+-   The Crane and Rhino clusters are separate, but they are similar
+    enough that a submission script written for one will generally work on
+    the other (excluding GPU resources and some combinations of
+    RAM/core requests).
+
+-   The worker nodes cannot write to the `/home` directories. You must
+    use your `/work` directory for processing in your job. You may
+    access your work directory by using the command below (a minimal
+    submission-script sketch follows this list):
+{{< highlight bash >}}
+$ cd $WORK
+{{< /highlight >}}
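+
+Since jobs must read and write under `$WORK`, a submission script
+typically changes into that directory before running anything. The
+following is a minimal sketch only, assuming the Slurm scheduler; the
+job name and `./my_program` are placeholders, not HCC-provided files.
+
+{{< highlight bash >}}
+#!/bin/bash
+#SBATCH --job-name=example      # illustrative job name
+#SBATCH --ntasks=1              # single task
+#SBATCH --time=00:10:00         # 10 minutes of walltime
+#SBATCH --mem-per-cpu=1024      # memory per core, in MB
+
+cd $WORK                        # worker nodes cannot write to /home
+./my_program                    # placeholder for your executable
+{{< /highlight >}}
+
+Submit the script from a login node with `sbatch <scriptname>`.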
+
+Resources
+---------
+
+- ##### Crane - HCC's newest machine, Crane has 7232 Intel Xeon cores in 452 nodes with 64GB RAM per node.
+
+- ##### Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
+
+- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
+
+- ##### Anvil - HCC's cloud computing cluster based on OpenStack.
+
+- ##### Glidein - A gateway to running jobs on the OSG, a collection of computing resources across the US.
+
+Resource Capabilities
+---------------------
+
+| Cluster | Overview | Processors | RAM | Connection | Storage |
+| ------- | -------- | ---------- | --- | ---------- | ------- |
+| **Crane**   | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br><br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*\*256GB<br><br>37 nodes @ \*\*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage |
+| **Rhino** | 110 node Production-mode LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB\*\*/256GB\*\*\* <br><br> 2 nodes @ 512GB\*\*\*\* <br><br> 2 nodes @ 1024GB\*\*\*\*\* | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
+| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors; 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
+| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
+
+You may request at most the following amounts of RAM per node: <br>
+\*62.5GB <br>
+\*\*187.5GB <br>
+\*\*\*250GB <br>
+\*\*\*\*500GB <br>
+\*\*\*\*\*1000GB
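+
+As an illustration only (again assuming Slurm, as in the sketches
+above), staying within the 62.5GB cap on a standard 64GB Crane node
+might look like the following; `myscript.sh` is a placeholder batch
+script name.
+
+{{< highlight bash >}}
+# Request 62.5GB (62500MB), the per-node cap on a 64GB Crane node.
+sbatch --mem=62500M myscript.sh
+{{< /highlight >}}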
-- 
GitLab