diff --git a/content/Applications/_index.md b/content/Applications/_index.md
index e581928afd9378bc73cd1b66e9aa556be3491c0d..1caa5c3db3ace86049d9b315d731bb9baaeb5418 100644
--- a/content/Applications/_index.md
+++ b/content/Applications/_index.md
@@ -1,9 +1,9 @@
 +++
-title = "Guides"
-weight = "20"
+title = "Applications"
+weight = "40"
 +++
 
-In-depth guides to using HCC resources
+In-depth guides for using applications on HCC resources
 --------------------------------------
 
 {{% children description="true" %}}
diff --git a/content/connecting/_index.md b/content/connecting/_index.md
index 7aa654ad3bc043ae810821f4dfed7fa5967a8cc6..f8b2f3eaf85ce0b044454e759636a53d7ff73912 100644
--- a/content/connecting/_index.md
+++ b/content/connecting/_index.md
@@ -1,106 +1,10 @@
 +++
-title = "HCC Documentation"
-description = "HCC Documentation Home"
-weight = "1"
+title = "Connecting"
+description = "Information on connecting to HCC resources"
+weight = "30"
 +++
 
-HCC Documentation
-============================
+How to connect to HCC resources
+--------------------------------------
 
-
-The Holland Computing Center supports a diverse collection of research
-computing hardware.  Anyone in the University of Nebraska system is
-welcome to apply for an account on HCC machines.
-
-Access to these resources is shared by default with the rest of the user
-community via various job schedulers; the scheduling policies may be found
-on the pages for the individual resources. Alternatively, a user may buy
-into an existing resource, acquiring 'priority access'. Finally, several
-machines are available via Condor for opportunistic use, which gives users
-almost immediate access, but jobs are subject to preemption.
-
-#### [New Users Sign Up](http://hcc.unl.edu/new-user-request)
-
-#### [Quick Start Guides](/quickstarts)
-
-Which Cluster to Use?
----------------------
-
-**Crane**: Crane is the newest and most powerful HCC resource. If you
-are new to using HCC resources, Crane is the recommended cluster to use
-initially.  Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
-node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
-
-**Rhino**: Rhino is intended for large-memory (RAM) computing needs.
-It has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or
-256GB RAM per node in the default partition. For extremely large RAM
-needs, there is also a 'highmem' partition with 2 x 512GB and 2 x 1TB nodes.
-
-User Login
-----------
-
-Windows users should refer to [For Windows Users]({{< relref "for_windows_users" >}}).
-Mac or Linux users should refer to [For Mac/Linux Users]({{< relref "for_maclinux_users">}}).
-
-**Logging into Crane or Rhino**
-
-{{< highlight bash >}}
-ssh <username>@crane.unl.edu
-{{< /highlight >}}
-
-or
-
-{{< highlight bash >}}
-ssh <username>@rhino.unl.edu
-{{< /highlight >}}
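-
-If you connect often, one optional convenience is an entry in your local
-SSH configuration so that a plain `ssh crane` expands to the full command.
-This is a sketch only; the alias name is arbitrary and `<username>` is a
-placeholder for your HCC username:
-
-{{< highlight bash >}}
-# ~/.ssh/config -- hypothetical convenience entry
-Host crane
-    HostName crane.unl.edu
-    User <username>
-{{< /highlight >}}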
-
-Duo Security
-------------
-
-Duo two-factor authentication is **required** for access to HCC
-resources. Instructions for registering and using Duo can be found in
-[Setting up and using Duo]({{< relref "/Accounts/setting_up_and_using_duo">}}).
-
-**Important Notes**
-
--   The Crane and Rhino clusters are separate, but they are
-    similar enough that submission scripts written for one will
-    generally work on the other (excluding GPU resources and some
-    combinations of RAM/core requests).
-
--   The worker nodes cannot write to the `/home` directories. You must
-    use your `/work` directory for processing in your job. You may
-    access your work directory by using the command:
-{{< highlight bash >}}
-$ cd $WORK
-{{< /highlight >}}
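-
-As an example, a minimal submission script might change into `$WORK`
-before running anything. This is a sketch only: the directives assume
-the SLURM scheduler, and the job name, walltime, memory value, and
-executable are all placeholders.
-
-{{< highlight bash >}}
-#!/bin/bash
-#SBATCH --job-name=example   # placeholder job name
-#SBATCH --time=00:10:00      # placeholder walltime
-#SBATCH --mem=4G             # placeholder memory request
-
-cd $WORK          # worker nodes cannot write to /home, so run from /work
-./my_program      # hypothetical executable
-{{< /highlight >}}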
-
-Resources
----------
-
-- ##### Crane - HCC's newest machine, with 7,232 Intel Xeon cores in 452 nodes and 64GB RAM per node.
-
-- ##### Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
-
-- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
-
-- ##### Anvil - HCC's cloud computing cluster based on OpenStack.
-
-- ##### Glidein - A gateway to running jobs on the OSG, a collection of computing resources across the US.
-
-Resource Capabilities
----------------------
-
-| Cluster | Overview | Processors | RAM | Connection | Storage |
-| ------- | -------- | ---------- | --- | ---------- | ------- |
-| **Crane**   | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node<br><br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*\*256GB<br><br>37 nodes @ \*\*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage |
-| **Rhino** | 110 node Production-mode LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB\*\*/256GB\*\*\*<br><br>2 nodes @ 512GB\*\*\*\*<br><br>2 nodes @ 1024GB\*\*\*\*\* | QDR Infiniband | ~1.5TB local scratch per node<br><br>~360TB shared BeeGFS storage |
-| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors; 7,280 cores maximum (actual number of job slots depends on RAM usage) | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
-| **Anvil** | 76 compute nodes (partially used for cloud, the rest for general computing), 12 storage nodes, 2 network nodes; OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz, 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
-
-You may only request the following amounts of RAM: <br>
-\*62.5GB <br>
-\*\*187.5GB <br>
-\*\*\*250GB <br>
-\*\*\*\*500GB <br>
-\*\*\*\*\*1000GB
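-
-For example, to stay within the 62.5GB cap on Crane's 64GB nodes, the
-memory request in a submission script might look like the following
-(a sketch only; `--mem` is a standard SLURM option, and the value shown
-is simply one that fits under the cap):
-
-{{< highlight bash >}}
-#SBATCH --mem=62500M   # 62,500 MB, which stays within the 62.5GB cap
-{{< /highlight >}}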
+{{% children description="true" %}}
diff --git a/content/events/_index.md b/content/events/_index.md
index ab0b059526d54bb96c3f0a47848bdd7b64057d74..c225c4e9413972a56d8002b66261a07f762e713a 100644
--- a/content/events/_index.md
+++ b/content/events/_index.md
@@ -1,7 +1,7 @@
 +++
 title = "Events"
 description = "Historical listing of various HCC events."
-weight = "30"
+weight = "70"
 +++
 
 Historical listing of HCC Events
diff --git a/content/faq/_index.md b/content/faq/_index.md
index 75240bf5e483aadc40532c091428e5f54d8cd79d..eddc1c1ae84305814631fbf875bad3949b40796c 100644
--- a/content/faq/_index.md
+++ b/content/faq/_index.md
@@ -1,7 +1,7 @@
 +++
 title = "FAQ"
 description = "HCC Frequently Asked Questions"
-weight = "20"
+weight = "10"
 +++
 
 - [I have an account, now what?](#i-have-an-account-now-what)
diff --git a/content/osg/_index.md b/content/osg/_index.md
index 034162b2d822a44bfe370ae3fc52b5ac79229e31..4dcce7d913f66f75435f717d6039e3b28d883fa7 100644
--- a/content/osg/_index.md
+++ b/content/osg/_index.md
@@ -1,7 +1,7 @@
 +++
 title = "The Open Science Grid"
 description = "How to utilize the Open Science Grid (OSG)."
-weight = "40"
+weight = "80"
 +++
 
 If you find that you are not getting access to the volume of computing
diff --git a/content/quickstarts/_index.md b/content/quickstarts/_index.md
index d2f1201d7a2cd269d3625cc8576b1251852e5f91..95f2d05e4eaa1d249c3dfed0d7385b59ffbe0238 100755
--- a/content/quickstarts/_index.md
+++ b/content/quickstarts/_index.md
@@ -1,6 +1,6 @@
 +++
 title = "Quickstarts"
-weight = "10"
+weight = "15"
 +++
 
 The quick start guides require that you already have an HCC account.  You