From 2337bf6919e4b386191bf76f5123c8af99f1b7d0 Mon Sep 17 00:00:00 2001
From: Adam Caprez <acaprez2@unl.edu>
Date: Mon, 26 Nov 2018 20:45:45 +0000
Subject: [PATCH] Remove Sandhills references from docs.

---
 content/_index.md                                        | 9 +++------
 .../running_applications/running_gaussian_at_hcc.md      | 2 +-
 content/guides/submitting_jobs/_index.md                 | 4 ++--
 content/quickstarts/for_windows_users.md                 | 7 +++----
 4 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/content/_index.md b/content/_index.md
index 1aecf87a..c9eeecdb 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -48,11 +48,9 @@ Users](https://hcc-docs.unl.edu/pages/viewpage.action?pageId=2851290).
 **Logging into Crane or Tusker**
 
 {{< highlight bash >}}
-ssh crane.unl.edu -l <username>
+ssh <username>@crane.unl.edu
 or
-ssh tusker.unl.edu -l <username>
-or
-ssh sandhills.unl.edu -l <username>
+ssh <username>@tusker.unl.edu
 {{< /highlight >}}
 
 Duo Security
@@ -65,7 +63,7 @@ Duo](https://hcc-docs.unl.edu/display/HCCDOC/Setting+up+and+using+Duo)
 
 **Important Notes**
 
--   The Crane, Tusker and Sandhills clusters are separate. But, they are
+-   The Crane and Tusker clusters are separate, but they are
     similar enough that submission scripts on whichever one will work on
     another, and vice versa.    
      
@@ -100,7 +98,6 @@ Resource Capabilities
 | ------- | ---------| ---------- | --- | ---------- | ------
 | **Crane**   | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
 | **Tusker**  | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*1 Node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
-| **Sandhills** | 108 node Production-mode LINUX cluster (condominium model) | 62 4-socket Opteron 6376 (2.3 Ghz, 64 cores/node)<br>44 4-socket Opteron 6128 (2.0 Ghz, 32 cores/node)<br>2 4-socket Opteron 6168 (1.9 Ghz, 48 cores/node) | 62 nodes @ 192GB<br>44 nodes @ 128GB<br>2 nodes @ 256GB | QDR Infiniband<br>Gigabit Ethernet | 175 TB shared Lustre storage<br>~1.5TB per node
 | **Red** | 344 node Production-mode LINUX cluster | Various Xeon and  Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
 | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
 
diff --git a/content/guides/running_applications/running_gaussian_at_hcc.md b/content/guides/running_applications/running_gaussian_at_hcc.md
index 3611b253..5e7db265 100644
--- a/content/guides/running_applications/running_gaussian_at_hcc.md
+++ b/content/guides/running_applications/running_gaussian_at_hcc.md
@@ -22,7 +22,7 @@ For access, contact us at
  {{< icon name="envelope" >}}[hcc-support@unl.edu] (mailto:hcc-support@unl.edu)
 and include your HCC username. After your account has been added to the
 group "*gauss*", here are four simple steps to run Gaussian 09 on
-Sandhills, Tusker, and Crane:
+Tusker and Crane:
 
 **Step 1:** Copy **g09** sample input file and SLURM script to your
 "g09" test directory on the `/work` filesystem:
diff --git a/content/guides/submitting_jobs/_index.md b/content/guides/submitting_jobs/_index.md
index e4629049..d454f2e3 100644
--- a/content/guides/submitting_jobs/_index.md
+++ b/content/guides/submitting_jobs/_index.md
@@ -4,9 +4,9 @@ description =  "How to submit jobs to HCC resources"
 weight = "10"
 +++
 
-Crane, Sandhills and Tusker are managed by
+Crane and Tusker are managed by
 the [SLURM](https://slurm.schedmd.com) resource manager.  
-In order to run processing on Crane, Sandhills or Tusker, you
+In order to run processing on Crane or Tusker, you
 must create a SLURM script that will run your processing. After
 submitting the job, SLURM will schedule your processing on an available
 worker node.
diff --git a/content/quickstarts/for_windows_users.md b/content/quickstarts/for_windows_users.md
index d9757571..e7d20a00 100644
--- a/content/quickstarts/for_windows_users.md
+++ b/content/quickstarts/for_windows_users.md
@@ -28,8 +28,7 @@ Access to HCC Supercomputers
 -------------------------------
 
 Here we use the HCC cluster **Tusker** for demonstration. To use the
-**Crane** or **Sandhills** clusters, replace tusker.unl.edu with
-crane.unl.edu or sandhills.unl.edu.
+**Crane** cluster, replace `tusker.unl.edu` with `crane.unl.edu`.
 
 1.  On the first screen, type `tusker.unl.edu` for Host Name, then click
     **Open**. 
@@ -88,8 +87,8 @@ and the HCC supercomputers through a Graphic User Interface (GUI).
 Download and install the third party application **WinSCP**
 to connect the file systems between your personal computer and the HCC supercomputers. 
 Below is a step-by-step installation guide. Here we use the HCC cluster **Tusker**
-for demonstration. To use the **Sandhills** cluster, replace `tusker.unl.edu`
-with `sandhills.unl.edu`.
+for demonstration. To use the **Crane** cluster, replace `tusker.unl.edu`
+with `crane.unl.edu`.
 
 1.  On the first screen, type `tusker.unl.edu` for Host name, enter your
     HCC account username and password for User name and Password. Then
-- 
GitLab