diff --git a/content/_index.md b/content/_index.md
index b0531ccd6b65399c905316ca53496a4ccd8ba12c..bc6b11c4317d565a8da49beda90a1ffe3b1a2086 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -29,7 +29,7 @@ Which Cluster to Use?
 **Crane**: Crane is the newest and most powerful HCC resource. If you
 are new to using HCC resources, Crane is the recommended cluster to use
 initially.  Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
-node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node. 
+node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
 
 **Rhino**: Rhino is intended for large memory (RAM) computing needs.
 Rhino has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or 256GB RAM per
@@ -82,11 +82,7 @@ Resources
 
 - ##### Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
 
-- ##### Red - This cluster is the resource for UNL's US CMS Tier-2 site.
-
-    - [CMS](http://www.uscms.org/)
-    - [Open Science Grid](http://www.opensciencegrid.org)
-    - [MyOSG](https://myosg.grid.iu.edu/)
+- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
 
 - ##### Anvil - HCC's cloud computing cluster based on OpenStack
 
@@ -99,7 +95,7 @@ Resource Capabilities
 | ------- | ---------| ---------- | --- | ---------- | ------
 | **Crane**   | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*\*256GB<br><br>37 nodes @ \*\*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
 | **Rhino** | 110 node Production-mode LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB\*\*/256GB\*\*\* <br><br> 2 nodes @ 512GB\*\*\*\* <br><br> 2 nodes @ 1024GB\*\*\*\*\* | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
-| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and  Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
+| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum; actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
 | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
 
 You may only request the following amount of RAM: <br>
diff --git a/content/facilities.md b/content/facilities.md
index 5842dae5ba032b084c0c257e69c594edc90a2c1e..d6fd2b5d3f511dbc275064a567b4bf40028cc903 100644
--- a/content/facilities.md
+++ b/content/facilities.md
@@ -48,7 +48,7 @@ These resources are detailed further below.
 * 48 2-socket Xeon E5-2660 (2.2GHz) (32 slots per node)
 * 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
 * 2 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
-* 6,600 TB HDFS storage (3,300 TB usable)
+* 10.8 PB HDFS storage (raw)
 * Mix of 1, 10, and 40 GbE networking
     * 1x Dell S6000-ON switch
     * 2x Dell S4048-ON switch