From 45083da63a604ab2a702d578351c089b9bfc5224 Mon Sep 17 00:00:00 2001
From: Carrie Brown <carrie.brown@unl.edu>
Date: Wed, 21 Oct 2020 13:19:40 -0500
Subject: [PATCH] Simplify max requestable memory in resource chart

---
 content/_index.md | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/content/_index.md b/content/_index.md
index 7e9b89ef..0968025c 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -71,17 +71,11 @@ Resources
 Resource Capabilities
 ---------------------
 
-| Cluster | Overview | Processors | RAM | Connection | Storage
+| Cluster | Overview | Processors | RAM Raw (Max Usable\*) | Connection | Storage
 | ------- | ---------| ---------- | --- | ---------- | ------
-| **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*\*256GB<br><br>37 nodes @ \*\*\*\*512GB<br><br>4 nodes @ \*\*\*\*\*\*1.5TB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
-| **Rhino** | 110 node LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB\*\*/256GB\*\*\* <br><br> 2 nodes @ 512GB\*\*\*\* <br><br> 2 nodes @ 1024GB\*\*\*\*\* | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
+| **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 64GB (62.5GB\*)<br><br>79 nodes @ 256GB (250GB\*)<br><br>37 nodes @ 512GB (500GB\*)<br><br>4 nodes @ 1.5TB (1500GB\*) | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
+| **Rhino** | 110 node LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB (187.5GB\*)/256GB (250GB\*) <br><br> 2 nodes @ 512GB (500GB\*) <br><br> 2 nodes @ 1024GB (994GB\*) | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
 | **Red** | 344 node LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
 | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
 
-You may only request the following amount of RAM: <br>
-\*62.5GB <br>
-\*\*187.5GB <br>
-\*\*\*250GB <br>
-\*\*\*\*500GB <br>
-\*\*\*\*\*994GB <br>
-\*\*\*\*\*\*1500GB
+\* Due to overhead needs, the maximum requestable memory per node is lower than the total memory installed on the node. Requesting more than this value may prevent your job from running, or restrict it to a smaller set of eligible nodes.
-- 
GitLab
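For reviewers, the practical effect of the footnote is that batch jobs should request at most the "Max Usable" value for the node size they target. Assuming a SLURM-style scheduler (an assumption; the patch itself does not name the scheduler, and the partition and application names below are hypothetical), a request that stays under the 62.5GB cap on a 64GB Crane node might look like this sketch:

```shell
#!/bin/bash
# Hypothetical batch script -- scheduler directives, partition name,
# and application name are illustrative assumptions, not from the patch.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --mem=62500M        # stay at or below the 62.5GB usable cap on a 64GB node
#SBATCH --time=01:00:00
#SBATCH --job-name=mem-demo

srun ./my_application
```

Requesting, say, `--mem=64000M` on such a node would hit the overhead described in the footnote: the job could only be scheduled on the larger (256GB+) nodes, of which there are far fewer.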