From 91e866ed39e9484c763899202306f9671f663728 Mon Sep 17 00:00:00 2001
From: josh <joshcarini@gmail.com>
Date: Fri, 8 Feb 2019 14:25:49 -0600
Subject: [PATCH] changed table cell to say tusker has 2 nodes at 1024GB,
 instead of just 1. changed preceding paragraph to match the data in the
 table

---
 content/_index.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/content/_index.md b/content/_index.md
index bc29969e..188f97b7 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -29,13 +29,14 @@ Which Cluster to Use?
 **Crane**: Crane is the newest and most powerful HCC resource . If you
 are new to using HCC resources, Crane is the recommended cluster to use
 initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
-node. If your job requires more than 16 cores per node or you need more
-than 64GB of memory, consider using Tusker instead.
+node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
+If your job requires more than 36 cores per node or you need more than
+512GB of memory, consider using Tusker instead.
 
 **Tusker**: Similar to Crane, Tusker is another cluster shared by all
-campus users. It has 4 CPU/ 64 cores and 256GB RAM per nodes. Two nodes
-have 512GB RAM for very large memory jobs. So for jobs requiring more
-than 16 cores per node or large memory, Tusker would be a better option.
+campus users. It has 4 CPU/ 64 cores and 256GB RAM per node. Two nodes
+have 1024GB RAM for very large memory jobs. So for jobs requiring more
+than 36 cores per node or large memory, Tusker would be a better option.
 
 User Login
 ----------
@@ -94,7 +95,7 @@ Resource Capabilities
 | Cluster | Overview | Processors | RAM | Connection | Storage
 | ------- | ---------| ---------- | --- | ---------- | ------
 | **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
-| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*1 Node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
+| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*2 Nodes with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
 | **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
 | **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
 
-- 
GitLab