Commit 3a28e187 authored by tharvill1

Update _index.md

parent 548bc234
1 merge request !259: Update _index.md
@@ -73,7 +73,7 @@ Resource Capabilities
| Cluster | Overview | Processors | RAM\* | Connection | Storage
| ------- | ---------| ---------- | --- | ---------- | ------
- | **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br><br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 62.5GB<br><br>79 nodes @ 250GB<br><br>37 nodes @ 500GB<br><br>4 nodes @ 15000GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
+ | **Crane** | 572 node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br><br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ 62.5GB<br><br>79 nodes @ 250GB<br><br>37 nodes @ 500GB<br><br>4 nodes @ 1500GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
| **Rhino** | 110 node LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 187.5GB/250GB <br><br> 2 nodes @ 500GB<br><br> 2 nodes @ 994GB | QDR Infiniband | ~1.5TB local scratch per node <br><br> ~360TB shared BeeGFS storage |
| **Red** | 344 node LINUX cluster | Various Xeon and Opteron processors 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
| **Anvil** | 76 Compute nodes (Partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |