Commit 7fedf004 authored by Carrie A Brown

Merge branch 'ram' into 'master'

changed paragraph to match the RAM listed in following table

See merge request !81
parents d935203a 013b9acf
@@ -29,13 +29,14 @@ Which Cluster to Use?
 **Crane**: Crane is the newest and most powerful HCC resource. If you
 are new to using HCC resources, Crane is the recommended cluster to use
 initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
-node. If your job requires more than 16 cores per node or you need more
-than 64GB of memory, consider using Tusker instead.
+node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.
+If your job requires more than 36 cores per node or you need more than
+512GB of memory, consider using Tusker instead.
 **Tusker**: Similar to Crane, Tusker is another cluster shared by all
-campus users. It has 4 CPU/64 cores and 256GB RAM per nodes. Two nodes
-have 512GB RAM for very large memory jobs. So for jobs requiring more
-than 16 cores per node or large memory, Tusker would be a better option.
+campus users. It has 4 CPU/64 cores and 256GB RAM per node. Two nodes
+have 1024GB RAM for very large memory jobs. So for jobs requiring more
+than 36 cores per node or large memory, Tusker would be a better option.
 User Login
 ----------
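The practical rule in the revised paragraphs is to match a job's per-node core and memory request to the hardware limits above. As a concrete illustration, here is a minimal SLURM submit script sized for a CraneOPA node; this assumes the clusters are scheduled with SLURM, and the job name and executable are hypothetical placeholders, not taken from the documentation being edited.

```bash
#!/bin/bash
# Hypothetical SLURM request sized for one CraneOPA node
# (2 CPU/36 cores, up to 512GB RAM). Job name and executable
# are placeholders.
#SBATCH --job-name=ram_demo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36   # all 36 cores on one CraneOPA node
#SBATCH --mem=500G             # stays under the 512GB per-node maximum
#SBATCH --time=01:00:00

srun ./my_application          # placeholder executable
```

A request above either limit, for example --ntasks-per-node=64 or --mem=1000G, would only fit on Tusker's 64-core, large-memory nodes, which is exactly the trade-off the paragraph describes.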
@@ -94,7 +95,7 @@ Resource Capabilities
 | Cluster | Overview | Processors | RAM | Connection | Storage
 | ------- | ---------| ---------- | --- | ---------- | ------
 | **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz 2 CPU/16 cores per node<br> <br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*256GB<br><br>37 nodes @ \*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage
-| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*1 Node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
+| **Tusker** | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | \*\*256 GB RAM per node<br>\*\*\*2 Nodes with 512GB per node<br>\*\*\*\*2 Nodes with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage<br>~500GB local scratch |
 | **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
 | **Anvil** | 76 Compute nodes (partially used for cloud, the rest used for general computing), 12 Storage nodes, 2 Network nodes Openstack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
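To cross-check the per-node core and RAM figures in this table against a live system, SLURM's sinfo can print them directly; a small sketch, again assuming SLURM is available on the cluster login node:

```bash
# Print each node's CPU count and memory, assuming a SLURM cluster.
# %N = node name, %c = CPUs per node, %m = memory per node in MB.
sinfo --Node --format="%N %c %m"
```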