Commit 93037824 authored by Garhan Attebury

Merge branch 't2update' into 'master'

tweaked HDFS space mentioned for red

See merge request !141
parents b70ffb0f 22dd0887
@@ -29,7 +29,7 @@ Which Cluster to Use?
 **Crane**: Crane is the newest and most powerful HCC resource. If you
 are new to using HCC resources, Crane is the recommended cluster to use
 initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per
 node. CraneOPA has 2 CPU/36 cores with a maximum of 512GB RAM per node.

 **Rhino**: Rhino is intended for large memory (RAM) computing needs.
 Rhino has 4 AMD Interlagos CPUs (64 cores) per node, with either 192GB or 256GB RAM per node.
@@ -82,11 +82,7 @@ Resources
 - ##### Rhino - HCC's AMD-based cluster, intended for large RAM computing needs.
-- ##### Red - This cluster is the resource for UNL's US CMS Tier-2 site.
-  - [CMS](http://www.uscms.org/)
-  - [Open Science Grid](http://www.opensciencegrid.org)
-  - [MyOSG](https://myosg.grid.iu.edu/)
+- ##### Red - This cluster is the resource for UNL's [USCMS](https://uscms.org/) Tier-2 site.
 - ##### Anvil - HCC's cloud computing cluster based on OpenStack.
@@ -99,7 +95,7 @@ Resource Capabilities
 | Cluster | Overview | Processors | RAM | Connection | Storage |
 | ------- | -------- | ---------- | --- | ---------- | ------- |
 | **Crane** | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node<br><br>116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node<br><br>("CraneOPA") | 452 nodes @ \*64GB<br><br>79 nodes @ \*\*\*256GB<br><br>37 nodes @ \*\*\*\*512GB | QDR Infiniband<br><br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br><br>~4 TB local scratch per node<br><br>~1452 TB shared Lustre storage |
 | **Rhino** | 110 node Production-mode LINUX cluster | 110 AMD Interlagos CPUs (6272 / 6376), 4 CPU/64 cores per node | 106 nodes @ 192GB\*\*/256GB\*\*\*<br><br>2 nodes @ 512GB\*\*\*\*<br><br>2 nodes @ 1024GB\*\*\*\*\* | QDR Infiniband | ~1.5TB local scratch per node<br><br>~360TB shared BeeGFS storage |
-| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum; actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
+| **Red** | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors, 7,280 cores maximum; actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
 | **Anvil** | 76 Compute nodes (partially used for cloud, the rest for general computing), 12 Storage nodes, 2 Network nodes, OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz, 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |

 You may only request the following amount of RAM: <br>
......
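The table above fixes the per-node core and RAM ceilings a single job must fit within. As a purely illustrative sketch (not part of this commit), assuming the SLURM scheduler used on HCC clusters such as Crane, a batch script sized for one of Crane's standard 2 CPU/16-core, 64GB nodes might look like the following; the module name and executable are hypothetical, and `--mem` is kept below the 64GB hardware total since the footnoted per-node request limits are typically somewhat lower:

```bash
#!/bin/bash
#SBATCH --job-name=example_job     # name shown in the queue
#SBATCH --nodes=1                  # one node; Crane's standard nodes have 2 CPU/16 cores
#SBATCH --ntasks-per-node=16       # use all 16 cores on the node
#SBATCH --mem=60G                  # total job RAM, kept under the ~64GB per-node cap
#SBATCH --time=01:00:00            # wall-clock limit, HH:MM:SS
#SBATCH --output=job.%J.out        # stdout file; %J expands to the job ID
#SBATCH --error=job.%J.err         # stderr file

module load example/1.0            # hypothetical module; list real ones with `module avail`
srun ./my_program                  # hypothetical executable run across the allocated cores
```

Submitted with `sbatch example_job.sh`, such a job fits a standard 64GB node; a job needing the 256GB or 512GB CraneOPA nodes would raise `--mem` accordingly and may need a partition or feature constraint, whose names vary by site.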
@@ -48,7 +48,7 @@ These resources are detailed further below.
 * 48 2-socket Xeon E5-2660 (2.2GHz) (32 slots per node)
 * 24 2-socket Xeon E5-2660 v4 (2.0GHz) (56 slots per node)
 * 2 2-socket Xeon E5-1660 v3 (3.0GHz) (16 slots per node)
-* 6,600 TB HDFS storage (3,300 TB usable)
+* 10.8 PB HDFS storage
 * Mix of 1, 10, and 40 GbE networking
 * 1x Dell S6000-ON switch
 * 2x Dell S4048-ON switch
......