
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (1)
@@ -11,10 +11,10 @@ of GPU in your job resource requirements if necessary.

 | Description          | SLURM Feature | Available Hardware                                       |
 | -------------------- | ------------- | -------------------------------------------------------- |
-| Tesla K20, non-IB    | gpu_k20       | 3 nodes - 2 GPUs per node                                |
-| Tesla K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs per node                                |
-| Tesla K40, with IB   | gpu_k40       | 5 nodes - 4 K40M GPUs per node<br> 1 node - 2 K40C GPUs  |
-| Tesla P100, with OPA | gpu_p100      | 2 nodes - 2 GPUs per node                                |
+| Tesla K20, non-IB    | gpu_k20       | 3 nodes - 2 GPUs with 4 GB mem per node                  |
+| Tesla K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs with 4 GB mem per node                  |
+| Tesla K40, with IB   | gpu_k40       | 5 nodes - 4 K40M GPUs with 11 GB mem per node<br> 1 node - 2 K40C GPUs |
+| Tesla P100, with OPA | gpu_p100      | 2 nodes - 2 GPUs with 12 GB mem per node                 |

 To run your job on the next available GPU regardless of type, add the
...
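
The sentence above is truncated in this diff view, but the table already explains the mechanism: each GPU type is exposed as a SLURM node feature (gpu_k20, gpu_k40, gpu_p100). As a minimal sketch only, a batch script requesting a GPU might look like the following; the partition name `gpu`, the resource sizes, and the program name are assumptions for illustration, not taken from the source:

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu          # assumed partition name, check your cluster's docs
#SBATCH --gres=gpu               # request one GPU of any type
#SBATCH --constraint=gpu_k40     # optional: pin to a specific type from the table above
#SBATCH --time=01:00:00
#SBATCH --mem=4gb

# Hypothetical application; replace with your actual GPU program.
./my_gpu_program
```

Omitting the `--constraint` line corresponds to the "next available GPU regardless of type" case described in the truncated sentence.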