Verified Commit d424c410 authored by Adam Caprez

Add new V100 GPU info.

parent 2e36d405
1 merge request: !167 Rework
+++
title = "Submitting GPU Jobs"
description = "How to submit GPU (CUDA/OpenACC) jobs on HCC resources."
+++
@@ -9,13 +9,14 @@
Crane has four types of GPUs available in the **gpu** partition. The
type of GPU is configured as a SLURM feature, so you can specify a type
of GPU in your job resource requirements if necessary.
| Description | SLURM Feature | Available Hardware |
| -------------------- | ------------- | ---------------------------- |
| Tesla K20, non-IB | gpu_k20 | 3 nodes - 2 GPUs with 4 GB mem per node |
| Tesla K20, with IB | gpu_k20 | 3 nodes - 3 GPUs with 4 GB mem per node |
| Tesla K40, with IB | gpu_k40 | 5 nodes - 4 K40M GPUs with 11 GB mem per node<br> 1 node - 2 K40C GPUs |
| Tesla P100, with OPA | gpu_p100 | 2 nodes - 2 GPUs with 12 GB per node |
| Tesla V100, with 10GbE | gpu_v100 | 1 node - 4 GPUs with 16 GB per node |
| Tesla V100, with OPA | gpu_v100 | 21 nodes - 2 GPUs with 32 GB per node |
To run your job on the next available GPU regardless of type, add the
...
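As a minimal sketch, a batch script that requests a GPU in this partition might look like the following. The `--constraint` line selects one of the SLURM features from the table above; omit it to take the next available GPU of any type. The job name, time, memory, module name, and application path are illustrative assumptions, not values taken from this page.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --partition=gpu        # the gpu partition described above
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --constraint=gpu_v100  # optional: pin to a GPU type from the table
#SBATCH --time=01:00:00
#SBATCH --mem=8gb

# Module name and binary are assumptions for illustration.
module load cuda
./my_cuda_app
```

Submit it with `sbatch <scriptname>` as with any other SLURM job.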