diff --git a/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md b/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
index a20d8b0c5fe2adaa740bbe3d96bc48fffa48952b..2806fe885140715de25c403030c7b2da5f6ab9ec 100644
--- a/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
+++ b/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
@@ -9,13 +9,13 @@
 Crane has four types of GPUs available in the **gpu** partition. The
 type of GPU is configured as a SLURM feature, so you can specify a type
 of GPU in your job resource requirements if necessary.
 
-| Description          | SLURM Feature | Available Hardware           | 
+| Description          | SLURM Feature | Available Hardware           |
 | -------------------- | ------------- | ---------------------------- |
 | Tesla K20, non-IB    | gpu_k20       | 3 nodes - 2 GPUs with 4 GB mem per node |
-| Teska K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs with 4 GB mem per node |
+| Tesla K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs with 4 GB mem per node |
 | Tesla K40, with IB   | gpu_k40       | 5 nodes - 4 K40M GPUs with 11 GB mem per node<br> 1 node - 2 K40C GPUs |
 | Tesla P100, with OPA | gpu_p100      | 2 nodes - 2 GPUs with 12 GB per node |
-| Tesla V100, with     | gpu_v100      | 1 node - 4 GPUs with 16 GB per node |
+| Tesla V100, with 10GbE | gpu_v100    | 1 node - 4 GPUs with 16 GB per node |
 
 To run your job on the next available GPU regardless of type, add the
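
For the table above, a minimal submit-script sketch that pins a job to one of these GPU types via its SLURM feature, using the standard `--partition`, `--gres`, and `--constraint` options. The job name, time/memory amounts, module name, and executable are placeholders rather than values from the guide; dropping the `--constraint` line requests the next available GPU regardless of type.

```bash
#!/bin/bash
#SBATCH --job-name=gpu_example     # placeholder job name
#SBATCH --partition=gpu            # the gpu partition described in the guide
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --constraint=gpu_k40       # SLURM feature from the table above
#SBATCH --time=01:00:00            # example walltime
#SBATCH --mem=4gb                  # example memory request

module load cuda                   # assumed module name; site modules may differ
./my_cuda_app                      # placeholder executable
```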