From 5e118c634c02409d58f3881e99cdaa678b20b345 Mon Sep 17 00:00:00 2001
From: Natasha Pavlovikj <natasha.pavlovikj@huskers.unl.edu>
Date: Wed, 3 Aug 2022 12:51:46 -0500
Subject: [PATCH] Add Swan GPU info

---
 content/submitting_jobs/submitting_gpu_jobs.md | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/content/submitting_jobs/submitting_gpu_jobs.md b/content/submitting_jobs/submitting_gpu_jobs.md
index 879d9a31..e0ebf598 100644
--- a/content/submitting_jobs/submitting_gpu_jobs.md
+++ b/content/submitting_jobs/submitting_gpu_jobs.md
@@ -18,7 +18,31 @@ of GPU in your job resource requirements if necessary.
 | Tesla P100, with OPA   | gpu_p100      | 2 nodes - 2 GPUs with 12 GB per node |
 | Tesla V100, with 10GbE | gpu_v100      | 1 node - 4 GPUs with 16 GB per node |
 | Tesla V100, with OPA   | gpu_v100      | 21 nodes - 2 GPUs with 32GB per node |
-| Tesla T4, with IB      | gpu_t4        | 12 nodes - 2 GPUs with 16GB per node |
+
+Swan has two types of GPUs available in the `gpu` partition. The
+type of GPU is configured as a SLURM feature, so you can specify a type
+of GPU in your job resource requirements if necessary.
+
+|    Description         | SLURM Feature |      Available Hardware      |
+| --------------------   | ------------- | ---------------------------- |
+| Tesla V100S            | gpu_v100      | 4 nodes - 2 GPUs with 32 GB per node |
+| Tesla T4               | gpu_t4        | 12 nodes - 2 GPUs with 16 GB per node |
+
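+For example, a submit script can request one of the V100S GPUs by
+combining the `gpu` partition with the `gpu_v100` feature. The sketch
+below is only an outline: the application line is a placeholder and the
+other resource requests should be adjusted for your job.
+
+```bash
+#!/bin/bash
+#SBATCH --partition=gpu          # Swan's GPU partition
+#SBATCH --gres=gpu:1             # request one GPU
+#SBATCH --constraint=gpu_v100    # select the V100S nodes via their SLURM feature
+#SBATCH --time=01:00:00
+#SBATCH --mem=8gb
+
+./my_gpu_application             # placeholder for your GPU program
+```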
 
 ### Specifying GPU memory (optional)
 
-- 
GitLab