diff --git a/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md b/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
index bae4b76360a0af1262ea5cab04019a4856c23345..58ae124c97202f8be03c67deca1561d4c9e99bbf 100644
--- a/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
+++ b/content/guides/submitting_jobs/submitting_cuda_or_openacc_jobs.md
@@ -9,12 +9,27 @@ Crane has four types of GPUs available in the **gpu** partition. The
 type of GPU is configured as a SLURM feature, so you can specify a type
 of GPU in your job resource requirements if necessary.
 
 |    Description       | SLURM Feature |      Available Hardware      |
 | -------------------- | ------------- | ---------------------------- |
-| Tesla K20, non-IB    | gpu_k20       | 3 nodes - 2 GPUs per node    |
-| Teska K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs per node    |
-| Tesla K40, with IB   | gpu_k40       | 5 nodes - 4 K40M GPUs per node<br> 1 node - 2 K40C GPUs |
-| Tesla P100, with OPA | gpu_p100      | 2 nodes - 2 GPUs per node |
+| Tesla K20, non-IB    | gpu_k20       | 3 nodes - 2 GPUs with 4 GB mem per node |
+| Tesla K20, with IB   | gpu_k20       | 3 nodes - 3 GPUs with 4 GB mem per node |
+| Tesla K40, with IB   | gpu_k40       | 5 nodes - 4 K40M GPUs with 11 GB mem per node<br> 1 node - 2 K40C GPUs |
+| Tesla P100, with OPA | gpu_p100      | 2 nodes - 2 GPUs with 12 GB mem per node |
 
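+For illustration, a submit script could select one of these GPU types with
+SLURM's `--constraint` option. The partition and feature names below come
+from the table above; the GPU count, walltime, module name, and application
+name are placeholders:
+
+```bash
+#!/bin/bash
+#SBATCH --partition=gpu        # GPU partition on Crane
+#SBATCH --gres=gpu:1           # number of GPUs requested (placeholder)
+#SBATCH --constraint=gpu_k40   # SLURM feature from the table above
+#SBATCH --time=01:00:00        # placeholder walltime
+
+module load cuda               # assumes a 'cuda' module is available
+./my_cuda_app                  # placeholder application
+```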
 
 To run your job on the next available GPU regardless of type, add the