diff --git a/content/applications/_index.md b/content/applications/_index.md
index f94b7d9fc2a2be1405c405d928ebb42ddb5dafa1..eb433bde41476238171335ccc64ddf7361974139 100644
--- a/content/applications/_index.md
+++ b/content/applications/_index.md
@@ -8,6 +8,6 @@ In-depth guides for using applications on HCC resources
 
 On HCC clusters there are many [preinstalled software packages](./modules/) and versions available to use. Unlike a traditional laptop or desktop where software is installed globally and always available, HCC resources use a module system that loads and unloads installed software packages on demand. Users can load pre-installed software with the `module` command: `module load <module-name>`. A more in-depth explanation of the module system is available under [Using Preinstalled Software](./modules/).
 
-Custom software is also able to be run on HCC resources through various methods. [Source code](./user_software/) can be compiled for use by a user. Software libraries for different languages are able to be used through different packages managers including [Anaconda for Python/R](./user_software/using_anaconda_package_manager/), [the R command line](./user_software/r_packages/), [an Anaconda environment for Perl modules](./user_software/installing_perl_modules/), or through [Singularity and Docker](./user_software/using_singularity/).
+Custom software can also be run on HCC resources through various methods. [Source code](./user_software/) can be compiled for use by a user. Software libraries for different languages can be used through different package managers, including [Anaconda for Python/R](./user_software/using_anaconda_package_manager/), [the R command line](./user_software/r_packages/), [an Anaconda environment for Perl modules](./user_software/installing_perl_modules/), or through [Apptainer and Docker](./user_software/using_apptainer/).
 
 There are multiple [in-depth guides](./app_specific/) available for different software packages and tools, including [Jupyter Notebooks](./app_specific/jupyter/), [various bioinformatics tools](./app_specific/bioinformatics_tools/), and [MPI Jobs](./app_specific/mpi_jobs_on_hcc/). Examples of different software and submit scripts can be found in [HCC's job-examples git repository](https://github.com/unlhcc/job-examples).
diff --git a/content/applications/app_specific/running_postgres.md b/content/applications/app_specific/running_postgres.md
index 48abee2ef0393cbf6ce57369db2f45d77f4711fd..567b90928fd011c45d1b56b121bbf32472293450 100644
--- a/content/applications/app_specific/running_postgres.md
+++ b/content/applications/app_specific/running_postgres.md
@@ -63,8 +63,8 @@ export POSTGRES_PORT=$(shuf -i 2000-65000 -n 1)
 echo "Postgres server running on $(hostname) on port $POSTGRES_PORT"
 echo "This job started at $(date +%Y-%m-%dT%T)"
 echo "This job will end at $(squeue --noheader -j $SLURM_JOBID -o %e) (in $(squeue --noheader -j $SLURM_JOBID -o %L))"
-module load singularity
-exec singularity run -B $POSTGRES_HOME/db:/var/lib/postgresql -B $POSTGRES_HOME/run:/var/run/postgresql docker://postgres:11 -c "port=$POSTGRES_PORT"
+module load apptainer
+exec apptainer run -B $POSTGRES_HOME/db:/var/lib/postgresql -B $POSTGRES_HOME/run:/var/run/postgresql docker://postgres:11 -c "port=$POSTGRES_PORT"
 {{< /highlight >}}
 {{% /panel %}}
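
The submit script above picks a random listening port with `shuf` so that concurrent Postgres jobs landing on the same node are unlikely to collide. A minimal sketch of just that selection step, runnable anywhere with GNU coreutils (the 2000-65000 range is taken from the script above):

```shell
#!/bin/bash
# Pick a random high port for the server, as the submit script above does.
# The 2000-65000 range comes from the script; ports below 1024 are
# privileged and cannot be bound by an unprivileged job.
POSTGRES_PORT=$(shuf -i 2000-65000 -n 1)
echo "Postgres server would listen on port $POSTGRES_PORT"
```

Because the port is random, clients must read it from the job's output (the `echo` lines in the script) rather than assuming the default 5432.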
 
diff --git a/content/applications/modules/available_software_for_crane.md b/content/applications/modules/available_software_for_crane.md
index 7c151d54f0a6dadfb4a4615b69b806738e0245d4..b501d00feed00667086fd10de7aad2a06e7c2178 100644
--- a/content/applications/modules/available_software_for_crane.md
+++ b/content/applications/modules/available_software_for_crane.md
@@ -6,9 +6,9 @@ css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mott
 +++
 
 {{% notice tip %}}
-HCC provides some software packages via the Singularity container
+HCC provides some software packages via the Apptainer container
 software. If you do not see a desired package in the module list below,
-please check the [Using Singularity]({{< relref "using_singularity" >}})
+please check the [Using Apptainer]({{< relref "using_apptainer" >}})
 page for the software list there.
 {{% /notice %}}
 
diff --git a/content/applications/modules/available_software_for_swan.md b/content/applications/modules/available_software_for_swan.md
index 7b2e01c5c817e4c822e9a8acfa76f3eb2acf9859..da25127d2a0eec76118bce7d79edf5fcec54140e 100644
--- a/content/applications/modules/available_software_for_swan.md
+++ b/content/applications/modules/available_software_for_swan.md
@@ -6,9 +6,9 @@ css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mott
 +++
 
 {{% notice tip %}}
-HCC provides some software packages via the Singularity container
+HCC provides some software packages via the Apptainer container
 software. If you do not see a desired package in the module list below,
-please check the [Using Singularity]({{< relref "using_singularity" >}})
+please check the [Using Apptainer]({{< relref "using_apptainer" >}})
 page for the software list there.
 {{% /notice %}}
 
diff --git a/content/applications/user_software/using_singularity.md b/content/applications/user_software/using_apptainer.md
similarity index 73%
rename from content/applications/user_software/using_singularity.md
rename to content/applications/user_software/using_apptainer.md
index c5521bf213abf140493ed85681dd33eec25f2920..12ac61852d5b1ec09eacc1b42f94c44abdbd7918 100644
--- a/content/applications/user_software/using_singularity.md
+++ b/content/applications/user_software/using_apptainer.md
@@ -1,12 +1,16 @@
 +++
-title = "Using Singularity and Docker Containers"
-description = "How to use the Singularity containerization software on HCC resources."
+title = "Using Apptainer and Docker Containers"
+description = "How to use the Apptainer containerization software on HCC resources."
 weight=20
 +++
 
-## What is Singularity
+{{% notice note %}}
+Previously known as *Singularity*, the [project has since been renamed](https://apptainer.org/news/community-announcement-20211130/) *Apptainer*.
+{{% /notice %}}
+
+## What is Apptainer
 
-[Singularity](https://www.sylabs.io/singularity/)
+[Apptainer](https://apptainer.org)
 is a containerization solution designed for high-performance computing
 cluster environments.  It allows a user on an HPC resource to run an
 application using a different operating system than the one provided by
@@ -22,7 +26,7 @@ differences that make it more suited for HPC environments.  
 
 ## Finding Images
 
-Singularity can run images from a variety of sources, including
+Apptainer can run images from a variety of sources, including
 both a flat image file or a Docker image from Docker Hub.
 
 ### Docker Hub
@@ -45,14 +49,14 @@ to run the software.
 {{% notice note %}}
 If you would like to request an image to be added, please fill out the
 HCC [Software Request Form](http://hcc.unl.edu/software-installation-request)
-and indicate you would like to use Singularity.
+and indicate you would like to use Apptainer.
 {{% /notice %}}
 
 
 ## Use images on HCC resources
 
-To use Singularity on HCC machines, first load the `singularity `module.
-Singularity provides a few different ways to access the container.
+To use Apptainer on HCC machines, first load the `apptainer` module.
+Apptainer provides a few different ways to access the container.
 Most common is to use the `exec` command to run a specific command
 within the container; alternatively, the `shell` command is used to
 launch a bash shell and work interactively.  Both commands take the
@@ -63,30 +67,30 @@ run.
 Finally, pass any arguments for the program itself in the same manner as you would if running it directly.
  For example, the Spades Assembler software is run using the Docker
 image `unlhcc/spades` and via the command `spades.py`.
-To run the software using Singularity, the commands are:
+To run the software using Apptainer, the commands are:
 
-{{% panel theme="info" header="Run Spades using Singularity" %}}
+{{% panel theme="info" header="Run Spades using Apptainer" %}}
 {{< highlight bash >}}
-module load singularity
-singularity exec docker://unlhcc/spades spades.py <spades arguments>
+module load apptainer
+apptainer exec docker://unlhcc/spades spades.py <spades arguments>
 {{< /highlight >}}
 {{% /panel %}}
 
 ### Use images within a SLURM job
 
-Using Singularity in a SLURM job is similar to how you would use any other software within a job. Load the module, then execute your image:
+Using Apptainer in a SLURM job is similar to using any other software within a job. Load the module, then execute your image:
 
-{{% panel theme="info" header="Example Singularity SLURM script" %}}
+{{% panel theme="info" header="Example Apptainer SLURM script" %}}
 {{< highlight bash >}}
 #!/bin/bash
 #SBATCH --time=03:15:00          # Run time in hh:mm:ss
 #SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
-#SBATCH --job-name=singularity-test
+#SBATCH --job-name=apptainer-test
 #SBATCH --error=/work/[groupname]/[username]/job.%J.err
 #SBATCH --output=/work/[groupname]/[username]/job.%J.out
 
-module load singularity
-singularity exec docker://unlhcc/spades spades.py <spades arguments>
+module load apptainer
+apptainer exec docker://unlhcc/spades spades.py <spades arguments>
 {{< /highlight >}}
 {{% /panel %}}
 
@@ -95,15 +99,15 @@ singularity exec docker://unlhcc/spades spades.py <spades arguments>
 Custom images can be created locally on your personal machine and added to Docker Hub for use
 on HCC clusters. More information on creating custom Docker images can be found in the [Docker documentation](https://docs.docker.com/develop/develop-images/baseimages/).
 
-You can create custom Docker image and use it with Singularity on our clusters.
-Singularity can run images directly from Docker Hub, so you don't need to upload anything to HCC.
+You can create a custom Docker image and use it with Apptainer on our clusters.
+Apptainer can run images directly from Docker Hub, so you don't need to upload anything to HCC.
 For this purpose, you just need to have a Docker Hub account and upload
 your image there. Then, if you want to run the command "*mycommand*"
 from the image "*myimage*", type:
 
 {{< highlight bash >}}
-module load singularity
-singularity exec docker://myaccount/myimage mycommand
+module load apptainer
+apptainer exec docker://myaccount/myimage mycommand
 {{< /highlight >}}
 
 where "*myaccount*" is your Docker Hub account.
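
The references used above follow the pattern `docker://<account>/<image>[:<tag>]`. As a sketch of how the pieces fit together, such a reference can be split with plain shell parameter expansion (the `unlhcc/spades:3.11.0` value is taken from the examples on this page):

```shell
#!/bin/bash
# Split a docker:// image reference into account, image name, and tag.
REF="docker://unlhcc/spades:3.11.0"
IMAGE=${REF#docker://}     # unlhcc/spades:3.11.0
ACCOUNT=${IMAGE%%/*}       # unlhcc
NAME_TAG=${IMAGE#*/}       # spades:3.11.0
NAME=${NAME_TAG%%:*}       # spades
TAG=${NAME_TAG##*:}        # 3.11.0 (Docker defaults to "latest" when omitted)
echo "$ACCOUNT/$NAME at tag $TAG"
```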
@@ -112,13 +116,13 @@ In case you see the error `ERROR MANIFEST_INVALID: manifest invalid`
 when running the command above, try:
 
 {{< highlight bash >}}
-module load singularity
+module load apptainer
 unset REGISTRY
-singularity exec docker://myaccount/myimage mycommand
+apptainer exec docker://myaccount/myimage mycommand
 {{< /highlight >}}
 
 {{% notice info %}}
-If you get the error `FATAL: kernel too old` when using your Singularity image on the HCC clusters, that means the *glibc* version in your image is too new for the kernel on the cluster. One way to solve this is to use lower version of your base image (for example, if you have used Ubuntu:18.04 please use Ubuntu:16.04 instead).
+If you get the error `FATAL: kernel too old` when using your Apptainer image on the HCC clusters, that means the *glibc* version in your image is too new for the kernel on the cluster. One way to solve this is to use a lower version of your base image (for example, if you have used Ubuntu:18.04, use Ubuntu:16.04 instead).
 {{% /notice %}}
 
 
@@ -145,7 +149,7 @@ your `$WORK` directory and set the `PYTHONPATH` variable to that
 location in your submit script.  The extra packages will then be "seen"
 by the Python interpreter within the image.  To ensure the packages will
 work, the install must be done from within the container via
-the `singularity shell` command.  For example, suppose you are using
+the `apptainer shell` command.  For example, suppose you are using
 the `tensorflow-gpu` image and need the packages `nibabel` and `tables`.
  First, run an interactive SLURM job to get a shell on a worker node.
 
@@ -167,13 +171,13 @@ worker node.  Next, start an interactive session in the container.
 
 {{% panel theme="info" header="Start a shell in the container" %}}
 {{< highlight bash >}}
-module load singularity
-singularity shell docker://unlhcc/tensorflow-gpu
+module load apptainer
+apptainer shell docker://unlhcc/tensorflow-gpu
 {{< /highlight >}}
 {{% /panel %}}
 
 This may take a few minutes to start.  Again, the prompt will change and
-begin with `Singularity` to indicate you're within the container.
+begin with `Apptainer` to indicate you're within the container.
 
 Next, install the needed packages via `pip` to a location somewhere in
 your `work` directory.  For example, `$WORK/tf-gpu-pkgs`.  (If you are
@@ -194,8 +198,8 @@ packages for.   Be sure to use a separate location for each image's
 extra packages.
 
 To make the packages visible within the container, you'll need to add a
-line to the submit script used for your Singularity job.  Before the
-lines to load the `singularity `module and run the script, add a line
+line to the submit script used for your Apptainer job.  Before the
+lines that load the `apptainer` module and run the script, add a line
 setting the `PYTHONPATH` variable to the `$WORK/tf-gpu-pkgs` directory.
 For example,
 
@@ -204,30 +208,30 @@ For example,
 #!/bin/bash
 #SBATCH --time=03:15:00          # Run time in hh:mm:ss
 #SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
-#SBATCH --job-name=singularity-test
+#SBATCH --job-name=apptainer-test
 #SBATCH --partition=gpu
 #SBATCH --gres=gpu
 #SBATCH --error=/work/[groupname]/[username]/job.%J.err
 #SBATCH --output=/work/[groupname]/[username]/job.%J.out
  
 export PYTHONPATH=$WORK/tf-gpu-pkgs
-module load singularity
-singularity exec docker://unlhcc/tensorflow-gpu python /path/to/my_tf_code.py
+module load apptainer
+apptainer exec docker://unlhcc/tensorflow-gpu python /path/to/my_tf_code.py
 {{< /highlight >}}
 {{% /panel %}}
 
 The additional packages should then be available for use by your Python
 code running within the container.
 
-### What if I need a specific software version of the Singularity image?
+### What if I need a specific software version of the Apptainer image?
 
 You can see all the available versions of the software built with
-Singularity in the table above. If you don't specify a specific sofware
-version, Singulariy will use the latest one. If you want to use a
+Apptainer in the table above. If you don't specify a software
+version, Apptainer will use the latest one. If you want to use a
 specific version instead, you can append the version number from the
-table to the image. For example, if you want to use the Singularity
+table to the image. For example, if you want to use the Apptainer
 image for Spades version 3.11.0, run:
 
 {{< highlight bash >}}
-singularity exec docker://unlhcc/spades:3.11.0 spades.py
+apptainer exec docker://unlhcc/spades:3.11.0 spades.py
 {{< /highlight >}}
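
The `PYTHONPATH` mechanism used in the submit script above can be verified outside a container. A minimal sketch, using `/tmp` in place of `$WORK` and a hypothetical package named `mypkg` (both are illustrative stand-ins, not part of the original scripts):

```shell
#!/bin/bash
# Demonstrate that PYTHONPATH makes a directory of extra packages visible
# to the Python interpreter -- the same mechanism the container relies on
# to "see" packages installed outside the image.
mkdir -p /tmp/tf-gpu-pkgs/mypkg
echo "VERSION = '1.0'" > /tmp/tf-gpu-pkgs/mypkg/__init__.py
export PYTHONPATH=/tmp/tf-gpu-pkgs
python3 -c "import mypkg; print(mypkg.VERSION)"   # prints 1.0
```

Inside a job, the same `export PYTHONPATH=...` line before the `apptainer exec` call makes the packages visible to the interpreter in the container.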