# if needed, change the current working directory, e.g., from $WORK to /scratch
# pushd /scratch
# run your program of interest and write its output to /scratch,
# using the appropriate output arguments of that program, e.g.,
my_program --output /scratch/output
# return the batch script shell to the directory it was in when pushd was called
# popd
# copy the needed output back to $WORK
cp -r /scratch/output $WORK
{{</highlight>}}
{{% /panel %}}
{{% notice info %}}
If your application requires the input data to be in the current working directory (cwd), or stores its output in the current working directory, make sure you change the current working directory with **pushd /scratch** before you start running your application.
{{% /notice %}}
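The pushd/popd pattern described above can be sketched end to end. In this runnable sketch, temporary directories stand in for **/scratch** and **$WORK** (so it works outside the cluster), and a simple `echo` stands in for the real program:

```shell
#!/bin/bash
SCRATCH=$(mktemp -d)          # stands in for /scratch on the compute node
WORK=$(mktemp -d)             # stands in for $WORK
pushd "$SCRATCH" > /dev/null  # make scratch the current working directory
echo "result" > output.txt    # the program writes its output to the cwd
popd > /dev/null              # return to where the batch script started
cp -r "$SCRATCH"/output.txt "$WORK"/  # copy the output back before the job ends
```

In a real submit script the `mktemp` lines are not needed; you would use **/scratch** and **$WORK** directly, as in the example above.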
Additional examples of SLURM submit scripts that use **scratch** on Swan are provided for
[Trinity](https://hcc.unl.edu/docs/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/running_trinity_in_multiple_steps/).
{{% notice note %}}
Please note that after the job finishes (whether it succeeds or fails), the data in *scratch* for that job is permanently deleted.
{{% /notice %}}
## Disadvantages of Scratch
- limited storage capacity
- shared with other jobs that are running on the same compute/worker node
- jobs spanning multiple compute nodes have their own unique *scratch* storage per compute node
- data stored in *scratch* on one compute node cannot be directly accessed by a different compute node or the processes running there
- temporary storage while the job is running
- if the job fails, no output is saved and checkpointing cannot be used
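The last disadvantage can be partly mitigated in bash with an `EXIT` trap that copies whatever is in *scratch* back to **$WORK** even when a job step fails. This is a sketch, not an HCC-provided feature; temporary directories again stand in for **/scratch** and **$WORK** so it runs anywhere:

```shell
#!/bin/bash
SCRATCH=$(mktemp -d)   # stands in for /scratch on the compute node
WORK=$(mktemp -d)      # stands in for $WORK
# Run the job steps in a subshell whose EXIT trap copies results back
# to $WORK no matter how the steps end -- success or failure.
(
  trap 'cp -r "$SCRATCH"/. "$WORK"/' EXIT
  cd "$SCRATCH"
  echo "partial result" > output.txt   # stand-in for real program output
  exit 1                               # simulate a failing job step
) || echo "a job step failed, but the partial output was still copied"
```

Note this only helps while the batch script itself is still running; once the SLURM job ends, the data in *scratch* is deleted regardless.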
{{% notice note %}}
Using *scratch* is especially recommended for many bioinformatics applications (such as BLAST, GATK, Trinity)
that perform many rapid input/output operations, which can otherwise strain the shared file system on the cluster.