
HCC-DOCS : Using Singularity
============================

Created by <span class="author"> Adam Caprez</span>, last modified on
Apr 17, 2018

<a href="http://singularity.lbl.gov" class="external-link">Singularity</a>
is a containerization solution designed for high-performance computing
cluster environments.  It allows a user on an HPC resource to run an
application using a different operating system than the one provided by
the cluster.  For example, the application may require Ubuntu but the
cluster OS is CentOS.  Conceptually, it is similar to other container
software such as Docker, but is designed with several important
differences that make it more suited for HPC environments.  

-   Encapsulation of the environment
-   Containers are image based
-   No user contextual changes or root escalation allowed
-   No root owned daemon processes

To use Singularity on HCC machines, first load the `singularity` module.
Singularity provides a few different ways to access the container.  The
most common is to use the `exec` command to run a specific command
within the container; alternatively, the `shell` command launches a bash
shell for working interactively.  Both commands take the source of the
image to run as the first argument; `exec` takes an additional argument
for the command to run within the container.  Singularity can run images
from a variety of sources, including both flat image files and Docker
images from Docker Hub.  For convenience, HCC provides a set of images on
<a href="https://hub.docker.com/u/unlhcc/dashboard/" class="external-link">Docker Hub</a>
known to work on HCC resources.  Finally, pass any arguments for the
program itself in the same manner as you would if running it directly.
 For example, the Spades Assembler software is run using the Docker
image `unlhcc/spades` via the command `spades.py`.  To run the software
using Singularity, the commands are:

**Run Spades using Singularity**

``` syntaxhighlighter-pre
module load singularity
singularity exec docker://unlhcc/spades spades.py <spades arguments>
```
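The `shell` command is used the same way. As a sketch, to work interactively inside the same Spades image (these commands assume a session on an HCC cluster with the `singularity` module available):

``` syntaxhighlighter-pre
module load singularity
singularity shell docker://unlhcc/spades
```

Once inside, run `spades.py` and other commands at the container prompt, and type `exit` to leave the container.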

Using Singularity in a SLURM job is the same as with any other software.

**Example Singularity SLURM script**

``` syntaxhighlighter-pre
#!/bin/sh
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out

module load singularity
singularity exec docker://unlhcc/spades spades.py <spades arguments>
```

### Available Images

The following table lists the currently available images and the command
to run the software.

**Request additional images**

If you would like to request an image to be added, please fill out the
HCC
<a href="http://hcc.unl.edu/software-installation-request" class="external-link">Software Request Form</a>
and indicate you would like to use Singularity.

| Software                       | Version        | Command to Run                                                                                                              | Additional Notes                                                                                                                                         |
|--------------------------------|----------------|-----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| DREAM3D                        | 6.3.29, 6.5.36 | `singularity exec docker://unlhcc/dream3d PipelineRunner`                                                                   |                                                                                                                                                          |
| Spades                         | 3.11.0         | `singularity exec docker://unlhcc/spades spades.py`                                                                         |                                                                                                                                                          |
| Macaulay2                      | 1.9.2          | `singularity exec docker://unlhcc/macaulay2 M2`                                                                             |                                                                                                                                                          |
| CUDA (Ubuntu)                  | 8.0            | `singularity exec docker://unlhcc/cuda-ubuntu <my CUDA program>`                                                            | Ubuntu 16.04.1 LTS w/CUDA 8.0                                                                                                                            |
| TensorFlow GPU                 | 1.4            | `singularity exec docker://unlhcc/tensorflow-gpu python /path/to/my_tf_code.py`                                             | Use `python3` for Python3 code                                                                                                                           |
| Keras w/Tensorflow GPU backend | 2.0.4, 2.1.5   | `singularity exec docker://unlhcc/keras-tensorflow-gpu python /path/to/my_keras_code.py`                                    | Use `python3` for Python3 code                                                                                                                           |
| Octave                         | 4.2.1          | `singularity exec docker://unlhcc/octave octave`                                                                            |                                                                                                                                                          |
| Sonnet GPU                     | 1.13           | `singularity exec docker://unlhcc/sonnet-gpu python /path/to/my_sonnet_code.py`                                             | Use `python3` for Python3 code                                                                                                                           |
| Neurodocker w/ANTs             | 2.2.0          | `singularity exec docker://unlhcc/neurodocker-ants <ants script>`                                                           | Replace `<ants script>` with the desired ANTs program                                                                                                    |
| GNU Radio                      | 3.7.11         | `singularity exec docker://unlhcc/gnuradio python /path/to/my_gnuradio_code.py`                                             | Replace `python /path/to/my_gnuradio_code.py` with other GNU Radio commands to run                                                                       |
| Neurodocker w/AFNI             | 17.3.00        | `singularity exec docker://unlhcc/neurodocker-afni <AFNI program>`                                                          | Replace `<AFNI program>` with the desired AFNI program                                                                                                   |
| Neurodocker w/FreeSurfer       | 6.0.0          | `singularity run -B <path to your FS license>:/opt/freesurfer/license.txt docker://unlhcc/neurodocker-freesurfer recon-all` | Substitute `<path to your FS license>` with the full path to your particular FS license file. Replace `recon-all` with other FreeSurfer commands to run. |
| fMRIprep                       | 1.0.7          | `singularity exec docker://unlhcc/fmriprep fmriprep`                                                                        |                                                                                                                                                          |
| ndmg                           | 0.0.50         | `singularity exec docker://unlhcc/ndmg ndmg_bids`                                                                           |                                                                                                                                                          |
| NIPYPE (Python2)               | 1.0.0          | `singularity exec docker://unlhcc/nipype-py27 <NIPYPE program>`                                                             | Replace `<NIPYPE program>` with the desired NIPYPE program                                                                                               |
| NIPYPE (Python3)               | 1.0.0          | `singularity exec docker://unlhcc/nipype-py36 <NIPYPE program>`                                                             | Replace `<NIPYPE program>` with the desired NIPYPE program                                                                                               |
| DPARSF                         | 4.3.12         | `singularity exec docker://unlhcc/dparsf <DPARSF program>`                                                                  | Replace `<DPARSF program>` with the desired DPARSF program                                                                                               |
| Caffe GPU                      | 1.0            | `singularity exec docker://unlhcc/caffe-gpu caffe`                                                                          |                                                                                                                                                          |
| ENet Caffe GPU                 | 427a014        | `singularity exec docker://unlhcc/enet-caffe-gpu <ENET program>`                                                            | Replace `<ENET program>` with the desired ENET program                                                                                                   |
| ROS Kinetic                    | 1.3.1          | `singularity exec docker://unlhcc/ros-kinetic <ROS program>`                                                                | Replace `<ROS program>` with the desired ROS program                                                                                                     |
| Mitsuba                        | 1.5.0          | `singularity exec docker://unlhcc/mitsuba mitsuba`                                                                          |                                                                                                                                                          |
| FImpute                        | 2.2            | `singularity exec docker://unlhcc/fimpute FImpute <control file>`                                                           | Replace `<control file>` with the control file you have prepared                                                                                         |
| Neurodocker w/FSL              | 5.0.11         | `singularity run docker://unlhcc/neurodocker-fsl <FSL program>`                                                             | Replace `<FSL program>` with the desired FSL program. This image includes GPU support.                                                                   |

### What if I need other Python packages not in the image?

Unfortunately, for logistical reasons it's not possible to create one
image that has every available Python package installed.  Images are
created with a small set of the most commonly used scientific packages,
but you may need others.  If so, you can install them in a location in
your `$WORK` directory and set the `PYTHONPATH` variable to that
location in your submit script.  The extra packages will then be "seen"
by the Python interpreter within the image.  To ensure the packages will
work, the install must be done from within the container via
the `singularity shell` command.  For example, suppose you are using
the `tensorflow-gpu` image and need the packages `nibabel` and `tables`.
 First, run an interactive SLURM job to get a shell on a worker node.

**Run an interactive SLURM job**

``` syntaxhighlighter-pre
srun --pty --mem=4gb --qos=short $SHELL
```

After the job starts, the prompt will change to indicate you're on a
worker node.  Next, start an interactive session in the container.

**Start a shell in the container**

``` syntaxhighlighter-pre
module load singularity
singularity shell docker://unlhcc/tensorflow-gpu
```

This may take a few minutes to start.  Again, the prompt will change and
begin with `Singularity` to indicate you're within the container.

Next, install the needed packages via `pip` to a location somewhere in
your `$WORK` directory, for example `$WORK/tf-gpu-pkgs`.  (If you are
using Python 3, use `pip3` instead of `pip`.)

**Install needed Python packages with pip**

``` syntaxhighlighter-pre
export LC_ALL=C
pip install --system --target=$WORK/tf-gpu-pkgs --install-option="--install-scripts=$WORK/tf-gpu-pkgs/bin" nibabel tables
```

You should see some progress indicators and a
"`Successfully installed...`" message at the end.  Exit both the
container and the interactive SLURM job by typing `exit` twice.  The
above steps only need to be done once for each image you need additional
packages for.  Be sure to use a separate location for each image's
extra packages.

To make the packages visible within the container, you'll need to add a
line to the submit script used for your Singularity job.  Before the
lines to load the `singularity` module and run the script, add a line
setting the `PYTHONPATH` variable to the `$WORK/tf-gpu-pkgs` directory.
 For example,

**Example SLURM script**

``` syntaxhighlighter-pre
#!/bin/sh
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --partition=gpu
#SBATCH --gres=gpu
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
 
export PYTHONPATH=$WORK/tf-gpu-pkgs
module load singularity
singularity exec docker://unlhcc/tensorflow-gpu python /path/to/my_tf_code.py
```

The additional packages should then be available for use by your Python
code running within the container.

### What if I need a specific software version of the Singularity image?

You can see all the available versions of the software built with
Singularity in the table above. If you don't specify a specific software
version, Singularity will use the latest one. If you want to use a
specific version instead, you can append the version number from the
table to the image. For example, if you want to use the Singularity
image for Spades version 3.11.0, type:

``` syntaxhighlighter-pre
singularity exec docker://unlhcc/spades:3.11.0 spades.py
```

### What if I want to build a custom image to use on the HCC clusters?

You can create a custom Docker image and use it with Singularity on our
clusters. Singularity can run images directly from Docker Hub, so you
don't need to upload anything to HCC. For this purpose, you just need to
have a Docker Hub account and upload your image there. Then, if you want
to run the command "*mycommand*" from the image "*myimage*", type:

``` syntaxhighlighter-pre
module load singularity
singularity exec docker://myaccount/myimage mycommand
```

where "*myaccount*" is your Docker Hub account.

If you see the error "ERROR MANIFEST\_INVALID: manifest invalid" when
running the command above, try:

``` syntaxhighlighter-pre
module load singularity
unset REGISTRY
singularity exec docker://myaccount/myimage mycommand
```

All the Dockerfiles of the images we host on HCC are
<a href="https://github.com/unlhcc/singularity-dockerfiles" class="external-link">publicly available here</a>.
You can use them as an example when creating your own image. The only
thing you need to note when creating custom Docker images you want to
use on HCC is to add the line `RUN mkdir -p /work` at the end of your
Dockerfile. This creates a "*/work*" directory inside your image so your
"*/work*" directory on Crane/Tusker is available.
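A minimal Dockerfile following this pattern might look like the sketch below (the base image and installed package are illustrative, not a requirement):

``` syntaxhighlighter-pre
FROM ubuntu:16.04

# Install your software as usual
RUN apt-get update && apt-get install -y build-essential

# Required for HCC: create /work so the cluster's /work filesystem
# is available inside the container
RUN mkdir -p /work
```

Build and push this to your Docker Hub account as usual, then run it on the clusters with `singularity exec docker://myaccount/myimage mycommand`.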
