+++
title = "MPI Jobs on HCC"
description = "How to compile and run MPI programs on HCC machines"
weight = "52"
+++

This quick start demonstrates how to implement a parallel (MPI)
Fortran/C program on HCC supercomputers. The sample codes and submit
scripts can be downloaded from [mpi_dir.zip](/attachments/mpi_dir.zip).

#### Log In to an HCC Cluster

Log in to an HCC cluster through PuTTY ([For Windows Users]({{< relref "/Quickstarts/connecting/for_windows_users">}})) or Terminal ([For Mac/Linux
Users]({{< relref "/Quickstarts/connecting/for_maclinux_users">}})) and make a subdirectory called `mpi_dir` under the `$WORK` directory.

{{< highlight bash >}}
$ cd $WORK
$ mkdir mpi_dir
{{< /highlight >}}
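
Alternatively, if you have copied the downloaded [mpi_dir.zip](/attachments/mpi_dir.zip) archive to the cluster (for example with `scp`), you can unpack the sample codes and submit scripts instead of typing them in. This is a sketch that assumes the archive was placed in `$WORK` and that the `unzip` utility is available:

{{< highlight bash >}}
$ cd $WORK/mpi_dir
$ unzip ../mpi_dir.zip   # extracts the archive contents here (layout depends on the archive)
{{< /highlight >}}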

In the subdirectory `mpi_dir`, save all the relevant codes. Here we
include two demo programs, `demo_f_mpi.f90` and `demo_c_mpi.c`, that
compute the sum from 1 to 20 using parallel processes. A
straightforward parallelization scheme is used for demonstration
purposes. First, the master core (i.e. `myid=0`) distributes an equal
computational workload to a certain number of cores (as specified by
`--ntasks` in the submit script). Then, each worker core computes a
partial summation. Finally, the master core collects the partial sums
from all worker cores and performs the overall summation. Note that this
simple scheme assumes the problem size is divisible by the number of tasks;
here, with N=20 and `--ntasks=5`, each core handles N_local = 4 elements. For easy
comparison with the serial code ([Fortran/C on HCC]({{< relref "fortran_c_on_hcc">}})), the
added lines in the parallel code (MPI) are marked with "!=" or "//=".

{{%expand "demo_f_mpi.f90" %}}
{{< highlight fortran >}}
Program demo_f_mpi
!====== MPI =====
    use mpi     
!================
    implicit none
    integer, parameter :: N = 20
    real*8 w
    integer i
    common/sol/ x
    real*8 x
    real*8, dimension(N) :: y 
!============================== MPI =================================
    integer ind
    real*8, dimension(:), allocatable :: y_local                    
    integer numnodes,myid,rc,ierr,start_local,end_local,N_local     
    real*8 allsum                                                   
!====================================================================
    
!============================== MPI =================================
    call mpi_init( ierr )                                           
    call mpi_comm_rank ( mpi_comm_world, myid, ierr )               
    call mpi_comm_size ( mpi_comm_world, numnodes, ierr )           
    N_local = N/numnodes                                            
    allocate ( y_local(N_local) )                                   
    start_local = N_local*myid + 1                                  
    end_local =  N_local*myid + N_local                             
!====================================================================
    do i = start_local, end_local
        w = i*1d0
        call proc(w)
        ind = i - N_local*myid
        y_local(ind) = x
!       y(i) = x
!       write(6,*) 'i, y(i)', i, y(i)
    enddo   
!       write(6,*) 'sum(y) =',sum(y)
!============================================== MPI =====================================================
    call mpi_reduce( sum(y_local), allsum, 1, mpi_real8, mpi_sum, 0, mpi_comm_world, ierr )             
    call mpi_gather ( y_local, N_local, mpi_real8, y, N_local, mpi_real8, 0, mpi_comm_world, ierr )     
                                                                                                        
    if (myid == 0) then                                                                                 
        write(6,*) '-----------------------------------------'                                          
        write(6,*) '*Final output from... myid=', myid                                                  
        write(6,*) 'numnodes =', numnodes                                                               
        write(6,*) 'mpi_sum =', allsum  
        write(6,*) 'y=...'
        do i = 1, N
            write(6,*) y(i)
        enddo                                                                                       
        write(6,*) 'sum(y)=', sum(y)                                                                
    endif                                                                                               
                                                                                                        
    deallocate( y_local )                                                                               
    call mpi_finalize(rc)                                                                               
!========================================================================================================
    
Stop
End Program
Subroutine proc(w)
    real*8, intent(in) :: w
    common/sol/ x
    real*8 x
    
    x = w
    
Return
End Subroutine
{{< /highlight >}}
{{% /expand %}}

{{%expand "demo_c_mpi.c" %}}
{{< highlight c >}}
//demo_c_mpi
#include <stdio.h>
//======= MPI ========
#include "mpi.h"    
#include <stdlib.h>   
//====================

double proc(double w){
        double x;       
        x = w;  
        return x;
}

int main(int argc, char* argv[]){
    int N=20;
    double w;
    int i;
    double x;
    double y[N];
    double sum;
//=============================== MPI ============================
    int ind;                                                    
    double *y_local;                                            
    int numnodes,myid,rc,ierr,start_local,end_local,N_local;    
    double allsum;                                              
//================================================================
//=============================== MPI ============================
    MPI_Init(&argc, &argv);
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );
    MPI_Comm_size ( MPI_COMM_WORLD, &numnodes );
    N_local = N/numnodes;
    y_local=(double *) malloc(N_local*sizeof(double));
    start_local = N_local*myid + 1;
    end_local = N_local*myid + N_local;
//================================================================
    
    for (i = start_local; i <= end_local; i++){        
        w = i*1e0;
        x = proc(w);
        ind = i - N_local*myid;
        y_local[ind-1] = x;
//      y[i-1] = x;
//      printf("i,x= %d %lf\n", i, y[i-1]) ;
    }
    sum = 0e0;
    for (i = 1; i<= N_local; i++){
        sum = sum + y_local[i-1];   
    }
//  printf("sum(y)= %lf\n", sum);    
//====================================== MPI ===========================================
    MPI_Reduce( &sum, &allsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );
    MPI_Gather( &y_local[0], N_local, MPI_DOUBLE, &y[0], N_local, MPI_DOUBLE, 0, MPI_COMM_WORLD );
    
    if (myid == 0){
    printf("-----------------------------------\n");
    printf("*Final output from... myid= %d\n", myid);
    printf("numnodes = %d\n", numnodes);
    printf("mpi_sum = %lf\n", allsum);
    printf("y=...\n");
    for (i = 1; i <= N; i++){
        printf("%lf\n", y[i-1]);
    }   
    sum = 0e0;
    for (i = 1; i<= N; i++){
        sum = sum + y[i-1]; 
    }
    
    printf("sum(y) = %lf\n", sum);
    
    }
    
    free( y_local );
    MPI_Finalize ();
//======================================================================================        

return 0;
}
{{< /highlight >}}
{{% /expand %}}

---

#### Compiling the Code

Compiling an MPI code requires first loading a compiler "engine"
such as `gcc`, `intel`, or `pgi`, and then loading the MPI wrapper
`openmpi`. Here we will use the GNU Compiler Collection, `gcc`, for
demonstration.

{{< highlight bash >}}
$ module load compiler/gcc/6.1 openmpi/2.1
$ mpif90 demo_f_mpi.f90 -o demo_f_mpi.x  
$ mpicc demo_c_mpi.c -o demo_c_mpi.x
{{< /highlight >}}

The above commands load the `gcc` compiler together with the `openmpi`
wrapper. The wrapper commands `mpif90` and `mpicc` then compile the codes
into `.x` files (executables).
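
If you want to confirm which underlying compiler and flags the wrappers use, Open MPI's wrapper compilers accept the `--showme` option, which prints the full compile command without running it (a quick sanity check; the exact output depends on the modules loaded):

{{< highlight bash >}}
$ mpicc --showme
$ mpif90 --showme
{{< /highlight >}}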

#### Creating a Submit Script

Create a submit script that requests 5 cores with `--ntasks`. The last
line of the script launches the executable with the parallel execution
command `mpirun` followed by the program name.

{{% panel header="`submit_f.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=Fortran
#SBATCH --error=Fortran.%J.err
#SBATCH --output=Fortran.%J.out

mpirun ./demo_f_mpi.x 
{{< /highlight >}}
{{% /panel %}}

{{% panel header="`submit_c.mpi`"%}}
{{< highlight bash >}}
#!/bin/sh
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=C
#SBATCH --error=C.%J.err
#SBATCH --output=C.%J.out

mpirun ./demo_c_mpi.x 
{{< /highlight >}}
{{% /panel %}}
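
Because the scripts run inside a Slurm allocation, `mpirun` from the `openmpi` module normally detects the allocation and starts one MPI rank per task requested with `--ntasks`, so no `-n` flag is needed. If you prefer to state the rank count explicitly, an equivalent last line is sketched below (it assumes Open MPI was built with Slurm support and uses the standard `SLURM_NTASKS` environment variable):

{{< highlight bash >}}
# launch exactly as many ranks as requested with --ntasks
mpirun -n $SLURM_NTASKS ./demo_f_mpi.x
{{< /highlight >}}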

#### Submit the Job

The job can be submitted with the command `sbatch`. The job status
can be monitored with `squeue` and its `-u` option.

{{< highlight bash >}}
$ sbatch submit_f.mpi
$ sbatch submit_c.mpi
$ squeue -u <username>
{{< /highlight >}}

Replace `<username>` with your HCC username.
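
After a job finishes, its standard output appears in the file named by the `--output` line of the submit script (`Fortran.%J.out` or `C.%J.out`, where `%J` is replaced by the job ID). For example, to view the results, replace `<jobid>` below with the job ID reported by `sbatch`:

{{< highlight bash >}}
$ cat Fortran.<jobid>.out
$ cat C.<jobid>.out
{{< /highlight >}}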

#### Sample Output

The sum from 1 to 20 (1 + 2 + ... + 20 = 210) is computed and printed to
the `.out` file (see below). The outputs from the 5 cores are collected
and processed by the master core (i.e. `myid=0`).

{{%expand "Fortran.out" %}}
{{< highlight batchfile>}}
 -----------------------------------------
 *Final output from... myid=           0
 numnodes =           5
 mpi_sum =   210.00000000000000     
 y=...
   1.0000000000000000     
   2.0000000000000000     
   3.0000000000000000     
   4.0000000000000000     
   5.0000000000000000     
   6.0000000000000000     
   7.0000000000000000     
   8.0000000000000000     
   9.0000000000000000     
   10.000000000000000     
   11.000000000000000     
   12.000000000000000     
   13.000000000000000     
   14.000000000000000     
   15.000000000000000     
   16.000000000000000     
   17.000000000000000     
   18.000000000000000     
   19.000000000000000     
   20.000000000000000     
 sum(y)=   210.00000000000000     
{{< /highlight >}}
{{% /expand %}} 

{{%expand "C.out" %}}
{{< highlight batchfile>}}
-----------------------------------
*Final output from... myid= 0
numnodes = 5
mpi_sum = 210.000000
y=...
1.000000
2.000000
3.000000
4.000000
5.000000
6.000000
7.000000
8.000000
9.000000
10.000000
11.000000
12.000000
13.000000
14.000000
15.000000
16.000000
17.000000
18.000000
19.000000
20.000000
sum(y) = 210.000000
{{< /highlight >}}
{{% /expand %}}