Commit d2eb12ae authored by Adam Caprez's avatar Adam Caprez

Merge branch 'runningSAS' into 'master'

Running SAS on HCC

See merge request !252
parents 5f03b318 1e0c8dcb
......@@ -75,18 +75,19 @@ Loss]({{< relref "preventing_file_loss" >}}).
**If you have not activated Duo before:**
Please stop by
[our offices](http://hcc.unl.edu/location)
along with a photo ID and we will be happy to activate it for you. If
you are not local to Omaha or Lincoln, contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
and we will help you activate Duo remotely.
Please ~~stop by
[our offices](http://hcc.unl.edu/location)~~
join our [Remote Open Office hours](https://hcc.unl.edu/OOH) or schedule a remote
session at [hcc-support@unl.edu](mailto:hcc-support@unl.edu), show your photo ID, and we will be happy to activate Duo for you.
**If you have activated Duo previously but now have a different phone
number:**
Stop by our offices along with a photo ID and we can help you reactivate
Duo and update your account with your new phone number.
~~Stop by our offices along with a photo ID and we can help you reactivate
Duo and update your account with your new phone number.~~
Join our [Remote Open Office hours](https://hcc.unl.edu/OOH) or schedule a remote
session at [hcc-support@unl.edu](mailto:hcc-support@unl.edu), show your photo ID, and we will help you reactivate Duo and update your account with your new phone number.
**If you have activated Duo previously and have the same phone number:**
......@@ -166,12 +167,12 @@ for additional assistance.
#### I want to talk to a human about my problem. Can I do that?
Of course! We have an open door policy and invite you to stop by
Of course! We have an open door policy and invite you to ~~stop by
[either of our offices](http://hcc.unl.edu/location)
anytime Monday through Friday between 9 am and 5 pm. One of the HCC
staff would be happy to help you with whatever problem or question you
have.  Alternatively, you can drop one of us a line and we'll arrange a
time to meet: [Contact Us](https://hcc.unl.edu/contact-us).
have.~~ join our [Remote Open Office hours](https://hcc.unl.edu/OOH), schedule a remote
session at [hcc-support@unl.edu](mailto:hcc-support@unl.edu), or drop one of us a line and we'll arrange a time to meet: [Contact Us](https://hcc.unl.edu/contact-us).
#### My submitted job spends a long time waiting in the queue, or is not running. Why?
If your submitted jobs are waiting in the queue for a long time, it usually means your account has been over-utilizing resources and your fairshare score is low. This may be due to submitting a large number of jobs over the recent period, and/or requesting a large amount of resources (memory, time) for your jobs.
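One way to inspect your fairshare standing (assuming Slurm's `sshare` utility is available on the cluster, which is the usual case on Slurm systems) is:

{{< highlight bash >}}
# Show fairshare information for your account (replace <username>)
$ sshare -u <username>
{{< /highlight >}}

A low `FairShare` value relative to other accounts indicates lower scheduling priority for new jobs.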
......
......@@ -30,5 +30,4 @@ your account is active.
Once the above steps are complete, your account is now active and you are ready to
[connect to HCC resources]({{< ref "/connecting" >}}) and
[begin submitting jobs]({{< ref "/submitting_jobs" >}}). If you
have any questions or would like to setup a consultation meeting, please [contact us]
({{< ref "/contact_us" >}}).
have any questions or would like to setup a consultation meeting, please [contact us]({{< relref "/contact_us" >}}).
......@@ -21,72 +21,9 @@ $ mkdir serial_dir
In the subdirectory `serial_dir`, save all the relevant Fortran/C codes. Here we include two demo
programs, `demo_f_serial.f90` and `demo_c_serial.c`, that compute the sum from 1 to 20. 
{{%expand "demo_f_serial.f90" %}}
{{< highlight fortran >}}
Program demo_f_serial
    implicit none
    integer, parameter :: N = 20
    real*8 w
    integer i
    common/sol/ x
    real*8 x
    real*8, dimension(N) :: y

    do i = 1,N
        w = i*1d0
        call proc(w)
        y(i) = x
        write(6,*) 'i,x = ', i, y(i)
    enddo
    write(6,*) 'sum(y) =', sum(y)
Stop
End Program

Subroutine proc(w)
    real*8, intent(in) :: w
    common/sol/ x
    real*8 x
    x = w
Return
End Subroutine
{{< /highlight >}}
{{% /expand %}}
{{%expand "demo_c_serial.c" %}}
{{< highlight c >}}
//demo_c_serial
#include <stdio.h>

double proc(double w){
    double x;
    x = w;
    return x;
}

int main(int argc, char* argv[]){
    int N = 20;
    double w;
    int i;
    double x;
    double y[N];
    double sum;

    for (i = 1; i <= N; i++){
        w = i*1e0;
        x = proc(w);
        y[i-1] = x;
        printf("i,x= %d %lf\n", i, y[i-1]);
    }
    sum = 0e0;
    for (i = 1; i <= N; i++){
        sum = sum + y[i-1];
    }
    printf("sum(y)= %lf\n", sum);

    return 0;
}
{{< /highlight >}}
{{% /expand %}}
[demo_c_serial.c](https://raw.githubusercontent.com/unlhcc/job-examples/master/C/demo_c_serial.c)
[demo_f_serial.f90](https://raw.githubusercontent.com/unlhcc/job-examples/master/fortran/demo_f_serial.f90)
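The submit scripts below run pre-built executables (`demo_f_serial.x`, `demo_c_serial.x`). As a sketch, assuming the GCC module shown in the scripts provides `gfortran` and `gcc`, the executables could be built with:

{{< highlight bash >}}
$ module load compiler/gcc/4.9
$ gfortran demo_f_serial.f90 -o demo_f_serial.x
$ gcc demo_c_serial.c -o demo_c_serial.x
{{< /highlight >}}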
---
......@@ -121,33 +58,10 @@ Create a submit script to request one core (default) and 1-min run time
on the supercomputer. The name of the main program is entered on the last
line.
{{% panel header="`submit_f.serial`"%}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=Fortran
#SBATCH --error=Fortran.%J.err
#SBATCH --output=Fortran.%J.out
module load compiler/gcc/4.9
./demo_f_serial.x
{{< /highlight >}}
{{% /panel %}}
[submit_f.serial](https://raw.githubusercontent.com/unlhcc/job-examples/master/fortran/submit_f.serial)
[submit_c.serial](https://raw.githubusercontent.com/unlhcc/job-examples/master/C/submit_c.serial)
{{% panel header="`submit_c.serial`"%}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=C
#SBATCH --error=C.%J.err
#SBATCH --output=C.%J.out
module load compiler/gcc/4.9
./demo_c_serial.x
{{< /highlight >}}
{{% /panel %}}
#### Submit the Job
......@@ -164,56 +78,4 @@ Replace `<username>` with your HCC username.
#### Sample Output
The sum from 1 to 20 is computed and printed to the `.out` file (see
below). 
{{%expand "Fortran.out" %}}
{{< highlight batchfile>}}
i,x = 1 1.0000000000000000
i,x = 2 2.0000000000000000
i,x = 3 3.0000000000000000
i,x = 4 4.0000000000000000
i,x = 5 5.0000000000000000
i,x = 6 6.0000000000000000
i,x = 7 7.0000000000000000
i,x = 8 8.0000000000000000
i,x = 9 9.0000000000000000
i,x = 10 10.000000000000000
i,x = 11 11.000000000000000
i,x = 12 12.000000000000000
i,x = 13 13.000000000000000
i,x = 14 14.000000000000000
i,x = 15 15.000000000000000
i,x = 16 16.000000000000000
i,x = 17 17.000000000000000
i,x = 18 18.000000000000000
i,x = 19 19.000000000000000
i,x = 20 20.000000000000000
sum(y) = 210.00000000000000
{{< /highlight >}}
{{% /expand %}}
{{%expand "C.out" %}}
{{< highlight batchfile>}}
i,x= 1 1.000000
i,x= 2 2.000000
i,x= 3 3.000000
i,x= 4 4.000000
i,x= 5 5.000000
i,x= 6 6.000000
i,x= 7 7.000000
i,x= 8 8.000000
i,x= 9 9.000000
i,x= 10 10.000000
i,x= 11 11.000000
i,x= 12 12.000000
i,x= 13 13.000000
i,x= 14 14.000000
i,x= 15 15.000000
i,x= 16 16.000000
i,x= 17 17.000000
i,x= 18 18.000000
i,x= 19 19.000000
i,x= 20 20.000000
sum(y)= 210.000000
{{< /highlight >}}
{{% /expand %}}
The sum from 1 to 20 is computed and printed to the `.out` files.
......@@ -30,160 +30,9 @@ outputs from all worker cores and perform an overall summation. For easy
comparison with the serial code ([Fortran/C on HCC]({{< relref "fortran_c_on_hcc">}})), the
added lines in the parallel code (MPI) are marked with "!=" or "//=".
{{%expand "demo_f_mpi.f90" %}}
{{< highlight fortran >}}
Program demo_f_mpi
!====== MPI =====
    use mpi
!================
    implicit none
    integer, parameter :: N = 20
    real*8 w
    integer i
    common/sol/ x
    real*8 x
    real*8, dimension(N) :: y
!============================== MPI =================================
    integer ind
    real*8, dimension(:), allocatable :: y_local
    integer numnodes,myid,rc,ierr,start_local,end_local,N_local
    real*8 allsum
!====================================================================

!============================== MPI =================================
    call mpi_init( ierr )
    call mpi_comm_rank ( mpi_comm_world, myid, ierr )
    call mpi_comm_size ( mpi_comm_world, numnodes, ierr )
!
    N_local = N/numnodes
    allocate ( y_local(N_local) )
    start_local = N_local*myid + 1
    end_local   = N_local*myid + N_local
!====================================================================
    do i = start_local, end_local
        w = i*1d0
        call proc(w)
        ind = i - N_local*myid
        y_local(ind) = x
!       y(i) = x
!       write(6,*) 'i, y(i)', i, y(i)
    enddo
!   write(6,*) 'sum(y) =', sum(y)
!============================================== MPI =====================================================
    call mpi_reduce( sum(y_local), allsum, 1, mpi_real8, mpi_sum, 0, mpi_comm_world, ierr )
    call mpi_gather ( y_local, N_local, mpi_real8, y, N_local, mpi_real8, 0, mpi_comm_world, ierr )

    if (myid == 0) then
        write(6,*) '-----------------------------------------'
        write(6,*) '*Final output from... myid=', myid
        write(6,*) 'numnodes =', numnodes
        write(6,*) 'mpi_sum =', allsum
        write(6,*) 'y=...'
        do i = 1, N
            write(6,*) y(i)
        enddo
        write(6,*) 'sum(y)=', sum(y)
    endif

    deallocate( y_local )
    call mpi_finalize(rc)
!========================================================================================================
Stop
End Program

Subroutine proc(w)
    real*8, intent(in) :: w
    common/sol/ x
    real*8 x
    x = w
Return
End Subroutine
{{< /highlight >}}
{{% /expand %}}
{{%expand "demo_c_mpi.c" %}}
{{< highlight c >}}
//demo_c_mpi
#include <stdio.h>
//======= MPI ========
#include "mpi.h"
#include <stdlib.h>
//====================
double proc(double w){
    double x;
    x = w;
    return x;
}

int main(int argc, char* argv[]){
    int N = 20;
    double w;
    int i;
    double x;
    double y[N];
    double sum;
//=============================== MPI ============================
    int ind;
    double *y_local;
    int numnodes,myid,rc,ierr,start_local,end_local,N_local;
    double allsum;
//================================================================

//=============================== MPI ============================
    MPI_Init(&argc, &argv);
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );
    MPI_Comm_size( MPI_COMM_WORLD, &numnodes );

    N_local = N/numnodes;
    y_local = (double *) malloc(N_local*sizeof(double));
    start_local = N_local*myid + 1;
    end_local   = N_local*myid + N_local;
//================================================================
    for (i = start_local; i <= end_local; i++){
        w = i*1e0;
        x = proc(w);
        ind = i - N_local*myid;
        y_local[ind-1] = x;
//      y[i-1] = x;
//      printf("i,x= %d %lf\n", i, y[i-1]);
    }
    sum = 0e0;
    for (i = 1; i <= N_local; i++){
        sum = sum + y_local[i-1];
    }
//  printf("sum(y)= %lf\n", sum);
//====================================== MPI ===========================================
    MPI_Reduce( &sum, &allsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );
    MPI_Gather( &y_local[0], N_local, MPI_DOUBLE, &y[0], N_local, MPI_DOUBLE, 0, MPI_COMM_WORLD );

    if (myid == 0){
        printf("-----------------------------------\n");
        printf("*Final output from... myid= %d\n", myid);
        printf("numnodes = %d\n", numnodes);
        printf("mpi_sum = %lf\n", allsum);
        printf("y=...\n");
        for (i = 1; i <= N; i++){
            printf("%lf\n", y[i-1]);
        }
        sum = 0e0;
        for (i = 1; i <= N; i++){
            sum = sum + y[i-1];
        }
        printf("sum(y) = %lf\n", sum);
    }

    free( y_local );
    MPI_Finalize();
//======================================================================================
    return 0;
}
{{< /highlight >}}
{{% /expand %}}
[demo_f_mpi.f90](https://raw.githubusercontent.com/unlhcc/job-examples/master/fortran/demo_f_mpi.f90)
[demo_c_mpi.c](https://raw.githubusercontent.com/unlhcc/job-examples/master/C/demo_c_mpi.c)
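As in the serial case, the submit scripts below run pre-built executables. As a sketch, assuming an MPI module supplying the standard wrapper compilers is loaded, they could be built with:

{{< highlight bash >}}
$ mpif90 demo_f_mpi.f90 -o demo_f_mpi.x
$ mpicc demo_c_mpi.c -o demo_c_mpi.x
{{< /highlight >}}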
---
......@@ -210,33 +59,10 @@ Create a submit script to request 5 cores (with `--ntasks`). A parallel
execution command, `mpirun`, must be entered on the last line before the
main program name.
{{% panel header="`submit_f.mpi`"%}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=Fortran
#SBATCH --error=Fortran.%J.err
#SBATCH --output=Fortran.%J.out
mpirun ./demo_f_mpi.x
{{< /highlight >}}
{{% /panel %}}
[submit_f.mpi](https://raw.githubusercontent.com/unlhcc/job-examples/master/fortran/submit_f.mpi)
[submit_c.mpi](https://raw.githubusercontent.com/unlhcc/job-examples/master/C/submit_c.mpi)
{{% panel header="`submit_c.mpi`"%}}
{{< highlight bash >}}
#!/bin/bash
#SBATCH --ntasks=5
#SBATCH --mem-per-cpu=1024
#SBATCH --time=00:01:00
#SBATCH --job-name=C
#SBATCH --error=C.%J.err
#SBATCH --output=C.%J.out
mpirun ./demo_c_mpi.x
{{< /highlight >}}
{{% /panel %}}
#### Submit the Job
......@@ -254,69 +80,6 @@ Replace `<username>` with your HCC username.
#### Sample Output
The sum from 1 to 20 is computed and printed to the `.out` file (see
below). The outputs from the 5 cores are collected and processed by the
The sum from 1 to 20 is computed and printed to the `.out` files. The outputs from the 5 cores are collected and processed by the
master core (i.e. `myid=0`).
{{%expand "Fortran.out" %}}
{{< highlight batchfile>}}
-----------------------------------------
*Final output from... myid= 0
numnodes = 5
mpi_sum = 210.00000000000000
y=...
1.0000000000000000
2.0000000000000000
3.0000000000000000
4.0000000000000000
5.0000000000000000
6.0000000000000000
7.0000000000000000
8.0000000000000000
9.0000000000000000
10.000000000000000
11.000000000000000
12.000000000000000
13.000000000000000
14.000000000000000
15.000000000000000
16.000000000000000
17.000000000000000
18.000000000000000
19.000000000000000
20.000000000000000
sum(y)= 210.00000000000000
{{< /highlight >}}
{{% /expand %}}
{{%expand "C.out" %}}
{{< highlight batchfile>}}
-----------------------------------
*Final output from... myid= 0
numnodes = 5
mpi_sum = 210.000000
y=...
1.000000
2.000000
3.000000
4.000000
5.000000
6.000000
7.000000
8.000000
9.000000
10.000000
11.000000
12.000000
13.000000
14.000000
15.000000
16.000000
17.000000
18.000000
19.000000
20.000000
sum(y) = 210.000000
{{< /highlight >}}
{{% /expand %}}
+++
title = "Running SAS at HCC"
description = "How to run SAS on HCC resources."
+++
- [Running SAS through the command line](#sas-on-hcc-clusters)
- [Running SAS on JupyterHub](#sas-on-jupyterhub)
- [Running SAS on Anvil](#sas-on-anvil)
This quick start demonstrates how to implement a SAS program on
HCC supercomputers through the command line and JupyterHub, and on HCC's Anvil platform. The sample code and submit scripts can be
downloaded from [HCC's job-examples git repository](https://github.com/unlhcc/job-examples).
## SAS on HCC Clusters
SAS applications can be run on HCC clusters similarly to other jobs.
[Connect to a HCC cluster]({{< relref "../../connecting/" >}}) and make a subdirectory
called `sas_demo` under your `$WORK` directory. 
In the subdirectory `sas_demo`, save the SAS code. Here we include a single demo
program, `t-test.sas`, which performs a t-test analysis on a small data set.
[t_test.sas](https://raw.githubusercontent.com/unlhcc/job-examples/master/sas/t-test.sas)
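The linked script is the authoritative example; as a rough illustration of the shape of a SAS t-test (the dataset and variable names here are hypothetical, not taken from the linked file), a one-sample test looks like:

{{< highlight sas >}}
/* hypothetical one-sample t-test; names are illustrative only */
data demo;
   input value @@;
   datalines;
4.2 3.9 4.5 4.1 4.8 3.7
;
run;

proc ttest data=demo h0=4;
   var value;
run;
{{< /highlight >}}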
---
#### Creating a Submit Script
Create a submit script to request one core (default) and a 10-minute run time
on the supercomputer. The name of the main program is entered on the last
line.
[sas.submit](https://raw.githubusercontent.com/unlhcc/job-examples/master/sas/sas.submit)
#### Submit the Job
The job can be submitted through the command `sbatch`. The job status
can be monitored by entering `squeue` with the `-u` option.
{{< highlight bash >}}
$ sbatch sas.submit
$ squeue -u <username>
{{< /highlight >}}
Replace `<username>` with your HCC username.
#### Sample Output
The results of the t-test are computed and printed to the `.lst` file.
## SAS on JupyterHub
SAS can also be run in the Jupyter notebook environments available through [HCC Open OnDemand]({{< relref "../../open_ondemand/connecting_to_hcc_ondemand/" >}}). [Launch a Jupyter notebook session]({{< relref "../../open_ondemand/virtual_desktop_and_jupyter_notebooks/" >}}). From the `New` dropdown box, select `SAS`.
{{< figure src="/images/jupyterNew.png" >}}
Here you can run code in the notebook's cells. The SAS code is run when you click the "play" icon or press the `Shift` and `Enter` keys simultaneously.
{{< figure src="/images/jupyterCode.png" >}}
## SAS on Anvil
SAS can also be run on a Windows 10 instance on Anvil. This allows SAS scripts to be run within a full GUI environment.
Start by creating a `Windows 10 SAS` instance from the [Anvil dashboard](https://anvil.unl.edu/). [Create an instance]({{< relref "../../anvil/creating_an_instance.md" >}}) using the image labeled `Windows 10 SAS`. Once the instance is fully launched, [connect to the instance]({{< relref "../../anvil/connecting_to_windows_instances.md" >}}) using the retrieved password. After connecting to the instance and logging in, SAS can be launched from the desktop shortcut.
{{< figure src="/images/sasAnvilDesktop.png" height="450" >}}
From here, SAS scripts can be run from the editor at the bottom of the SAS window. Scripts can also be opened from a script file on the Anvil instance.
{{< figure src="/images/sasAnvil.png" height="450" >}}
To execute a script, go to `Run` at the top of the SAS window and click `Submit`. When the script finishes executing, the results will be displayed.
{{< figure src="/images/sasAnvilResults.png" height="450" >}}
\ No newline at end of file
......@@ -6,7 +6,7 @@ weight = "100"
If you have questions, please contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
or stop by one of our locations.
or ~~stop by one of our locations.~~ join one of our [Remote Open Office hours](https://hcc.unl.edu/OOH) or schedule a remote session at [hcc-support@unl.edu](mailto:hcc-support@unl.edu).
| Lincoln | Omaha |
| ----------------------------------------------- | ---------------------------------- |
......
......@@ -31,8 +31,7 @@ depends on Jobs B and C completing.
{{< figure src="/images/4980738.png" width="400" >}}
The SLURM submit files for each step are below.
{{%expand "JobA.submit" %}}
{{% panel theme="info" header="JobA.submit" %}}
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=JobA
......@@ -44,10 +43,10 @@ echo "I'm job A"
echo "Sample job A output" > jobA.out
sleep 120
{{< /highlight >}}
{{% /expand %}}
{{% /panel %}}
{{%expand "JobB.submit" %}}
{{% panel theme="info" header="JobB.submit" %}}
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=JobB
......@@ -62,9 +61,9 @@ echo "" >> jobB.out
echo "Sample job B output" >> jobB.out
sleep 120
{{< /highlight >}}
{{% /expand %}}
{{% /panel %}}
{{%expand "JobC.submit" %}}
{{% panel theme="info" header="JobC.submit" %}}
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=JobC
......@@ -79,9 +78,9 @@ echo "" >> jobC.out
echo "Sample job C output" >> jobC.out
sleep 120
{{< /highlight >}}
{{% /expand %}}
{{% /panel %}}
{{%expand "JobC.submit" %}}
{{% panel theme="info" header="JobD.submit" %}}
{{< highlight batch >}}
#!/bin/bash
#SBATCH --job-name=JobD
......@@ -98,7 +97,7 @@ echo "" >> jobD.out
echo "Sample job D output" >> jobD.out
sleep 120
{{< /highlight >}}
{{% /expand %}}
{{% /panel %}}
To start the workflow, submit Job A first: