
Removed Tusker

Merged Caughlin Bohn requested to merge 24-Remove-Tucker into master
14 files changed: +43 −43

@@ -22,7 +22,7 @@ state-of-the-art supercomputing resources.
 **Logging In**
 ``` syntaxhighlighter-pre
-ssh tusker.unl.edu -l demoXXXX
+ssh crane.unl.edu -l demoXXXX
 ```
 **[Cypwin Link](http://cygwin.com/install.html)**
@@ -49,7 +49,7 @@ two folders, `serial\_f90` and `parallel\_f90`, in this folder
 ``` syntaxhighlighter-pre
 $ ls
-$ scp -r ./demo_code <username>@tusker.unl.edu:/work/demo/<username>
+$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username>
 <enter password>
 ```
@@ -59,7 +59,7 @@ Serial Job
 First, you need to login to the cluster
 ``` syntaxhighlighter-pre
-$ ssh <username>@tusker.unl.edu
+$ ssh <username>@crane.unl.edu
 <enter password>
 ```
@@ -133,14 +133,14 @@ code.  It uses MPI for communication between the parallel processes.
 $ mpif90 fortran_mpi.f90 -o fortran_mpi.x
 ```
-Next, we will submit the MPI application to the Tusker cluster scheduler
+Next, we will submit the MPI application to the cluster scheduler
 using the file `submit_tusker.mpi`.
 ``` syntaxhighlighter-pre
 $ qsub submit_tusker.mpi
 ```
-The Tusker cluster scheduler will pick machines (possibly several,
+The cluster scheduler will pick machines (possibly several,
 depending on availability) to run the parallel MPI application. You can
 check the status of the job the same way you did with the Serial job:
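The diff is truncated before the status-check step it refers to. As a minimal sketch of what that step would look like, assuming the same PBS/Torque-style scheduler implied by the `qsub` call above (the `qstat` invocation and the `<username>` placeholder are illustrative, not taken from the tutorial file):

```
# List your queued and running jobs on the scheduler used by qsub above
$ qstat -u <username>
```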