Commit 0d715206 authored by Caughlin Bohn, committed by Carrie A Brown

Updated Tusker things to Crane or Removed Tusker

parent 632ba850
@@ -27,7 +27,7 @@ $ tophat2 -h
Prior to running TopHat/TopHat2, an index of the reference genome should be built using Bowtie/Bowtie2. Moreover, TopHat2 requires both the index file and the reference file to be in the same directory. If the reference file is not available, TopHat2 reconstructs it in its initial step using the index file.
An example of how to run TopHat2 on Crane with paired-end fastq files `input_reads_pair_1.fastq` and `input_reads_pair_2.fastq`, reference index `index_prefix`, and `8 CPUs` is shown below:
{{% panel header="`tophat2_alignment.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
...
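The rest of the submit script is collapsed in this diff. A minimal sketch of what a complete `tophat2_alignment.submit` could look like is below; the SLURM directives, module name, and output directory are assumptions rather than content from this commit:
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=TopHat2
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=24:00:00
#SBATCH --mem=16gb
#SBATCH --error=tophat2.%J.err
#SBATCH --output=tophat2.%J.out

module load tophat2   # exact module name/version may differ
# -p matches the 8 requested CPUs; the Bowtie2 index and reference sit in the working directory
tophat2 -p 8 -o tophat2_output index_prefix input_reads_pair_1.fastq input_reads_pair_2.fastq
{{< /highlight >}}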
@@ -5,7 +5,7 @@ weight = "52"
+++
HCC hosts multiple databases (BLAST, KEGG, PANTHER, InterProScan), genome files, short-read aligner indices, etc. on Crane.
In order to use these resources, the "**biodata**" module needs to be loaded first.
For how to load a module, please check [Module Commands](#module_commands).
...
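As a hedged illustration of the loading step, `module show` (a standard Environment Modules/Lmod command) can be used to see what paths and variables **biodata** defines; the diff itself does not show them:
{{< highlight bash >}}
module load biodata
# inspect the environment variables the module sets (database and index locations)
module show biodata
{{< /highlight >}}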
@@ -16,7 +16,7 @@ $ bamtools convert -format [bed|fasta|fastq|json|pileup|sam|yaml] -in input_alig
where the option **-format** specifies the type of the output file, **input_alignments.bam** is the input BAM file, and **-out** defines the name and the type of the converted file.
Running BamTools **convert** on Crane with input file `input_alignments.bam` and output file `output_reads.fastq` is shown below:
{{% panel header="`bamtools_convert.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
...
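The body of `bamtools_convert.submit` is collapsed above. A minimal sketch under assumed SLURM directives (the job name, time, and memory values are guesses, as is the module name):
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=BamTools_Convert
#SBATCH --time=12:00:00
#SBATCH --mem=8gb
#SBATCH --error=bamtools.%J.err
#SBATCH --output=bamtools.%J.out

module load bamtools   # exact module name/version may differ
bamtools convert -format fastq -in input_alignments.bam -out output_reads.fastq
{{< /highlight >}}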
@@ -14,7 +14,7 @@ $ samtools view input_alignments.[bam|sam] [options] -o output_alignments.[sam|b
where **input_alignments.[bam|sam]** is the input file with the alignments in BAM/SAM format, and **output_alignments.[sam|bam]** is the converted file in SAM or BAM format, respectively.
Running **samtools view** on Crane with `8 CPUs`, input file `input_alignments.sam` with available header (**-S**), output in BAM format (**-b**), and output file `output_alignments.bam` is shown below:
{{% panel header="`samtools_view.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
...
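A minimal sketch of the collapsed `samtools_view.submit`; the SLURM directives and module name are assumptions, and `-@ 8` supplies the extra threads matching the 8 requested CPUs:
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=SAMtools_View
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=12:00:00
#SBATCH --mem=8gb

module load samtools   # exact module name/version may differ
# -S: SAM input with header, -b: BAM output, -@ 8: worker threads
samtools view -S -b -@ 8 input_alignments.sam -o output_alignments.bam
{{< /highlight >}}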
@@ -18,7 +18,7 @@ $ fastq-dump [options] input_reads.sra
{{< /highlight >}}
An example of running **fastq-dump** on Crane to convert an SRA file containing paired-end reads is:
{{% panel header="`sratoolkit.submit`"%}}
{{< highlight bash >}}
#!/bin/sh
...
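A minimal sketch of the collapsed `sratoolkit.submit`; `--split-files` is the standard fastq-dump option that writes paired-end mates to separate files, while the SLURM directives and module name are assumptions:
{{< highlight bash >}}
#!/bin/sh
#SBATCH --job-name=SRAtoolkit
#SBATCH --time=12:00:00
#SBATCH --mem=8gb

module load sratoolkit   # exact module name/version may differ
# writes input_reads_1.fastq and input_reads_2.fastq for paired-end data
fastq-dump --split-files input_reads.sra
{{< /highlight >}}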
@@ -15,7 +15,7 @@ DMTCP are OpenMP, MATLAB, Python, Perl, MySQL, bash, gdb, X-Windows etc.
DMTCP provides support for several resource managers, including SLURM,
the resource manager used in HCC. The DMTCP module is available on
Crane, and is enabled by typing:
{{< highlight bash >}}
module load dmtcp
@@ -24,7 +24,7 @@ module load dmtcp
After the module is loaded, the first step is to run the command:
{{< highlight bash >}}
[<username>@login.crane ~]$ dmtcp_launch --new-coordinator --rm --interval <interval_time_seconds> <your_command>
{{< /highlight >}}
where the `--rm` option enables SLURM support,
@@ -36,7 +36,7 @@ Besides the general options shown above, more `dmtcp_launch` options
can be seen by using:
{{< highlight bash >}}
[<username>@login.crane ~]$ dmtcp_launch --help
{{< /highlight >}}
`dmtcp_launch` creates a few files that are used to resume the
@@ -62,7 +62,7 @@ will keep running with the options defined in the initial
A simple example of using DMTCP with
[BLAST]({{< relref "/guides/running_applications/bioinformatics_tools/alignment_tools/blast/running_blast_alignment" >}})
on Crane is shown below:
{{% panel theme="info" header="dmtcp_blastx.submit" %}}
{{< highlight batch >}}
...
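The collapsed `dmtcp_blastx.submit` can be sketched as follows. The `dmtcp_launch` options mirror the command shown earlier; the SLURM directives, module names, query file, and database are assumptions:
{{< highlight batch >}}
#!/bin/sh
#SBATCH --job-name=dmtcp_blastx
#SBATCH --time=24:00:00
#SBATCH --mem=8gb

module load dmtcp
module load blast   # exact module name/version may differ
# checkpoint every hour; --rm enables the SLURM support described above
dmtcp_launch --new-coordinator --rm --interval 3600 \
    blastx -query input_reads.fasta -db nr -out blastx_output.txt
{{< /highlight >}}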
@@ -8,7 +8,7 @@ This quick start demonstrates how to implement a Fortran/C program on
HCC supercomputers. The sample codes and submit scripts can be
downloaded from [serial_dir.zip](/attachments/serial_dir.zip).
#### Login to an HCC Cluster
Log in to an HCC cluster through PuTTY ([For Windows Users]({{< relref "/quickstarts/connecting/for_windows_users">}})) or Terminal ([For Mac/Linux
Users]({{< relref "/quickstarts/connecting/for_maclinux_users">}})) and make a subdirectory called `serial_dir` under the `$WORK` directory.
...
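The subdirectory setup described above amounts to the following commands (a literal rendering of the instruction, not new material):
{{< highlight bash >}}
$ cd $WORK
$ mkdir serial_dir
$ cd serial_dir
{{< /highlight >}}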
@@ -21,8 +21,7 @@ of a **g09** license.
For access, contact us at
{{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu)
and include your HCC username. After your account has been added to the
group "*gauss*", here are four simple steps to run Gaussian 09 on Crane:
**Step 1:** Copy the **g09** sample input file and SLURM script to your
"g09" test directory on the `/work` filesystem:
...
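A hedged sketch of Step 1 with hypothetical file names and sample-file location; the real paths are in the collapsed portion of the guide:
{{< highlight bash >}}
# hypothetical paths; substitute the sample-file location from the full guide
mkdir -p $WORK/g09_test
cp sample.com g09.submit $WORK/g09_test
cd $WORK/g09_test
{{< /highlight >}}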
@@ -3,8 +3,7 @@ title = "Running Theano"
description = "How to run Theano on HCC resources."
+++
Theano is available on HCC resources via the modules system. Both CPU and GPU
versions are available on Crane. Additionally, installs for both Python
2.7 and 3.6 are provided.
...
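A hedged sketch of picking up one of the Crane installs; the exact module name and version strings are assumptions, so list them first with `module avail`:
{{< highlight bash >}}
module avail theano          # shows the CPU/GPU and Python 2.7/3.6 installs
module load theano           # exact version string may differ
python -c "import theano; print(theano.__version__)"
{{< /highlight >}}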
@@ -103,7 +103,7 @@ To start the workflow, submit Job A first:
{{% panel theme="info" header="Submit Job A" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch JobA.submit
Submitted batch job 666898
{{< /highlight >}}
{{% /panel %}}
@@ -113,9 +113,9 @@ dependency:
{{% panel theme="info" header="Submit Jobs B and C" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobB.submit
Submitted batch job 666899
[demo01@login.crane demo01]$ sbatch -d afterok:666898 JobC.submit
Submitted batch job 666900
{{< /highlight >}}
{{% /panel %}}
@@ -124,7 +124,7 @@ Finally, submit Job D as depending on both jobs B and C:
{{% panel theme="info" header="Submit Job D" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ sbatch -d afterok:666899:666900 JobD.submit
Submitted batch job 666901
{{< /highlight >}}
{{% /panel %}}
@@ -135,7 +135,7 @@ of the dependency.
{{% panel theme="info" header="Squeue Output" %}}
{{< highlight batch >}}
[demo01@login.crane demo01]$ squeue -u demo01
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
666899 batch JobB demo01 PD 0:00 1 (Dependency)
666900 batch JobC demo01 PD 0:00 1 (Dependency)
{{< /highlight >}}
{{% /panel %}}
...
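The same A, then B and C, then D workflow can be scripted without copying job IDs by hand: `sbatch --parsable`, a standard SLURM flag, prints only the job ID so it can be captured in a variable. A sketch:
{{< highlight batch >}}
# chain the whole workflow; --parsable makes sbatch print just the job ID
jobA=$(sbatch --parsable JobA.submit)
jobB=$(sbatch --parsable -d afterok:$jobA JobB.submit)
jobC=$(sbatch --parsable -d afterok:$jobA JobC.submit)
sbatch -d afterok:$jobB:$jobC JobD.submit
{{< /highlight >}}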
+++
title = "Partitions"
description = "Listing of partitions on Crane."
scripts = ["https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/jquery.tablesorter.min.js", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-pager.min.js","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/js/widgets/widget-filter.min.js","/js/sort-table.js"]
css = ["http://mottie.github.io/tablesorter/css/theme.default.css","https://mottie.github.io/tablesorter/css/theme.dropbox.css", "https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/jquery.tablesorter.pager.min.css","https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.31.1/css/filter.formatter.min.css"]
+++
Partitions are used on Crane to distinguish different
resources. You can view the partitions with the command `sinfo`.
### Crane:
[Full list for Crane]({{< relref "crane_available_partitions" >}})
#### Priority for short jobs
To run short jobs for testing and development work, a job can specify a
@@ -37,7 +33,7 @@ priority so it will run as soon as possible.
Overall limitations on maximum job wall time, CPUs, etc. are set for
all jobs with the default setting (when the "--qos=" option is omitted)
and "short" jobs (described above) on Crane.
The limitations are shown in the following table.
| | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User |
...
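Requesting the short QoS from a submit script looks like this (the limits themselves are in the table above, collapsed in this diff):
{{< highlight bash >}}
#SBATCH --qos=short
{{< /highlight >}}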
(deleted file: "Available Partitions for Tusker", the tusker.unl.edu partition listing page and its notes on node memory)
@@ -192,10 +192,7 @@ mpirun -n 1 R CMD BATCH Rmpi.R
{{% /panel %}}
When you run an Rmpi job on Crane, please use the line `export
OMPI_MCA_mtl=^psm` in your submit script. Regardless of how many cores your job uses, the Rmpi package should
always be run with `mpirun -n 1` because it spawns additional
processes dynamically.
...
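Putting the pieces together, a minimal sketch of an Rmpi submit script on Crane; the SLURM directives and module name are assumptions, while the `export` and `mpirun -n 1` lines come from the surrounding guide:
{{< highlight bash >}}
#!/bin/sh
#SBATCH --ntasks=4
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=1gb

module load R   # exact module name/version may differ
export OMPI_MCA_mtl=^psm   # required for Rmpi on Crane
# always -n 1: Rmpi spawns the remaining worker processes itself
mpirun -n 1 R CMD BATCH Rmpi.R
{{< /highlight >}}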
@@ -5,7 +5,7 @@ description = "A simple example of submitting an HTCondor job."
This page describes a complete example of submitting an HTCondor job.
1. SSH to Crane
{{% panel theme="info" header="ssh command" %}}
[apple@localhost]ssh apple@crane.unl.edu
...
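The remaining steps are collapsed; a minimal sketch of the kind of HTCondor submit file they build, with all file names hypothetical:
{{< highlight batch >}}
universe = vanilla
executable = hello.sh
output = hello.out
error = hello.err
log = hello.log
queue
{{< /highlight >}}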
@@ -3,7 +3,7 @@ title = "How to submit an OSG job with HTCondor"
description = "How to submit an OSG job with HTCondor"
+++
{{% notice info%}}Jobs can be submitted to the OSG from Crane, so
there is no need to log on to a different submit host or get a grid
certificate!
{{% /notice %}}
@@ -15,7 +15,7 @@ project provides software to schedule individual applications,
workflows, and for sites to manage resources. It is designed to enable
High Throughput Computing (HTC) on large collections of distributed
resources for users and serves as the job scheduler used on the OSG.
Jobs are submitted from the Crane login node to the
OSG using an HTCondor submission script. For those who are used to
submitting jobs with SLURM, there are a few key differences to be aware
of:
...
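As a hedged illustration of the submit-from-Crane workflow, using the standard HTCondor commands (the submit-file name is hypothetical):
{{< highlight bash >}}
# run on the Crane login node
condor_submit osg_job.submit
condor_q <username>
{{< /highlight >}}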
@@ -25,8 +25,8 @@ Access to HCC Supercomputers
For Mac/Linux users, use the system program Terminal to access the
HCC supercomputers. In the Terminal prompt,
type `ssh <username>@crane.unl.edu` and the corresponding password
to get access to the HCC cluster **Crane**. Note that &lt;username&gt;
should be replaced by your HCC account username. If you do not have an
HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
...
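Mirroring the Windows block in the next hunk, the Terminal commands are:
{{< highlight bash >}}
$ ssh <username>@crane.unl.edu
$ <password>
{{< /highlight >}}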
@@ -30,16 +30,16 @@ Users]({{< relref "for_maclinux_users" >}}).
--------------
For Windows 10 users, use the Command Prompt, accessed by entering `cmd` in the start menu, to access the
HCC supercomputers. In the Command Prompt,
type `ssh <username>@crane.unl.edu` and the corresponding password
to get access to the HCC cluster **Crane**. Note that &lt;username&gt;
should be replaced by your HCC account username. If you do not have an
HCC account, please contact an HCC specialist
({{< icon name="envelope" >}}[hcc-support@unl.edu](mailto:hcc-support@unl.edu))
or go to http://hcc.unl.edu/newusers.
{{< highlight bash >}}
C:\> ssh <username>@crane.unl.edu
C:\> <password>
{{< /highlight >}}
@@ -56,7 +56,7 @@ or [Direct Link](https://the.earth.li/~sgtatham/putty/latest/w32/putty.exe)
Here we use the HCC cluster **Crane** for demonstration.
1. On the first screen, type `crane.unl.edu` for Host Name, then click
**Open**
...
@@ -20,7 +20,7 @@ the following instructions to work.**
- [Tutorial Video](#tutorial-video)
Every HCC user has a password that is the same on all HCC machines
(Crane, Anvil). This password needs to satisfy the HCC
password requirements.
### HCC password requirements
@@ -47,7 +47,7 @@ to change it:
#### Change your password via the command line
To change a current or temporary password, the user needs to log in to
any HCC cluster and use the ***passwd*** command:
**Change HCC password**
...
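A sketch of the command-line route described above; the prompt text varies by system:
{{< highlight bash >}}
# run on any HCC login node; you are prompted for the current and new password
$ passwd
{{< /highlight >}}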
@@ -4,9 +4,9 @@ description = "How to submit jobs to HCC resources"
weight = "10"
+++
Crane is managed by
the [SLURM](https://slurm.schedmd.com) resource manager.
In order to run processing on Crane, you
must create a SLURM script that will run your processing. After
submitting the job, SLURM will schedule your processing on an available
worker node.
@@ -81,10 +81,7 @@ sleep 60
- **mem**
Specify the real memory required per node in MegaBytes. If you
exceed this limit, your job will be stopped. Note that you
should ask for less memory than each node actually has. For Crane, the
max is 500GB.
- **job-name**
The name of the job. Will be reported in the job listing.
...
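A minimal submit script tying these options together; the `sleep 60` payload appears in the surrounding guide, and the specific values are placeholders:
{{< highlight batch >}}
#!/bin/sh
#SBATCH --time=00:05:00
#SBATCH --mem=1024        # per-node memory in MB; request less than the node actually has
#SBATCH --job-name=hello  # reported in the job listing (squeue)
sleep 60
{{< /highlight >}}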
static/images/3178523.png (image replaced: 18.4 KB → 27.2 KB)