diff --git a/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md b/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md index 29014762153930db9fb81ef5eaeef81f642dcbd3..50ede1cc6671e595d57c67fe90684ccd6a94a285 100644 --- a/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md +++ b/content/Events/2013/supercomputing_mini_workshop_february_27_2013.md @@ -22,7 +22,7 @@ state-of-the-art supercomputing resources. **Logging In** ``` syntaxhighlighter-pre -ssh tusker.unl.edu -l demoXXXX +ssh crane.unl.edu -l demoXXXX ``` **[Cygwin Link](http://cygwin.com/install.html)** @@ -49,7 +49,7 @@ two folders, `serial\_f90` and `parallel\_f90`, in this folder ``` syntaxhighlighter-pre $ ls -$ scp -r ./demo_code <username>@tusker.unl.edu:/work/demo/<username> +$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username> <enter password> ``` @@ -59,7 +59,7 @@ Serial Job First, you need to log in to the cluster ``` syntaxhighlighter-pre -$ ssh <username>@tusker.unl.edu +$ ssh <username>@crane.unl.edu <enter password> ``` @@ -133,14 +133,14 @@ code. It uses MPI for communication between the parallel processes. $ mpif90 fortran_mpi.f90 -o fortran_mpi.x ``` -Next, we will submit the MPI application to the Tusker cluster scheduler +Next, we will submit the MPI application to the cluster scheduler using the file `submit_tusker.mpi`. ``` syntaxhighlighter-pre $ qsub submit_tusker.mpi ``` -The Tusker cluster scheduler will pick machines (possibly several, +The cluster scheduler will pick machines (possibly several, depending on availability) to run the parallel MPI application. You can check the status of the job the same way you did with the Serial job: diff --git a/content/accounts/_index.md b/content/accounts/_index.md index 5ee6b3396a44248b9a0dab33568e00d7b5a48a34..b689d7a01c4da05a96c28f19d469318b53e2dbc5 100644 --- a/content/accounts/_index.md +++ b/content/accounts/_index.md @@ -6,7 +6,8 @@ weight = "20" Anyone affiliated with the University of Nebraska system can request an account on and use HCC shared resources for free. -How to create an account for HCC: +How to create an HCC account: + 1. **Identify or Set Up a Group:** All HCC accounts must be associated with an HCC group. Usually, a user's HCC group is the research group owned by their advisor, but it may also be a class group owned by the course instructor. 
To establish a new diff --git a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md index ba18fd08924a2d40eb7fbd5bbce34ce16145dfe2..28b25aa5027206c22b448fe4544288dd6ee91630 100644 --- a/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md +++ b/content/applications/app_specific/allinea_profiling_and_debugging/allinea_performance_reports/ray_with_allinea_performance_reports.md @@ -4,7 +4,7 @@ description = "Example of how to profile Ray using Allinea Performance Reports" +++ Simple example of using [Ray]({{< relref "/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray" >}}) -with Allinea PerformanceReports (`perf-report`) on Tusker is shown below: +with Allinea PerformanceReports (`perf-report`) is shown below: {{% panel theme="info" header="ray_perf_report.submit" %}} {{< highlight batch >}} diff --git a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md index 6028edfaac51fd8bfbc21ebb53b40fe01036618d..94fbd172aa458b6ced73f7927d1c42d5eba58c1a 100644 --- a/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md +++ b/content/applications/app_specific/bioinformatics_tools/alignment_tools/clustal_omega.md @@ -64,5 +64,5 @@ The basic Clustal Omega output produces one alignment file in the specified outp ### Useful Information -In order to test the Clustal Omega performance on Tusker, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega on Tusker are shown on the table below: +In order to test the Clustal Omega performance, we used three DNA and protein input fasta files, `data_1.fasta`, `data_2.fasta`, `data_3.fasta`. Some statistics about the input files and the time and memory resources used by Clustal Omega are shown in the table below: {{< readfile file="/static/html/clustal_omega.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md index 4ea2bb19101f678ffb45ce1202faef2c2c08919a..7f29d4a32a43422a08d435f31b3d51ebb95d84f0 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/oases.md @@ -59,5 +59,5 @@ Oases produces two additional output files: `transcripts.fa` and `contig-orderin ### Useful Information -In order to test the Oases (oases/0.2.8) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Oases on Tusker are shown in the table below: +In order to test the Oases (oases/0.2.8) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Oases are shown in the table below: {{< readfile file="/static/html/oases.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md index cea233648586ea25b4524524c90da741f65acf8f..795d16233e8487cd19b9834185433a55884eb13d 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/ray.md @@ -38,7 +38,7 @@ Ray supports both paired-end (`-p`) and single-end reads (`-s`). Moreover, Ray c Ray supports odd values for k-mer equal to or greater than 21 (`-k <kmer_value>`). Ray supports multiple file formats such as `fasta`, `fa`, `fasta.gz`, `fa.gz`, `fasta.bz2`, `fa.bz2`, `fastq`, `fq`, `fastq.gz`, `fq.gz`, `fastq.bz2`, `fq.bz2`, `sff`, `csfasta`, `csfa`. -Simple SLURM script for running Ray on Tusker with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below: +A simple SLURM script for running Ray with both paired-end and single-end data with `k-mer=31`, `8 CPUs` and `4 GB RAM per CPU` is shown below: {{% panel header="`ray.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -76,5 +76,5 @@ One of the most important results are: ### Useful Information -In order to test the Ray performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray on Tusker are shown in the table below: +In order to test the Ray performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Ray are shown in the table below: {{< readfile file="/static/html/ray.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md index cd3b36d31d9914a6c0ac0c6a9dec53d91635d8dd..30b83f5686d6baff087cd1d0e544b2a717e2e5b2 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/soapdenovo2.md @@ -94,7 +94,7 @@ q=input_reads.fq After creating the configuration file **configFile**, the next step is to run the assembler using this file. -Simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUSs` and `50GB of RAM` on Tusker is shown below: +A simple SLURM script for running SOAPdenovo2 with `k-mer=31`, `8 CPUs` and `50GB of RAM` is shown below: {{% panel header="`soapdenovo2.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -128,7 +128,7 @@ output31.contig output31.edge.gz output31.links output31.p ### Useful Information -In order to test the SOAPdenovo2 (soapdenovo2/r240) performance on Tusker, we used three different size input files. Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below: +In order to test the SOAPdenovo2 (soapdenovo2/r240) performance, we used three input files of different sizes. 
Some statistics about the input files and the time and memory resources used by SOAPdenovo2 are shown in the table below: {{< readfile file="/static/html/soapdenovo2.html" >}} In general, SOAPdenovo2 is a memory-intensive assembler that requires approximately 30-60 GB of memory for assembling 50 million reads. However, SOAPdenovo2 is a fast assembler and it takes around an hour to assemble 50 million reads. diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md index 1118484f6b60035b91d3835df19bf7318da04b46..3147c63996f152d32baec4b1cb6c876a60df5472 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/trinity/_index.md @@ -52,7 +52,7 @@ Each step may be run as its own job, providing a workaround for the single job w ### Useful Information -In order to test the Trinity (trinity/r2014-04-13p1) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity on Tusker are shown in the table below: +In order to test the Trinity (trinity/r2014-04-13p1) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Trinity are shown in the table below: {{< readfile file="/static/html/trinity.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md index 2635bdbd1452bf3c569fc63e7ad57993ebf1a900..427424d769b0a9e7e609fd22a05d7088793b0053 100644 --- a/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md +++ b/content/applications/app_specific/bioinformatics_tools/de_novo_assembly_tools/velvet/_index.md @@ -18,5 +18,5 @@ Each step of Velvet (**velveth** and **velvetg**) may be run as its own job. The ### Useful Information -In order to test the Velvet (velvet/1.2) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Velvet on Tusker are shown in the table below: +In order to test the Velvet (velvet/1.2) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Velvet are shown in the table below: {{< readfile file="/static/html/velvet.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md index 4cd1db59fa9de90fe464200a613062925bdb32a3..0b18b6cd09e57f8328c01a97a3e83f2af063def9 100644 --- a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md +++ b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/scythe.md @@ -22,7 +22,7 @@ $ scythe --help {{< /highlight >}} -Simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` for Tusker is shown below: +A simple Scythe script that uses the `illumina_adapters.fa` file and `input_reads.fastq` is shown below: {{% panel header="`scythe.submit`"%}} {{< highlight bash >}} #!/bin/sh @@ -52,5 +52,5 @@ Scythe returns fastq file of reads with removed adapter sequences. ### Useful Information -In order to test the Scythe (scythe/0.991) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe on Tusker are shown in the table below: +In order to test the Scythe (scythe/0.991) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Scythe are shown in the table below: {{< readfile file="/static/html/scythe.html" >}} diff --git a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md index eb6cc4e775d7c7bd59d3bd91144b2e1ce8afd910..6062f2cc9e3e57c9ec20564647b2b260f527143b 100644 --- a/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md +++ b/content/applications/app_specific/bioinformatics_tools/pre_processing_tools/sickle.md @@ -79,5 +79,5 @@ Sickle returns fastq file of reads with trimmed low quality bases from both 3' a ### Useful Information -In order to test the Sickle (sickle/1.210) performance on Tusker, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. Some statistics about the input files and the time and memory resources used by Sickle on Tusker are shown in the table below: +In order to test the Sickle (sickle/1.210) performance, we used three paired-end input fastq files, `small_1.fastq` and `small_2.fastq`, `medium_1.fastq` and `medium_2.fastq`, and `large_1.fastq` and `large_2.fastq`. 
Some statistics about the input files and the time and memory resources used by Sickle are shown in the table below: {{< readfile file="/static/html/sickle.html" >}} diff --git a/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md b/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md index aec31a28dff7bd68ab4aa8a2d99c89935a62a6a8..2efd86d9809458b58df3a384884453df5273a0eb 100644 --- a/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md +++ b/content/applications/app_specific/running_ocean_land_atmosphere_model_olam.md @@ -3,7 +3,7 @@ title = "Running OLAM at HCC" description = "How to run the OLAM (Ocean Land Atmosphere Model) on HCC resources." +++ -### OLAM compilation on Tusker +### OLAM compilation ##### pgi/11 compilation with mpi and openmp enabled 1. Load modules: diff --git a/content/applications/user_software/r_packages.md b/content/applications/user_software/r_packages.md new file mode 100644 index 0000000000000000000000000000000000000000..aa9e06a138d8c058a90eb95f04dbab2cb90bd89c --- /dev/null +++ b/content/applications/user_software/r_packages.md @@ -0,0 +1,63 @@ ++++ +title = "Using R Libraries" +description = "How to install R packages on HCC resources." ++++ + +Many commonly used R packages are included in the base R installation available on HCC clusters, + such as `tidyverse` and `stringr`. However, users can install other packages in their +user libraries. + +- [Adding packages](#adding-packages) + - [Installing packages interactively](#installing-packages-interactively) + - [Installing packages using R CMD INSTALL](#installing-packages-using-r-cmd-install) + + +### Adding packages + +There are two ways to install packages. The first is to run R on the +login node and install packages interactively. The second is to +use the `R CMD INSTALL` command. + +{{% notice info %}} +All R packages must be installed from the login node. R libraries are +stored in users' home directories, which are not writable from the worker +nodes. +{{% /notice %}} + +#### Installing packages interactively + +1. Load the R module with the command `module load R` - Note that each version of R uses its own user libraries. To + install packages under a specific version of R, specify which + version by using the module load command followed by the version + number. For example, to load R version 3.5, you would use the + command `module load R/3.5` +2. Run R interactively using the command `R` +3. From within R, use the `install.packages()` command to install + desired packages. For example, to install the package `ggplot2` + use the command `install.packages("ggplot2")` + +Some R packages require external compilers or additional libraries. If +you see an error when installing your package, you might need to load +additional modules to make these compilers or libraries available. For +more information about this, refer to the package documentation. + +#### Installing packages using R CMD INSTALL + +To install packages using `R CMD INSTALL`, the zipped package must +already be downloaded to the cluster. You can download package source +using `wget`. Then the `R CMD INSTALL` command can be used when +pointed to the full path of the source tar file. 
For example, to install +ggplot2, the following commands are used: + +{{< highlight bash >}} +# Download the package source: +wget https://cran.r-project.org/src/contrib/ggplot2_3.2.1.tar.gz + +# Install the package: +R CMD INSTALL ./ggplot2_3.2.1.tar.gz +{{< /highlight >}} + +Additional information on using the `R CMD INSTALL` command can be +found in the R documentation, which can be viewed by typing `?INSTALL` +within the R console. diff --git a/content/handling_data/data_storage/_index.md b/content/handling_data/data_storage/_index.md index 33f75fddb02cc63d68ab4c867e9d4c778a83ffd0..4016b287790c588cf5d3a908a353853f3d13a7fd 100644 --- a/content/handling_data/data_storage/_index.md +++ b/content/handling_data/data_storage/_index.md @@ -37,7 +37,7 @@ environmental variable (i.e. '`cd $COMMON`') The common directory operates similarly to work and is mounted with **read and write capability to worker nodes on all HCC Clusters**. This -means that any files stored in common can be accessed from Crane and Tusker, making this directory ideal for items that need to be +means that any files stored in common can be accessed from Crane and Rhino, making this directory ideal for items that need to be accessed from multiple clusters such as reference databases and shared data files. diff --git a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md index cbc6014cde84ce633a86efef469b2d36317f2d46..5586d889fb63349c9c0f9789652a274f6f5d8223 100644 --- a/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md +++ b/content/handling_data/data_transfer/globus_connect/globus_command_line_interface.md @@ -75,14 +75,14 @@ can now transfer and manipulate files on the remote endpoint. {{% notice info %}} We recommend saving the UUID number as a bash variable to make the commands easier to use. For example, we will -continue to use the above endpoint (Tusker) by assigning its UUID code -to the variable \`tusker\` as follows: +continue to use the above endpoint (Crane) by assigning its UUID code -to the variable \`crane\` as follows: +to the variable \`crane\` as follows: {{< figure src="/images/21073499.png" >}} This command must be repeated upon each new login or terminal session unless you save these in your environmental variables. If you do not wish to do this step, you can proceed by placing the correct UUID in -place of whenever you see \`$tusker\`. +place of \`$crane\` whenever you see it. {{% /notice %}} --- @@ -98,7 +98,7 @@ command: To make a directory on the remote endpoint, we would use the \`globus mkdir\` command. For example, to make a folder in the user's work -directory on Tusker, we would use the following command: +directory on Crane, we would use the following command: {{< figure src="/images/21073501.png" >}} To rename files on the remote endpoint, we can use the \`globus rename\` @@ -112,14 +112,14 @@ command: All transfers must take place between Globus endpoints. Even if you are transferring from an endpoint that you are already connected to, that endpoint must be activated in Globus. Here, we are transferring between -Crane and Tusker. We have activated the Crane endpoint and saved its -UUID to the variable $crane as we did for $tusker above. +Crane and Rhino. We have activated the Rhino endpoint and saved its +UUID to the variable $rhino as we did for $crane above. To transfer files, we use the command \`globus transfer\`. 
The format of this command is \`globus transfer <endpoint1>:<file\_path> <endpoint2>:<file\_path>\`. For example, here we are transferring the file \`testfile.txt\` from the home directory on Crane -to the home directory on Tusker: +to the home directory on Rhino: {{< figure src="/images/21073505.png" >}} You can then check the status of a transfer, or delete it altogether, @@ -129,7 +129,7 @@ using the given Task ID: To transfer entire directories, simply specify a directory in the file path as opposed to an individual file. Below, we are transferring the \`output\` directory from the home directory on Crane to the home -directory on Tusker: +directory on Rhino: {{< figure src="/images/21073507.png" >}} For additional details and information on other features of the Globus diff --git a/content/submitting_jobs/app_specific/submitting_r_jobs.md b/content/submitting_jobs/app_specific/submitting_r_jobs.md index 77096d3b92011ec5ab4c9b2ca082482f7e1b363c..d0644b4fd2c798f5444a14c880ba2ba68d147979 100644 --- a/content/submitting_jobs/app_specific/submitting_r_jobs.md +++ b/content/submitting_jobs/app_specific/submitting_r_jobs.md @@ -4,16 +4,13 @@ description = "How to submit R jobs on HCC resources." +++ Submitting an R job is very similar to submitting a serial job shown -on [Submitting Jobs]({{< relref "/guides/submitting_jobs/_index.md" >}}). +on [Submitting Jobs]({{< relref "/submitting_jobs/_index.md" >}}). - [Running R scripts in batch](#running-r-scripts-in-batch) - [Running R scripts using `R CMD BATCH`](#running-r-scripts-using-r-cmd-batch) - [Running R scripts using `Rscript`](#running-r-scripts-using-rscript) - [Multicore (parallel) R submission](#multicore-parallel-r-submission) -- [Multinode R submission with Rmpi](#multinode-r-submission-with-rmpi) -- [Adding packages](#adding-packages) - - [Installing packages interactively](#installing-packages-interactively) - - [Installing packages using R CMD INSTALL](#installing-packages-using-r-cmd-install) +- [Multinode R submission with Rmpi](#multinode-r-submission-with-rmpi) ### Running R scripts in batch @@ -223,52 +220,3 @@ mpi.exit() --- -### Adding packages - -There are two options to install packages. The first is to run R on the -login node and run R interactively to install packages. The second is to -use the `R CMD INSTALL` command. - -{{% notice info %}} -All R packages must be installed from the login node. R libraries are -stored in user's home directories which are not writable from the worker -nodes. -{{% /notice %}} - -#### Installing packages interactively - -1. Load the R module with the command `module load R` - - Note that each version of R uses its own user libraries. To - install packages under a specific version of R, specify which - version by using the module load command followed by the version - number. For example, to load R version 3.5, you would use the - command `module load R/3.5` -2. Run R interactively using the command `R` -3. From within R, use the `install.packages()` command to install - desired packages. For example, to install the package `ggplot2` - use the command `install.packages("ggplot2")` - -Some R packages, require external compilers or additional libraries. If -you see an error when installing your package you might need to load -additional modules to make these compilers or libraries available. For -more information about this, refer to the package documentation. 
- -#### Installing packages using R CMD INSTALL - -To install packages using `R CMD INSTALL` the zipped package must -already be downloaded to the cluster. You can download package source -using `wget`. Then the `R CMD INSTALL` command can be used when -pointed to the full path of the source tar file. For example, to install -ggplot2 the following commands are used: - -{{< highlight bash >}} -# Download the package source: -wget https://cran.r-project.org/src/contrib/ggplot2_2.2.1.tar.gz - -# Install the package: -R CMD INSTALL ./ggplot2_2.2.1.tar.gz -{{< /highlight >}} - -Additional information on using the `R CMD INSTALL` command can be -found in the R documentation which can be seen by typing `?INSTALL` -within the R console. diff --git a/content/submitting_jobs/app_specific/submitting_cuda_or_openacc_jobs.md b/content/submitting_jobs/submitting_gpu_jobs.md similarity index 90% rename from content/submitting_jobs/app_specific/submitting_cuda_or_openacc_jobs.md rename to content/submitting_jobs/submitting_gpu_jobs.md index 1e1b17645bd19e013660c8efd2b135294b8d5650..61d479c51f9766695a9f7be51972377eda848774 100644 --- a/content/submitting_jobs/app_specific/submitting_cuda_or_openacc_jobs.md +++ b/content/submitting_jobs/submitting_gpu_jobs.md @@ -1,6 +1,7 @@ +++ title = "Submitting GPU Jobs" description = "How to submit GPU (CUDA/OpenACC) jobs on HCC resources." +weight = 35 +++ ### Available GPUs @@ -53,15 +54,20 @@ You may request multiple GPUs by changing the `--gres` value to total of 4 GPUs. {{% /notice %}} -The GPU memory feature may be used to specify a GPU RAM amount either independent of architecture, or in combination with it. +The GPU memory feature may be used to specify a GPU RAM amount either +independent of architecture, or in combination with it. + For example, using {{< highlight batch >}} --partition=gpu --gres=gpu --constraint=gpu_16gb {{< /highlight >}} -will request a GPU with 16GB of RAM, independent of the type of card (K20, K40, P100, etc.). You may also -request both a GPU type _and_ memory amount using the `&` operator (single quotes are used because `&` is a special character). +will request a GPU with 16GB of RAM, independent of the type of card +(K20, K40, P100, etc.). You may also request both a GPU type _and_ +memory amount using the `&` operator (single quotes are used because +`&` is a special character). + For example, {{< highlight batch >}} @@ -71,7 +77,10 @@ For example, will request a V100 GPU with 32GB RAM. {{% notice warning %}} -You must verify the GPU type and memory combination is valid based on the [available GPU types.]({{< relref "submitting_cuda_or_openacc_jobs/#available-gpus" >}}). Requesting a nonexistent combination will cause your job to be rejected with a `Requested node configuration is not available` error. +You must verify the GPU type and memory combination is valid based on the +[available GPU types]({{< relref "submitting_gpu_jobs/#available-gpus" >}}). +Requesting a nonexistent combination will cause your job to be rejected with +a `Requested node configuration is not available` error. {{% /notice %}} ### Compiling
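+
+As a minimal sketch of this step (assuming the `cuda` module provides the
+`nvcc` compiler, and using `hello.cu` as a placeholder name for your CUDA
+source file), compiling on the cluster might look like:
+
+{{< highlight batch >}}
+# Load the CUDA toolkit to make the nvcc compiler available
+module load cuda
+
+# Compile the CUDA source file into an executable
+nvcc hello.cu -o hello
+{{< /highlight >}}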