Add CS FAQ Section

Merged Natasha Pavlovikj requested to merge cs into master
@@ -101,4 +101,25 @@ If you want to test the CryoSPARC OOD App, you can use the [CryoSPARC Introducto
- If you need to transfer data to/from Swan as part of your CryoSPARC workflow please see the [Data Transfer]({{< relref "../handling_data/data_transfer/" >}}) page.
#### FAQ
- **How do I check the logs generated from my CryoSPARC job?**
- The logs associated with the CryoSPARC Open OnDemand App can be found in `$WORK/.ondemand/batch_connect/sys/bc_hcc_cryosparc/swan/output/<session_id>/output.log`, where `<session_id>` should be replaced with the **Session ID** printed in the CryoSPARC Info Card. The format of the Session ID is `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
- The logs associated with your specific CryoSPARC project can be found in the `job.log` file in the Project directory.
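- For example, a quick way to inspect these logs from a terminal on Swan (the Session ID and Project directory below are placeholders; substitute your own):
```
# Follow the Open OnDemand session log as the app writes to it
tail -f $WORK/.ondemand/batch_connect/sys/bc_hcc_cryosparc/swan/output/<session_id>/output.log
# View the log for a specific CryoSPARC project
less <project_dir>/job.log
```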
- **Why am I getting the _"Found no NVIDIA driver on your system."_ error?**
- The error means that your CryoSPARC job is not running on a GPU node, so please make sure you specify a GPU partition for your _"master"_ process if you use the `default` lane.
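- A quick way to confirm whether your session landed on a GPU node is to run `nvidia-smi` from a terminal inside the job; on a node without a GPU the command fails with a driver error:
```
# Lists the GPUs visible to the job; fails if no NVIDIA driver is loaded
nvidia-smi
```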
- **How do I check the status of my completed CryoSPARC jobs?**
- You can monitor completed jobs using [seff and sacct]({{< relref "../submitting_jobs/monitoring_jobs/#monitoring-completed-jobs" >}}). For example, to find the *SLURM Job ID, SLURM Job Name, SLURM Job State, the node the job ran on, the total runtime, as well as the requested and used memory* of all your jobs that ran today, you can use:
```
sacct --format=JobId,JobName%50,State,NodeList,Elapsed,MaxRSS,ReqMem
```
- The *SLURM Job Name* of the CryoSPARC *"master"* process and the jobs that run on the `default` lane is always `ondemand/sys/dashboard/sys/bc_hcc_cryosparc/swan`.
- The *SLURM Job Name* of the CryoSPARC jobs that run on the `swan` or `swan-highmem` lanes is always in the format `cryosparc_<project_uid>_<job_uid>`, where `<project_uid>` and `<job_uid>` are replaced with the CryoSPARC Project ID and CryoSPARC Job ID respectively. For example, if your CryoSPARC Project ID is 3, and your CryoSPARC Job ID is 187, the SLURM Job Name will be `cryosparc_P3_J187`.
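- As an illustration, you can pass the SLURM Job ID to `seff`, or filter `sacct` output by the CryoSPARC job name (the Job ID `1234567` below is a placeholder):
```
# Summarize CPU and memory efficiency for one completed job
seff 1234567
# List only the jobs for CryoSPARC project P3, job J187 on the swan/swan-highmem lanes
sacct --name=cryosparc_P3_J187 --format=JobId,JobName%50,State,Elapsed,MaxRSS,ReqMem
```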
- **My CryoSPARC job failed and I don't know why.**
- If your CryoSPARC job fails, it is highly likely that the requested memory was exceeded. You can check this with [seff or sacct]({{< relref "../submitting_jobs/monitoring_jobs/#monitoring-completed-jobs" >}}); see the example at the end of this answer.
- If you use the `default` lane, please increase the value in the **Requested RAM in GBs** field from the CryoSPARC Open OnDemand Form.
- If you use the `swan` lane, please try the `swan-highmem` lane instead and increase the **Highmem factor** value in the CryoSPARC Open OnDemand Form accordingly.
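- For instance, a minimal memory check for a single job (again with a placeholder Job ID) is to compare its used memory (`MaxRSS`) against the requested memory (`ReqMem`):
```
# A State of OUT_OF_MEMORY, or MaxRSS near ReqMem, means the job needs more memory
sacct -j 1234567 --format=JobId,State,Elapsed,MaxRSS,ReqMem
```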
- **I tried everything and my CryoSPARC job still fails. What can I do?**
- You can always email {{< icon name="envelope" >}} hcc-support@unl.edu with any additional questions or errors you have. To better assist you, please include the SLURM Job ID and the CryoSPARC Session ID of the failed job in your email.
If you have any questions or encounter any issues with the CryoSPARC OOD App, please email {{< icon name="envelope" >}} hcc-support@unl.edu.