acaprez2 created page: packaging fixes (authored by Adam Caprez)
@@ -10,12 +10,12 @@
- Default resource requirements are way off. (Avi mostly fixed)
- ~~The BOSCO installer for a remote resource doesn't include the helper script to translate Pegasus resource specifications to SLURM job attributes (`slurm_local_submit_attributes.sh`). Not really a chipathlon problem, but it needs to be fixed. (Adam will fix BOSCO package)~~ **Done, modified BOSCO package created.**
- The code that adds the scripts in the `jobs/scripts` directory as executables is also picking up the `.pyc` byte-compiled files. Doesn't really break anything, but should be cleaned up; a filtering sketch follows this list.
- The Picard version used (1.139) is quite old. Would be nice to be able to use the current 2.9.0 version. (Natasha will test)
- ~~The two SPP scripts (`run_spp.R` and `run_spp_nodups.R`) need to be packaged up in Conda in a sane way. (Adam will do)~~ **Done now, `phantompeakqualtools` created.**
- How to distribute the MongoDB stuff in a sane way. Docker is currently the leading candidate. The size of the experiments and samples collections is a little over 1GB, which isn't terribly large. We could create a container with Mongo installed and the DB pre-populated, and then include scripts to do the update from ENCODE (see the update-script sketch below).
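
For the `.pyc` cleanup, filtering by extension while scanning the directory is probably enough. A minimal sketch, assuming the executables are collected by listing `jobs/scripts`; the `collect_script_executables` helper is hypothetical, not the actual chipathlon registration code:

```python
import os

# Hypothetical helper -- illustrates only the extension filtering,
# not chipathlon's real executable-registration code.
def collect_script_executables(script_dir="jobs/scripts"):
    """Return paths of scripts to register as executables,
    skipping byte-compiled artifacts."""
    skip_exts = (".pyc", ".pyo")
    scripts = []
    for name in sorted(os.listdir(script_dir)):
        path = os.path.join(script_dir, name)
        # Keep regular files only; directories such as Python 3's
        # __pycache__ fail the isfile() check and are skipped.
        if os.path.isfile(path) and not name.endswith(skip_exts):
            scripts.append(path)
    return scripts
```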
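
For the ENCODE refresh, the update script could upsert documents keyed on their accession, so re-running it against the pre-populated database just refreshes records instead of duplicating them. A minimal sketch, assuming a local `chipathlon` database with an `experiments` collection and the public ENCODE search API; the database and collection names and the query parameters are illustrative assumptions, not taken from the actual code:

```python
import pymongo
import requests

ENCODE_SEARCH = "https://www.encodeproject.org/search/"

def update_experiments(mongo_uri="mongodb://localhost:27017"):
    """Pull experiment records from the ENCODE REST API and upsert
    them into the pre-populated database shipped in the container."""
    db = pymongo.MongoClient(mongo_uri)["chipathlon"]  # assumed DB name
    params = {
        "type": "Experiment",       # illustrative query; the real
        "assay_title": "ChIP-seq",  # filters may differ
        "format": "json",
        "limit": "all",
    }
    resp = requests.get(ENCODE_SEARCH, params=params,
                        headers={"Accept": "application/json"})
    resp.raise_for_status()
    for doc in resp.json()["@graph"]:
        # Upsert keyed on the ENCODE accession so re-running the
        # script refreshes records instead of duplicating them.
        db.experiments.replace_one(
            {"accession": doc["accession"]}, doc, upsert=True
        )
```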