First of all, thank you so much for the release of cisTEM; it looks very promising.
I am trying to set cisTEM up with our queuing system, SLURM. In the documentation (https://cistem.org/documentation#tab-1-15) there is some information on how to do this.
However, the described approach seems to submit every single process as an individual SLURM job, resulting in multiple nodes being fired up to run only a single process each.
This seems suboptimal, and on top of that it adds significant load to the SLURM head node.
To make better use of resources, one could create a special partition in SLURM, e.g. "cisTEM", which enables job sharing across all the CPU threads (e.g. "Shared=YES:72" if the machines in the partition have 72 CPU threads each). The cisTEM run command would then look something like this: "srun -n 1 --share -p cisTEM -o /dev/null /cisTEM_bin_directory/$command".
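For reference, a minimal sketch of what such a shared partition might look like in slurm.conf (the node names, node count, and time limit here are assumptions for illustration, not taken from any actual cluster):

```
# slurm.conf fragment (hypothetical node and partition names)
# Shared=YES:72 allows up to 72 jobs to share each node, one per CPU thread
NodeName=node[01-04] CPUs=72 State=UNKNOWN
PartitionName=cisTEM Nodes=node[01-04] Shared=YES:72 MaxTime=48:00:00 State=UP
```

On newer SLURM versions the Shared parameter has been renamed to OverSubscribe, so the equivalent line would use "OverSubscribe=YES:72".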
This would work to some degree, but it would still generate a lot of noise in the queue, and a lot of cross-talk if other cisTEM SLURM jobs are started simultaneously.
At the moment we are running cisTEM through "srun.x11", which provides a way to run GUI jobs on an allocated SLURM node. The only downside is that users need to close cisTEM and exit the terminal for the SLURM allocation to terminate, so a reasonable wall time needs to be set because of this.
I know cisTEM is not OpenMPI compatible, but it would be nice if it were possible to submit a single srun/sbatch job to a single node that uses all the threads on that node.
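As a sketch of what I have in mind, something like the following batch script, submitted once per cisTEM job (the partition name and thread count are assumptions; $command stands for the cisTEM binary invocation as in the run profile):

```
#!/bin/bash
#SBATCH --nodes=1            # keep everything on a single node
#SBATCH --ntasks=72          # one task per CPU thread on that node (assumed 72)
#SBATCH --partition=cisTEM   # hypothetical partition name
#SBATCH --output=/dev/null

# Launch all cisTEM worker processes inside this single allocation,
# so only one job appears in the SLURM queue instead of one per process.
srun --ntasks="$SLURM_NTASKS" /cisTEM_bin_directory/$command
```

This would keep the queue clean and avoid hammering the head node with many small submissions, at the cost of holding the whole node for the duration of the job.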
So in effect, the "No. Copies #" setting in the run profile was replaced by "-n #". Does that make sense?
I was hoping that others running cisTEM on SLURM could comment on these thoughts, and especially let me know if you have a better solution for running cisTEM through SLURM.