- 30 Jun, 2015 1 commit
-
-
Danny Auble authored
and test21.* updated to use them.
-
- 29 Jun, 2015 1 commit
-
-
Nathan Yee authored
Bug 1745
-
- 25 Jun, 2015 3 commits
-
-
David Bigagli authored
-
Danny Auble authored
ESLURM_DB_CONNECTION when in error.
-
Morris Jette authored
-
- 24 Jun, 2015 2 commits
-
-
Morris Jette authored
-
David Bigagli authored
-
- 23 Jun, 2015 2 commits
-
-
David Bigagli authored
-
Morris Jette authored
-
- 22 Jun, 2015 9 commits
-
-
Morris Jette authored
Updates of existing bluegene advanced reservations did not work at all. Some multi-core configurations resulted in an abort because core_bitmaps were created for the reservation with only one bit per node rather than one bit per core. These bugs were introduced in commit 5f258072
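For context, a minimal sketch of the sizing error described above, in plain C with hypothetical names (not the actual Slurm bitstring code): the reservation's core bitmap needs one bit for every core across its nodes, not one bit per node.

    #include <stdlib.h>

    /* Hypothetical illustration only: build a reservation core map sized
     * per core, given node_cnt nodes and cores_per_node[] cores on each. */
    unsigned char *build_core_bitmap(int node_cnt, const int *cores_per_node,
                                     int *total_cores_out)
    {
        int total_cores = 0;
        for (int i = 0; i < node_cnt; i++)
            total_cores += cores_per_node[i];

        /* Bug pattern described in the commit: allocating node_cnt bits
         * (one per node) here leaves later per-core bit operations
         * out of range. One bit per core is required. */
        size_t nbytes = (total_cores + 7) / 8;
        unsigned char *bitmap = calloc(nbytes, 1);

        if (bitmap && total_cores_out)
            *total_cores_out = total_cores;
        return bitmap;
    }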
-
Morris Jette authored
-
Morris Jette authored
-
David Bigagli authored
-
Thomas Cadeau authored
-
David Bigagli authored
-
Moe Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
- 19 Jun, 2015 2 commits
-
-
David Bigagli authored
-
David Bigagli authored
job data structure.
-
- 18 Jun, 2015 3 commits
-
-
David Bigagli authored
-
-
Morris Jette authored
-
- 17 Jun, 2015 3 commits
-
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Morris Jette authored
-
- 15 Jun, 2015 2 commits
-
-
Brian Christiansen authored
-
Morris Jette authored
The logic assumed the reservation had a node bitmap, which was used to check for overlapping jobs. If there is no node bitmap (e.g. a license-only reservation), an abort would result.
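A minimal sketch of the guard this fix implies, with hypothetical types and a placeholder bitmaps_overlap() helper rather than the real Slurm code: skip the overlap test entirely when the reservation carries no node bitmap.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical structures for illustration only. */
    struct resv_rec { void *node_bitmap; /* NULL for a license-only reservation */ };
    struct job_rec  { void *node_bitmap; };

    /* Assumed helper: tests whether two node bitmaps share any bit. */
    extern bool bitmaps_overlap(void *a, void *b);

    static bool job_overlaps_resv(const struct job_rec *job,
                                  const struct resv_rec *resv)
    {
        /* Guard: a license-only reservation has no node bitmap, so there
         * is nothing to overlap with; dereferencing it would abort. */
        if (!resv->node_bitmap || !job->node_bitmap)
            return false;
        return bitmaps_overlap(job->node_bitmap, resv->node_bitmap);
    }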
-
- 12 Jun, 2015 2 commits
-
-
Brian Christiansen authored
Bug 1739
-
Brian Christiansen authored
Bug 1743
-
- 11 Jun, 2015 5 commits
-
-
Brian Christiansen authored
Prevent double free.
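The message is terse; as a generic illustration of the usual pattern behind such a fix (plain C, not the actual commit), clearing the pointer after freeing turns an accidental second free of the same variable into a harmless no-op.

    #include <stdlib.h>

    /* Illustration only: free and null in one step. */
    #define FREE_NULL(ptr) do { free(ptr); (ptr) = NULL; } while (0)

    static void demo(void)
    {
        char *buf = malloc(64);
        FREE_NULL(buf);  /* frees and nulls buf */
        FREE_NULL(buf);  /* free(NULL): no-op, no double free */
    }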
-
Brian Christiansen authored
-
Brian Christiansen authored
Bug 1733
-
Didier GAZEN authored
In your node_mgr fix to keep rebooted nodes down (commit 9cd15dfe), you forgot to consider the case of nodes that are powered up but respond only after ResumeTimeout seconds (the maximum time permitted). Such nodes are marked DOWN (because they did not respond within ResumeTimeout seconds) but should then silently become available again when ReturnToService=1 (as stated in the slurm.conf manual). With your modification, when such nodes finally respond they are treated as rebooted nodes and remain in the DOWN state (with the new reason "Node unexpectedly rebooted") even when ReturnToService=1! Correction of commit 3c2b46af
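A hedged sketch of the requested behaviour, with hypothetical names and flags (this is neither the node_mgr code nor the actual patch): a node marked DOWN only because it exceeded ResumeTimeout should return to service when ReturnToService=1, while a genuinely unexpected reboot stays DOWN.

    #include <stdbool.h>

    /* Hypothetical illustration of the intended decision, not Slurm source. */
    enum node_state { NODE_UP, NODE_DOWN };

    struct node_info {
        enum node_state state;
        bool down_due_to_resume_timeout; /* DOWN only for exceeding ResumeTimeout */
        bool unexpected_reboot;          /* boot time changed with no scheduled reboot */
    };

    /* Called when a node that was being powered up finally responds. */
    static void node_now_responding(struct node_info *node, int return_to_service)
    {
        if (node->state != NODE_DOWN)
            return;

        /* A late ResumeTimeout responder is not an unexpected reboot. */
        if (node->down_due_to_resume_timeout && return_to_service == 1) {
            node->state = NODE_UP;
            node->down_due_to_resume_timeout = false;
        }
        /* A real unexpected reboot remains DOWN regardless of ReturnToService. */
    }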
-
Didier GAZEN authored
-
- 10 Jun, 2015 3 commits
-
-
Morris Jette authored
-
Didier GAZEN authored
In your node_mgr fix to keep rebooted nodes down (commit 9cd15dfe), you forgot to consider the case of nodes that are powered up but respond only after ResumeTimeout seconds (the maximum time permitted). Such nodes are marked DOWN (because they did not respond within ResumeTimeout seconds) but should then silently become available again when ReturnToService=1 (as stated in the slurm.conf manual). With your modification, when such nodes finally respond they are treated as rebooted nodes and remain in the DOWN state (with the new reason "Node unexpectedly rebooted") even when ReturnToService=1! My patch to obtain the correct behaviour:
-
Morris Jette authored
Equivalent fix to e1a00772, but for select/serial rather than select/cons_res.
-
- 09 Jun, 2015 2 commits
-
-
David Bigagli authored
-
Morris Jette authored
1. I submit a first job that uses 1 GPU:
   $ srun --gres gpu:1 --pty bash
   $ echo $CUDA_VISIBLE_DEVICES
   0
2. While the first one is still running, a 2-GPU job asking for 1 task per node waits (and I don't really understand why):
   $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
   srun: job 2390816 queued and waiting for resources
3. Whereas a 2-GPU job requesting 1 core per socket (so just 1 socket) actually gets GPUs allocated from two different sockets!
   $ srun -n 1 --cores-per-socket=1 --gres=gpu:2 -p testk --pty bash
   $ echo $CUDA_VISIBLE_DEVICES
   1,2
With this change #2 works the same way as #3.
Bug 1725
-