- 22 Jun, 2015 3 commits
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
- 19 Jun, 2015 2 commits
-
David Bigagli authored
-
David Bigagli authored
job data structure.
-
- 18 Jun, 2015 4 commits
-
David Bigagli authored
-
Morris Jette authored
-
Morris Jette authored
-
- 17 Jun, 2015 6 commits
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Brian Christiansen authored
Conflicts:
	contribs/README
	doc/html/crypto_plugins.shtml
	doc/html/plugins.shtml
	doc/html/preempt.shtml
	doc/html/preemption_plugins.shtml
	doc/html/priority_plugins.shtml
	doc/html/topology_plugin.shtml
	doc/man/man1/sbatch.1
	doc/man/man3/slurm_allocate_resources.3
	doc/man/man5/slurm.conf.5
-
Brian Christiansen authored
-
Morris Jette authored
-
- 15 Jun, 2015 7 commits
-
Brian Christiansen authored
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
The logic assumed the reservation had a node bitmap, which was being used to check for overlapping jobs. If the reservation has no node bitmap (e.g. a license-only reservation), an abort would result.
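A minimal sketch of the guard this fix implies, in C. All type and function names here (resv_rec_t, resv_overlaps_job) are illustrative, not Slurm's actual symbols; only the shape of the bug is taken from the message above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative reservation record: node_bitmap may legitimately be NULL,
 * e.g. for a license-only reservation that spans no nodes. */
typedef struct {
    uint64_t *node_bitmap;   /* NULL when the reservation covers no nodes */
    size_t    bitmap_words;  /* number of 64-bit words in node_bitmap */
} resv_rec_t;

/* Return true when the job's node bitmap overlaps the reservation's.
 * The NULL check up front is the point of the fix: without it, a
 * license-only reservation would be dereferenced and abort. */
static bool resv_overlaps_job(const resv_rec_t *resv,
                              const uint64_t *job_bitmap, size_t words)
{
    if (resv->node_bitmap == NULL || job_bitmap == NULL)
        return false;   /* no nodes reserved => nothing can overlap */
    if (words > resv->bitmap_words)
        words = resv->bitmap_words;
    for (size_t i = 0; i < words; i++)
        if (resv->node_bitmap[i] & job_bitmap[i])
            return true;
    return false;
}
```

The design choice is to treat "no node bitmap" as "overlaps nothing" rather than as an error, since such reservations are valid.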
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
- 12 Jun, 2015 4 commits
-
Brian Christiansen authored
Bug 1739
-
Brian Christiansen authored
Bug 1743
-
Brian Christiansen authored
-
Brian Christiansen authored
Bug 1743
-
- 11 Jun, 2015 9 commits
-
Brian Christiansen authored
Prevent double free.
-
Brian Christiansen authored
cpufreq variables weren't being initialized to NO_VAL when using the task/none plugin. This caused the conditions in cpu_freq_reset to not stop test_cpu_owner_lock from being called.
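A hedged sketch of the sentinel pattern this fix restores. The struct, field, and helper names below are hypothetical; only the role of NO_VAL as a "never set" sentinel mirrors Slurm's usage.

```c
#include <stdint.h>

#define NO_VAL 0xfffffffe   /* sentinel used here the way Slurm uses NO_VAL */

/* Hypothetical per-task CPU-frequency request. */
typedef struct {
    uint32_t cpu_freq_min;
    uint32_t cpu_freq_max;
    uint32_t cpu_freq_gov;
} cpu_freq_req_t;

/* Initialize every field to NO_VAL so later reset logic can tell
 * "never set" apart from a real request.  The bug described above was
 * the absence of this step under the task/none plugin: the fields held
 * stale values, so the reset path believed a frequency was requested. */
static void cpu_freq_req_init(cpu_freq_req_t *req)
{
    req->cpu_freq_min = NO_VAL;
    req->cpu_freq_max = NO_VAL;
    req->cpu_freq_gov = NO_VAL;
}

/* Reset path: only proceed (e.g. take the ownership lock) when some
 * field was explicitly set by the user. */
static int cpu_freq_needs_reset(const cpu_freq_req_t *req)
{
    return req->cpu_freq_min != NO_VAL ||
           req->cpu_freq_max != NO_VAL ||
           req->cpu_freq_gov != NO_VAL;
}
```

With the initializer in place, an untouched request short-circuits the reset path instead of falling through to the lock test.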
-
Brian Christiansen authored
Conflicts: src/common/cpu_frequency.c
-
Brian Christiansen authored
Conflicts: src/common/cpu_frequency.c
-
Brian Christiansen authored
-
Brian Christiansen authored
Bug 1733
-
jette authored
-
Didier GAZEN authored
In your node_mgr fix to keep rebooted nodes down (commit 9cd15dfe), you forgot to consider the case of nodes that are powered up but respond only after ResumeTimeout seconds (the maximum time permitted). Such nodes are marked DOWN (because they did not respond within ResumeTimeout seconds) but should silently become available again when ReturnToService=1, as stated in the slurm.conf manual. With your modification, when such nodes finally respond they are treated as rebooted nodes and remain in the DOWN state (with the new reason: "Node unexpectedly rebooted") even when ReturnToService=1. Correction of commit 3c2b46af.
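For reference, the two slurm.conf parameters involved in the scenario above; the values shown are examples only, not recommendations:

```
# slurm.conf (fragment; values are examples)

# Return a node marked DOWN for non-responsiveness to service
# automatically once it starts responding again.
ReturnToService=1

# Maximum time, in seconds, to wait for a node to respond after being
# powered up; nodes exceeding it are marked DOWN.
ResumeTimeout=300
```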
-
Didier GAZEN authored
-
- 10 Jun, 2015 5 commits
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
It was always failing when a node list was supplied on job submission
-
Morris Jette authored
-
Morris Jette authored
-