- 17 Jun, 2014 21 commits
-
-
Morris Jette authored
-
Morris Jette authored
Conflicts: META
-
Morris Jette authored
-
Morris Jette authored
This is due to a bug introduced in commit 83d626caa. Some configurations could result in NULL names in the node list table (e.g. hidden partitions).
-
Morris Jette authored
-
Morris Jette authored
Slowness introduced in commit 83d626ca
-
Morris Jette authored
This reverts commit 0d6a9965. That patch permitted a job with shared resources to run on the same node as a job without shared resources, but unfortunately it let those jobs share CPUs. Finer-grained sharing might be possible with extensive code changes, but that is not something to work on now.
-
jette authored
-
https://github.com/SchedMD/slurm
jette authored
-
jette authored
Without this change, the job's --shared option when used with a partition configuration of Shared=YES was not being honored by the select/cons_res or select/serial plugin.
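For context, a minimal sketch of the partition configuration this commit refers to (partition and node names here are hypothetical, not from the commit):

```
# slurm.conf fragment (hypothetical names)
# Shared=YES allows resource sharing on this partition, but each job
# must still opt in via its own shared option; this commit fixes the
# select/cons_res and select/serial plugins to honor that job option.
PartitionName=debug Nodes=tux[0-15] Shared=YES Default=YES
```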
-
jette authored
Original code was implicitly setting a job's shared field to 1 for select/cons_res.
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
Conflicts: META NEWS
-
David Bigagli authored
-
ggeorgakoudis authored
Display larger sharing values.
-
David Bigagli authored
-
Morris Jette authored
Core specialization test 17.34 failed due to bad assignment logic.
-
David Bigagli authored
-
David Bigagli authored
-
- 16 Jun, 2014 15 commits
-
-
Morris Jette authored
-
Danny Auble authored
-
Danny Auble authored
-
David Bigagli authored
including state, job IDs, and allocated nodes counter.
-
Danny Auble authored
-
Morris Jette authored
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
-
Danny Auble authored
-
Morris Jette authored
Message was possibly misleading, as specialized cores can be requested by users, and the requested counts may differ from the configured counts.
-
Danny Auble authored
-
Morris Jette authored
Provide a more precise error message when a job allocation cannot be satisfied (e.g. memory, disk, CPU count, etc.) rather than just "node configuration not available". bug 836
-
Morris Jette authored
-
Morris Jette authored
-
- 14 Jun, 2014 3 commits
-
-
jette authored
-
jette authored
If FastSchedule=0 is configured and some nodes have not registered for service (so we do not know their actual resource counts), then leave the job pending rather than rejecting it without knowing if it can run later (when the node registers and we know its specs). bug 872
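A minimal slurm.conf sketch of the situation described (node name hypothetical): with FastSchedule=0 the controller uses the resource counts each node reports at registration rather than the configured values, so an unregistered node's real capacity is unknown and the job should stay pending:

```
# slurm.conf fragment (hypothetical node name)
# FastSchedule=0: schedule using the actual resources each node
# reports at registration, not the configured values below.
FastSchedule=0
NodeName=tux[0-31] CPUs=16 RealMemory=32000
```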
-
jette authored
-
- 13 Jun, 2014 1 commit
-
-
jette authored
Conflicts: src/common/slurm_protocol_defs.c
-