- 22 Oct, 2015 1 commit
Morris Jette authored
- 19 Oct, 2015 1 commit
David Bigagli authored
- 09 Oct, 2015 1 commit
David Bigagli authored
about job setup.
- 07 Oct, 2015 2 commits
Danny Auble authored
Danny Auble authored
database but the start record hadn't made it yet.
- 06 Oct, 2015 3 commits
Thomas Cadeau authored
bug 2011
Danny Auble authored
','.
Morris Jette authored
bug 1999
- 05 Oct, 2015 1 commit
jette authored
- 03 Oct, 2015 1 commit
Morris Jette authored
Don't requeue RPCs going out from slurmctld to DOWN nodes (they can generate repeating communication errors). bug 2002
- 02 Oct, 2015 1 commit
Morris Jette authored
This will only happen if a PING RPC for the node is already queued when the decision is made to power it down, and the ping then fails to get a response (since the node is already down). bug 1995
- 30 Sep, 2015 3 commits
Morris Jette authored
If a job's CPUs/task ratio is increased due to configured MaxMemPerCPU, then increase its allocated CPU count in order to enforce CPU limits. Previous logic would increase/set the cpus_per_task as needed if a job's --mem-per-cpu was above the configured MaxMemPerCPU, but NOT increase the min_cpus or max_cpus variables. This resulted in allocating the wrong CPU count.
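
For context, the adjustment described above boils down to the following arithmetic. This is a minimal illustrative sketch with hypothetical names (struct job_rec, enforce_max_mem_per_cpu), not Slurm's actual code:

    /* Illustrative sketch only -- hypothetical names, not Slurm source.
     * If the requested memory per CPU exceeds the configured MaxMemPerCPU,
     * give each task more CPUs so the per-CPU memory fits the limit, and
     * also scale the job's min/max CPU counts to match. */
    #include <stdint.h>

    struct job_rec {
        uint64_t mem_per_cpu;    /* requested --mem-per-cpu (MB) */
        uint16_t cpus_per_task;
        uint32_t min_cpus;
        uint32_t max_cpus;
    };

    static void enforce_max_mem_per_cpu(struct job_rec *job,
                                        uint64_t max_mem_per_cpu)
    {
        if (max_mem_per_cpu == 0 || job->mem_per_cpu <= max_mem_per_cpu)
            return;

        /* Round up: CPUs needed per task to dilute the per-CPU memory
         * request below the limit. */
        uint16_t ratio = (uint16_t)((job->mem_per_cpu + max_mem_per_cpu - 1) /
                                    max_mem_per_cpu);
        uint16_t old_cpt = job->cpus_per_task ? job->cpus_per_task : 1;
        uint16_t new_cpt = old_cpt * ratio;

        job->cpus_per_task = new_cpt;
        /* The bug described above: without also scaling min_cpus/max_cpus,
         * the allocation was made with the old CPU counts. */
        job->min_cpus = job->min_cpus / old_cpt * new_cpt;
        job->max_cpus = job->max_cpus / old_cpt * new_cpt;
        job->mem_per_cpu /= ratio;
    }
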
Brian Christiansen authored
Continuation of commit 1252d1a1. Bug 1938
Morris Jette authored
Requeue/hold batch job launch request if the job is already running. This is possible if a node went to DOWN state, but jobs remained active. In addition, if a prolog/epilog failed, DRAIN the node rather than setting it DOWN, which could kill jobs that could otherwise continue to run. bug 1985
- 29 Sep, 2015 3 commits
Morris Jette authored
Previous logic would not report the termination signal, only the exit code, which could be meaningless.
Brian Christiansen authored
Bug 1938
Brian Christiansen authored
Bug 1984
- 28 Sep, 2015 1 commit
Morris Jette authored
When nodes have been allocated to a job and then released by the job while resizing, this patch prevents the nodes from continuing to appear allocated and unavailable to other jobs. Requires exclusive node allocation to trigger. This prevents the previously reported failure, but a proper fix will be quite complex and delayed to the next major release of Slurm (v 16.05). bug 1851
- 23 Sep, 2015 1 commit
Danny Auble authored
The 2 came from the nodelist being "None assigned", which would be treated as 2 hosts when sent into hostlist.
- 22 Sep, 2015 2 commits
Brian Gilmer authored
If a user belongs to a group which has split entries in /etc/group, search for the user name in all groups. Amendment to commit 93ead71a. bug 1738
Danny Auble authored
Correct counting for job array limits; a job count limit underflow was possible upon cancellation of the master job record. bug 1952
- 21 Sep, 2015 2 commits
Danny Auble authored
Also add a very minor sanity check in job_mgr.c to make sure we at least have a task count. This shouldn't matter, but it's there to be as robust as possible.
Nathan Yee authored
Only 1 job was accounted for (against MaxSubmitJob) when an array was submitted.
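
In effect, the fix charges every task in the array against the submit limit rather than a single job. A minimal sketch with hypothetical names (submit_within_limit), not Slurm's actual accounting code:

    /* Illustrative sketch only -- hypothetical names, not Slurm source.
     * When a job array is submitted, each task should count against the
     * user's submit limit, not the array as a whole. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool submit_within_limit(uint32_t jobs_already_submitted,
                                    uint32_t array_task_count, /* 1 for a plain job */
                                    uint32_t max_submit_jobs)
    {
        if (max_submit_jobs == 0)       /* 0 => no limit configured */
            return true;
        return (jobs_already_submitted + array_task_count) <= max_submit_jobs;
    }
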
- 17 Sep, 2015 2 commits
David Bigagli authored
Tommi Tervo authored
- 11 Sep, 2015 3 commits
Morris Jette authored
Morris Jette authored
This prevents a step from being launched if the job is killed while the prolog is running. Reproducing the original failure requires use of srun to trigger the prolog and using scancel while that prolog is running. bug 1755
Brian Christiansen authored
And add missing documentation. Bug 1921
- 10 Sep, 2015 5 commits
Morris Jette authored
GRES were not being properly tracked for multiple simultaneous steps. A step which could have run later could be rejected as never being able to run. Replacement for commit dd842d79, which was reverted in commit 6f73812875c. bug 1925
Morris Jette authored
That commit would address a limited subset of problems and introduce other bugs rather than fixing the root of the problem.
David Bigagli authored
David Bigagli authored
Danny Auble authored
and you use all the GRES up, instead of reporting that the configuration isn't available, hold the requesting step until the GRES is available.
- 09 Sep, 2015 2 commits
Morris Jette authored
Morris Jette authored
Don't truncate task ID information in "squeue --array/-r" output. Task ID info in sview is also expanded to 64 characters (from ~16 chars).
- 08 Sep, 2015 5 commits
Morris Jette authored
Morris Jette authored
At the start of a scheduling cycle, the job's "reason" field can be cleared. If the scheduler fails to reach that job and set a new reason, the original reason was lost and state reports would show NoReason. This change saves the last reason for a job being in a pending state and reports that value to the user until there is a new valid reason for it still being in a PENDING state. bug 1919
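
Conceptually, the change keeps the previous pending reason around instead of discarding it when the field is cleared. A minimal sketch with hypothetical names (struct pending_job, begin_sched_cycle), not the actual slurmctld code:

    /* Illustrative sketch only -- hypothetical names, not Slurm source.
     * Before a scheduling cycle clears a pending job's reason, remember it;
     * report the saved value until the scheduler assigns a new one. */
    enum job_reason { WAIT_NO_REASON = 0, WAIT_PRIORITY, WAIT_RESOURCES };

    struct pending_job {
        enum job_reason reason;       /* may be cleared each cycle */
        enum job_reason last_reason;  /* saved copy added by this change */
    };

    static void begin_sched_cycle(struct pending_job *job)
    {
        if (job->reason != WAIT_NO_REASON)
            job->last_reason = job->reason;   /* save before clearing */
        job->reason = WAIT_NO_REASON;
    }

    static enum job_reason reported_reason(const struct pending_job *job)
    {
        /* If this cycle never reached the job, fall back to the saved reason. */
        return (job->reason != WAIT_NO_REASON) ? job->reason : job->last_reason;
    }
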
Morris Jette authored
Morris Jette authored
Morris Jette authored
bug 1920