- 29 Apr, 2016 4 commits
-
-
Danny Auble authored
Backport of commit cca1616b from 16.05
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
- 28 Apr, 2016 3 commits
-
-
Artem Polyakov authored
See bug 2672 for details
-
Tim Wickberg authored
-
Danny Auble authored
of Slurm.
-
- 27 Apr, 2016 2 commits
-
-
Tim Wickberg authored
The compiler errors out, preventing these 13 from running, unless the implied int return type for main is fixed.
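For context, a minimal sketch of the pre-C99 style that modern compilers reject (the file contents are illustrative, not the actual tests):

    /* Broken pre-C99 style (implicit int) that newer compilers reject:
     *     main() { return 0; }
     * Fixed form with an explicit return type: */
    int main(void)
    {
        return 0;
    }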
-
Morris Jette authored
Avoid the error message "Requested cpu_bind option requires entire node to be allocated; disabling affinity" being generated in some cases where the task/affinity and task/cgroup plugins are used together.
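The affected setup stacks both task plugins; a minimal slurm.conf excerpt (other settings omitted):

    # slurm.conf - run both task plugins together; this change suppresses
    # the spurious cpu_bind warning for this combination
    TaskPlugin=task/affinity,task/cgroup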
-
- 26 Apr, 2016 2 commits
-
-
Danny Auble authored
restart of the slurmctld.
-
Sam Gallop authored
Otherwise a miscalculated limit will lead to job cancellation even when the job is well inside the allocated amount. Bug 2660.
-
- 23 Apr, 2016 1 commit
-
-
Tim Wickberg authored
in the slurmdbd segfaulting. Bug 2656
-
- 20 Apr, 2016 1 commit
-
-
Morris Jette authored
burst_buffer/cray - Don't call Datawarp "paths" function if script includes only create or destroy of persistent burst buffer. Some versions of Datawarp software return an error for such scripts, causing the job to be held. bug 2624
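A script of the affected kind contains only persistent-buffer directives, for example (buffer name and size are illustrative; directive syntax per Slurm's Cray burst buffer support):

    #!/bin/bash
    #BB create_persistent name=alpha capacity=32GB access=striped type=scratch
    # No per-job buffer or stage-in/stage-out requests, so there are no
    # paths to resolve; some Datawarp versions returned an error here.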
-
- 13 Apr, 2016 2 commits
-
-
Morris Jette authored
-
Danny Auble authored
that wasn't set up correctly.
-
- 12 Apr, 2016 2 commits
-
-
Morris Jette authored
power/cray - Fix bug introduced in 15.08.10 preventing operation in many cases. bug 2628
-
Morris Jette authored
-
- 11 Apr, 2016 4 commits
-
-
Morris Jette authored
burst_buffer/cray - Fix for scripts that create or delete a persistent buffer failing the "paths" operation and holding the job. bug 2624
-
Danny Auble authored
and it doesn't meet basic requirements.
-
Morris Jette authored
burst_buffer/cray - Decrement job's prolog_running counter if pre_run fails. bug 2621
-
Morris Jette authored
If a job is no longer in configuring state, then clear the prolog_running counter on slurmctld restart or reconfigure. bug 2621
-
- 09 Apr, 2016 1 commit
-
-
Morris Jette authored
When determining when a pending job will be able to start, remove multiple running jobs that all end at about the same time before testing, rather than testing after removing each running job and trying to schedule the pending jobs. This reduces the number of calls to the time-consuming job placement logic.
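A sketch of the batching idea in C (the job list and placement test are simplified stand-ins, not slurmctld code):

    #include <stdio.h>

    #define WINDOW 60 /* seconds: jobs ending this close together form one batch */

    /* Stand-in for the expensive job placement test. */
    static int placement_test(int removed)
    {
        printf("placement test after removing %d job(s)\n", removed);
        return 0; /* pretend the pending job still does not fit */
    }

    int main(void)
    {
        int end_time[] = {100, 110, 130, 400, 410, 900}; /* sorted job end times */
        int n = sizeof(end_time) / sizeof(end_time[0]);
        int i = 0;

        while (i < n) {
            /* Remove every job ending within WINDOW of the next end time,
             * then run ONE placement test for the whole batch (3 tests
             * here, instead of 6 with the one-job-at-a-time approach). */
            int batch_start = end_time[i], removed = 0;
            while (i < n && end_time[i] <= batch_start + WINDOW) {
                i++;
                removed++;
            }
            placement_test(removed);
        }
        return 0;
    }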
-
- 07 Apr, 2016 2 commits
-
-
Sami Ilvonen authored
-
Morris Jette authored
Fix for job "--contiguous" option that could cause job allocation/launch failure or slurmctld crash. bug 2573
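Usage of the option in question, for reference (the script name is a placeholder):

    # Request 4 nodes that must be contiguous; before this fix the option
    # could trigger the allocation/launch failure or crash noted above.
    sbatch --contiguous -N4 job.sh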
-
- 06 Apr, 2016 5 commits
-
-
Morris Jette authored
-
Danny Auble authored
This reverts commit f559a55c.
-
Danny Auble authored
constraints mattered in a job. Details include: a job doesn't request memory, but the system is running with CR_*MEMORY and no default memory limit, and the job requests nodes with features of different memory sizes. Previously the order of constraints mattered: the smaller-memory node had to be requested first or the job would fail. Bug 2608
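An illustrative request of the affected shape (the feature names are made up; the multiple-count constraint syntax is from the sbatch man page):

    # Two nodes with different memory-size features, no explicit --mem.
    # Before the fix, listing the larger-memory feature first could
    # make the job fail.
    sbatch -N2 --constraint="[smallmem*1&bigmem*1]" job.sh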
-
Morris Jette authored
Previous logic would get an account and/or QOS time limit and use that value to overwrite the incoming RPC's NO_VAL value, which would change a job's time limit when changing an unrelated field (e.g. priority, QOS, etc.). bug 2610
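A minimal sketch of the sentinel check involved (NO_VAL matches the definition in Slurm's slurm.h; everything else is illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define NO_VAL (0xfffffffe) /* Slurm's "field not set in this RPC" sentinel */

    static void update_time_limit(uint32_t *job_limit, uint32_t rpc_limit,
                                  uint32_t qos_limit)
    {
        /* Buggy behavior: the account/QOS limit overwrote the RPC's NO_VAL,
         * so updating an unrelated field also changed the time limit.
         * Correct behavior: leave the field alone when the RPC says NO_VAL. */
        if (rpc_limit == NO_VAL)
            return;
        *job_limit = (rpc_limit < qos_limit) ? rpc_limit : qos_limit;
    }

    int main(void)
    {
        uint32_t limit = 120; /* minutes */
        update_time_limit(&limit, NO_VAL, 60); /* unrelated update */
        printf("time_limit = %u\n", limit);    /* stays 120 */
        return 0;
    }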
-
Danny Auble authored
-
- 05 Apr, 2016 1 commit
-
-
Morris Jette authored
Fix backfill scheduler race condition that could cause an invalid pointer in the select/cons_res plugin. Bug introduced in 15.08.9, commit efd9d35e. The scenario is as follows:
1. Backfill scheduler is running, then releases its locks.
2. Main scheduling loop starts a job "A".
3. Backfill scheduler resumes, finds job "A" in its queue, and resets its partition pointer.
4. Job "A" completes and tries to remove its resource allocation record from the select/cons_res data structure, but fails to find it because it is looking in the table for the wrong partition.
5. Job "A" record gets purged from slurmctld.
6. The select/cons_res plugin attempts to operate on the resource allocation data structure, finds a pointer into the now-purged data structure of job "A", and aborts or gets a SEGV.
Bug 2603
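A self-contained sketch of the defensive pattern for this class of race: re-look the record up by ID after reacquiring the lock instead of reusing a cached pointer (all names are illustrative, not slurmctld internals):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

    struct job_record { unsigned id; int valid; };
    static struct job_record jobs[] = { {42, 1} };

    /* Re-lookup by ID; returns NULL if the record has been purged. */
    static struct job_record *find_job(unsigned id)
    {
        for (unsigned i = 0; i < sizeof(jobs) / sizeof(jobs[0]); i++)
            if (jobs[i].valid && jobs[i].id == id)
                return &jobs[i];
        return NULL;
    }

    static void backfill_pass(unsigned job_id)
    {
        pthread_mutex_lock(&sched_lock);
        /* ...first part of the backfill work... */
        pthread_mutex_unlock(&sched_lock); /* main scheduler may run here */

        pthread_mutex_lock(&sched_lock);
        struct job_record *job = find_job(job_id); /* never trust a cached pointer */
        if (!job)
            puts("job started/purged while the lock was released; skip it");
        else
            printf("job %u still pending; continue\n", job->id);
        pthread_mutex_unlock(&sched_lock);
    }

    int main(void)
    {
        backfill_pass(42);
        return 0;
    }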
-
- 04 Apr, 2016 2 commits
-
-
Danny Auble authored
-
Danny Auble authored
canceled while launching.
-
- 02 Apr, 2016 2 commits
-
-
Morris Jette authored
-
Danny Auble authored
-
- 31 Mar, 2016 2 commits
-
-
Morris Jette authored
Power/cray: Don't specify a NID list to Cray APIs. If any of those nodes was not in a ready state, the API returned an error for ALL nodes rather than valid data for the nodes in a ready state. bug 2332
-
Matthieu Hautreux authored
and retries are done, making the error message a little misleading.
-
- 30 Mar, 2016 2 commits
-
-
Danny Auble authored
rollup would effectively never run again. bug 2575 and sort of bug 2596
-
Morris Jette authored
-
- 28 Mar, 2016 2 commits
-
-
Morris Jette authored
There was a subtle bug in how tasks were bound to CPUs which could result in an "infinite loop" error. The problem was that various socket/core/thread calculations were based upon the resources allocated to a step rather than all resources on the node, and rounding errors could occur. Consider for example a node with 2 sockets, 6 cores per socket and 2 threads per core. On the idle node, a job requesting 14 CPUs is submitted. That job would be allocated 4 cores on the first socket and 3 cores on the second socket. The old logic would get the number of sockets for the job at 2 and the number of cores at 7, then calculate the number of cores per socket at 7/2 or 3 (rounding down to an integer). The logic laying out tasks would bind the first 3 cores on each socket to the job, then not find any remaining cores, report the "infinite loop" error to the user, and run the job without one of the expected cores. The problem gets even worse when there are some allocated cores on a node. In a more extreme case, a job might be allocated 6 cores on one socket and 1 core on a second socket. In that case, 3 of that job's cores would be unused. bug 2502
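The rounding error can be reproduced with a few lines of arithmetic (self-contained; the numbers match the example above):

    #include <stdio.h>

    int main(void)
    {
        /* Job on an idle 2-socket, 6-core/socket, 2-thread/core node,
         * allocated 4 cores on socket 0 and 3 cores on socket 1. */
        int alloc_cores[2] = {4, 3};
        int sockets = 2;
        int total = alloc_cores[0] + alloc_cores[1];   /* 7 cores */

        /* Old logic: cores-per-socket derived from the job's allocation. */
        int cores_per_socket = total / sockets;        /* 7/2 -> 3 (rounds down) */

        int bound = 0;
        for (int s = 0; s < sockets; s++) {
            /* Binds at most 3 cores per socket, so one allocated core on
             * socket 0 is never found: the "infinite loop" error. */
            int use = alloc_cores[s] < cores_per_socket ?
                      alloc_cores[s] : cores_per_socket;
            bound += use;
        }
        printf("bound %d of %d allocated cores\n", bound, total); /* 6 of 7 */
        return 0;
    }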
-
Morris Jette authored
This is a revision to commit 1ed38f26. The root problem is that a pthread is passed an argument which is a pointer to a variable on the stack. If that variable is overwritten, the signal number received will be garbage, and that bad signal number will be interpreted by srun to possibly abort the request.
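A minimal illustration of the root problem and the usual fix: pass the signal number by value rather than as a pointer to a stack variable (names are illustrative, not the srun code):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdint.h>

    static void *handle_signal(void *arg)
    {
        /* Safe pattern: the signal number travels by value inside the
         * pointer, so later writes to the caller's stack cannot garble it. */
        int signo = (int)(intptr_t)arg;
        printf("handling signal %d\n", signo);
        return NULL;
    }

    int main(void)
    {
        int sig = 2; /* e.g. SIGINT */
        pthread_t tid;

        /* Buggy pattern (the bug described above):
         *     pthread_create(&tid, NULL, handle_signal, &sig);
         * 'sig' may be overwritten before the thread dereferences it. */
        pthread_create(&tid, NULL, handle_signal, (void *)(intptr_t)sig);
        pthread_join(tid, NULL);
        return 0;
    }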
-