- 22 Jan, 2016 1 commit
-
-
Danny Auble authored
-
- 21 Jan, 2016 11 commits
-
-
Danny Auble authored
Bug 2364
-
Danny Auble authored
Commit fa331e30 fixes this. The logic was bad to begin with... uint32_t new_cpus = detail_ptr->num_tasks / detail_ptr->cpus_per_task; The / should have been * this whole time. This was the reason we found the problem in the first place.
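To make the arithmetic concrete, here is a minimal stand-alone sketch; the struct below is a stand-in for illustration only, not the actual detail_ptr definition in the Slurm source:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the job details record referenced above. */
    struct job_details {
        uint32_t num_tasks;      /* e.g. 4 tasks          */
        uint32_t cpus_per_task;  /* e.g. 2 CPUs per task  */
    };

    int main(void)
    {
        struct job_details d = { .num_tasks = 4, .cpus_per_task = 2 };

        uint32_t wrong = d.num_tasks / d.cpus_per_task;  /* 2: undercounts the job */
        uint32_t right = d.num_tasks * d.cpus_per_task;  /* 8: total CPUs needed   */

        printf("wrong=%u right=%u\n", (unsigned) wrong, (unsigned) right);
        return 0;
    }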
-
Morris Jette authored
bug 2369
-
Gennaro Oliva authored
-
Morris Jette authored
If scancel is operating on a large number of jobs and RPC responses from the slurmctld daemon are slow, introduce a delay in sending the cancel job requests from scancel in order to reduce the load on slurmctld. bug 2256
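A minimal sketch of the throttling idea, assuming a hypothetical send_cancel_rpc() helper and arbitrary timing thresholds; this is not the scancel implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <time.h>
    #include <unistd.h>

    /* If the controller answers slowly, pause between cancel RPCs so a
     * large scancel does not flood slurmctld with requests. */
    static void cancel_jobs(const unsigned *job_ids, size_t njobs,
                            bool (*send_cancel_rpc)(unsigned job_id))
    {
        useconds_t delay_usec = 0;

        for (size_t i = 0; i < njobs; i++) {
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            (void) send_cancel_rpc(job_ids[i]);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            long rpc_msec = (t1.tv_sec - t0.tv_sec) * 1000 +
                            (t1.tv_nsec - t0.tv_nsec) / 1000000;
            if (rpc_msec > 500)       /* responses are slow...             */
                delay_usec = 100000;  /* ...so back off 100 ms per request */

            if (delay_usec)
                usleep(delay_usec);
        }
    }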
-
Morris Jette authored
If a job launch is delayed, the test was failing due to bad parsing. These lines were being interpreted as a counter followed by node names of "queued" and "has":
srun: job 1332712 queued and waiting for resources
srun: job 1332712 has been allocated resources
-
Morris Jette authored
-
Morris Jette authored
bug 2366
-
Danny Auble authored
-
Morris Jette authored
Backfill scheduling properly synchronized with Cray Node Health Check. Prior logic could result in the highest priority job being improperly postponed. bug 2350
-
Danny Auble authored
-
- 20 Jan, 2016 8 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
-
Morris Jette authored
This corrects logic from commit e5a61746 that could result in use of a NULL pointer.
-
Morris Jette authored
It was previously triggered by executing "scontrol reconfig" on a front-end system while there was a job in completing state.
-
Morris Jette authored
Properly account for memory, CPUs and GRES when slurmctld is reconfigured while there is a suspended job. Previous logic would add the CPUs, but not memory or GPUs. This would result in underflow/overflow errors in the select/cons_res plugin. bug 2353
-
Morris Jette authored
The counter is really intended to reflect the count of running or suspended jobs rather than running jobs alone. Previous logic would report an underflow for the "job_cnt_run" variable if: 1. a job is submitted, 2. the job is suspended, 3. "scontrol reconfig" is executed, 4. the job is cancelled.
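The intended semantics can be sketched as below; the enum and helper are hypothetical, not the slurmctld data structures:

    #include <stddef.h>
    #include <stdio.h>

    enum job_state { JOB_PENDING, JOB_RUNNING, JOB_SUSPENDED, JOB_COMPLETE };

    /* Count jobs that still hold resources: RUNNING and SUSPENDED both
     * qualify. Counting RUNNING alone is the underflow scenario above. */
    static unsigned count_active_jobs(const enum job_state *jobs, size_t njobs)
    {
        unsigned cnt = 0;

        for (size_t i = 0; i < njobs; i++) {
            if (jobs[i] == JOB_RUNNING || jobs[i] == JOB_SUSPENDED)
                cnt++;
        }
        return cnt;
    }

    int main(void)
    {
        enum job_state jobs[] = { JOB_RUNNING, JOB_SUSPENDED, JOB_PENDING };

        printf("active jobs: %u\n", count_active_jobs(jobs, 3));
        return 0;
    }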
-
- 19 Jan, 2016 3 commits
-
-
Morris Jette authored
Log the length of bitmaps in addition to the bits set. Also increase the string length used for logging.
-
Morris Jette authored
Previous logic would prevent allocation of sockets to a job unless the entire socket was available. If there were any specialized cores, the socket was treated as unavailable. For example, if a node had 2 sockets, then a job requesting 2 specialized cores would reserve one core on each of the two sockets, rendering the job not runnable.
-
Morris Jette authored
There was logic in sinfo's print state function that determined if the state was MIXED. This logic duplicated logic from the _query_server() function in sinfo.c and has been removed. Also note the logic was already gone from the "short state" print function (I noticed the discrepancy in the print functions, but discovered they both printed the correct state information).
-
- 17 Jan, 2016 1 commit
-
-
jette authored
Fix backfill scheduling bug which could postpone the scheduling of jobs due to avoidance of nodes in COMPLETING state. bug 2350
-
- 16 Jan, 2016 2 commits
-
-
Morris Jette authored
-
Morris Jette authored
No need to look up the Reason string for a job; we just set the value.
-
- 15 Jan, 2016 4 commits
-
-
Brian Christiansen authored
-
Brian Christiansen authored
Bug 2255
-
Morris Jette authored
-
Brian Christiansen authored
Bug 2343
-
- 14 Jan, 2016 3 commits
-
-
Morris Jette authored
-
Morris Jette authored
Fix for configuration of "AuthType=munge" and "AuthInfo=socket=..." with an alternate munge socket path. bug 2348
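For reference, a slurm.conf excerpt of the kind of setup involved; the socket path here is a hypothetical example, not a recommended location:

    # slurm.conf excerpt
    AuthType=auth/munge
    AuthInfo=socket=/path/to/alternate/munge.socket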
-
Morris Jette authored
If a node is out of memory, then the malloc performed periodically by slurmstepd may fail, killing the slurmstepd and orphaning its processes. bug 2341
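A minimal sketch of the defensive pattern, assuming a hypothetical periodic polling routine; it is not the actual slurmstepd change:

    #include <stdio.h>
    #include <stdlib.h>

    /* If a periodic allocation fails because the node is out of memory,
     * skip this cycle and retry later instead of letting the daemon die
     * and orphan the step's processes. */
    static int poll_step_accounting(void)
    {
        char *buf = malloc(64 * 1024);  /* periodic work buffer */

        if (!buf) {
            fprintf(stderr, "malloc failed, deferring accounting poll\n");
            return -1;                  /* try again on the next interval */
        }

        /* ... gather accounting data into buf ... */

        free(buf);
        return 0;
    }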
-
- 13 Jan, 2016 2 commits
-
-
Morris Jette authored
Backfill scheduling fix: If a job can't be started due to a "group" resource limit, then rather than reserving resources for it when the next job ends, don't reserve any resources for it. The problem with the original logic is that if a lot of resources are reserved for such pending jobs, then jobs further down the queue may be deferred when they really can and should be started. An ideal solution would track all of the TRES resources through time as jobs start and end, but we don't have that logic in the backfill scheduler and don't want the extra overhead there. bugs 2326 and 2282
-
Alejandro Sanchez authored
bug 2303
-
- 12 Jan, 2016 5 commits
-
-
Tim Wickberg authored
Handle unexpectedly large lines for hostlists. (Bug 2333.) While here, rework to avoid extraneous xstrcat calls by using xstrfmtcat instead of snprintf + xstrcat. Collapse the line end into its own string for readability. No performance or functional change, aside from removing possible line truncation (which will silence additional Coverity warnings). Removes a double xfree() in slurm_sprint_reservation_info().
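The idea of a formatted append into a growing string, as opposed to an snprintf into a fixed buffer that can silently truncate, can be illustrated in plain C; the helper below is not Slurm's xstrfmtcat(), and vasprintf() is a GNU/BSD extension:

    #define _GNU_SOURCE
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Append printf-style output to *str, growing the string as needed,
     * so long host lists are never silently truncated the way a
     * fixed-size snprintf buffer can truncate them. */
    static void strfmtcat(char **str, const char *fmt, ...)
    {
        va_list ap;
        char *chunk = NULL;

        va_start(ap, fmt);
        if (vasprintf(&chunk, fmt, ap) < 0)
            chunk = NULL;
        va_end(ap);
        if (!chunk)
            return;

        size_t old_len = *str ? strlen(*str) : 0;
        char *out = realloc(*str, old_len + strlen(chunk) + 1);

        if (out) {
            memcpy(out + old_len, chunk, strlen(chunk) + 1);
            *str = out;
        }
        free(chunk);
    }

    int main(void)
    {
        char *line = NULL;

        strfmtcat(&line, "ReservationName=%s ", "maint");
        strfmtcat(&line, "Nodes=%s\n", "tux[0-4095]");
        if (line)
            fputs(line, stdout);
        free(line);
        return 0;
    }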
-
Morris Jette authored
When a reservation is created or updated, compress user-provided node names using hostlist functions (e.g. translate user input of "Nodes=tux1,tux2" into "Nodes=tux[1-2]"). bug 2333
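The user-visible effect can be sketched with Slurm's hostlist API; this assumes the internal header src/common/hostlist.h, only builds inside the Slurm source tree, and the exact prototypes may differ between releases:

    #include <stdio.h>

    #include "src/common/hostlist.h"

    int main(void)
    {
        char buf[256];
        hostlist_t hl = hostlist_create("tux1,tux2");   /* user-supplied form */

        hostlist_uniq(hl);                              /* sort, drop repeats */
        hostlist_ranged_string(hl, sizeof(buf), buf);   /* -> "tux[1-2]"      */
        printf("%s\n", buf);

        hostlist_destroy(hl);
        return 0;
    }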
-
Brian Christiansen authored
Reported by CLANG. Continuation of 7eff526c.
-
Tim Wickberg authored
Match behavior of other PBS-like resource managers. Bug 2330.
-
Danny Auble authored
using TRES as a key word.
-