- 29 Aug, 2013 1 commit
Morris Jette authored
- 28 Aug, 2013 11 commits
Danny Auble authored
Danny Auble authored
Danny Auble authored
Morris Jette authored
due to multiple free calls caused by job arrays submitted to multiple partitions. The root cause was the priority array of the original job being re-used by subsequent job array entries. A similar problem, which could be induced by the user specifying a job accounting frequency when submitting a job array, is also fixed. Bug 401.
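The double-free pattern described above can be sketched roughly as follows. This is a hypothetical simplification in C, not the actual slurmctld code; `job_record_t` and `dup_job_record` are illustrative names. A shallow copy of the job record leaves every array-task record sharing one priority array, so each record's cleanup frees the same pointer; deep-copying the array at duplication time avoids that.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for a job record with a per-partition
 * priority array (not the real slurmctld structure). */
typedef struct {
	uint32_t *priority_array;	/* one priority per partition */
	int       part_cnt;
} job_record_t;

/* Duplicate a job record for another array task. A plain structure
 * copy would share priority_array between the two records, and freeing
 * both records would then free the same pointer twice. Giving the copy
 * its own array removes the double free. */
static job_record_t *dup_job_record(const job_record_t *src)
{
	job_record_t *dst = malloc(sizeof(*dst));
	*dst = *src;			/* shallow copy first ...      */
	if (src->priority_array) {	/* ... then deep-copy the array */
		size_t len = (size_t) src->part_cnt * sizeof(uint32_t);
		dst->priority_array = malloc(len);
		memcpy(dst->priority_array, src->priority_array, len);
	}
	return dst;
}
```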
Morris Jette authored
Morris Jette authored
Morris Jette authored
Danny Auble authored
sacctmgr.
Danny Auble authored
Morris Jette authored
Some uninitialized variables, possible NULL pointer dereferences, etc. None of these is known to have occurred in practice, but these changes will bulletproof the code.
Morris Jette authored
Never observed in practice, but the "clang" analysis tool reports these as possible failures.
- 27 Aug, 2013 10 commits
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
If a reservation create request included a CoreCnt value and more nodes were required than are configured, the logic in select/cons_res could go off the end of the core_cnt array. This patch adds a check for a zero value in the core_cnt array, which terminates the user-specified array. Back-port from master of commit 211c224b.
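The fix described above hinges on the zero entry that terminates the user-specified core_cnt array. A minimal sketch of such a bounded scan, with illustrative names rather than the actual select/cons_res code:

```c
#include <stdint.h>

/* Sum the per-node core counts requested for a reservation.
 * core_cnt is a user-specified array terminated by a zero entry;
 * without the zero check, a loop driven only by the number of nodes
 * required could run off the end of the array. */
static uint32_t total_cores(const uint32_t *core_cnt, int nodes_needed)
{
	uint32_t total = 0;
	for (int i = 0; i < nodes_needed; i++) {
		if (core_cnt[i] == 0)	/* zero terminates the array */
			break;
		total += core_cnt[i];
	}
	return total;
}
```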
Morris Jette authored
- 26 Aug, 2013 1 commit
Danny Auble authored
- 24 Aug, 2013 1 commit
Danny Auble authored
- 23 Aug, 2013 2 commits
Morris Jette authored
Morris Jette authored
This is a correction of a bug introduced in commit https://github.com/SchedMD/slurm/commit/ac44db862c8d1f460e55ad09017d058942ff6499. That commit eliminated the need for squeue to read the node state information, for performance reasons (mostly for large parallel systems in which the Prolog ran squeue, generating many simultaneous RPCs and slowing the job launch process). It also assumed one CPU per node, so if a pending job specified a node count of one and a task count larger than one, squeue reported the job's node count as equal to its task count. This patch moves that calculation of a pending job's minimum node count into slurmctld, so squeue still does not need to read the node information but can report the correct node count for pending jobs with minimal overhead.
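One plausible form of that minimum-node-count calculation is sketched below. The names and signature are illustrative, not the actual slurmctld code: the reported node count must cover both the requested minimum and enough nodes to hold all requested tasks.

```c
#include <stdint.h>

/* Estimate the minimum node count to report for a pending job.
 * Hypothetical simplification: the job requests min_nodes nodes and
 * num_tasks tasks, and tasks_per_node tasks fit on one node. The
 * result is the larger of the requested minimum and the node count
 * implied by the task count (ceiling division). */
static uint32_t pending_min_nodes(uint32_t min_nodes, uint32_t num_tasks,
				  uint32_t tasks_per_node)
{
	if (tasks_per_node == 0)	/* guard: treat as 1 task per node */
		tasks_per_node = 1;
	uint32_t need = (num_tasks + tasks_per_node - 1) / tasks_per_node;
	return (need > min_nodes) ? need : min_nodes;
}
```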
- 22 Aug, 2013 8 commits
Danny Auble authored
in slurmctld.h, which is included by slurm_accounting_storage.h, which is included by slurmdbd.c, which would cause confusion at the very least.
Danny Auble authored
to avoid it thinking we don't have a cluster name.
jette authored
jette authored
Previously there was a sleep(5) during which the backup controller was non-responsive while in its startup mode or when returning from primary mode.
jette authored
This will prevent possible confusion for the backup controller when it switches from primary back to backup mode, since those pthread IDs are no longer valid. Note that thread_id_rpc could be used by the backup controller after returning to backup mode.
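A minimal sketch of clearing such stale IDs is shown below. The globals are illustrative (the real slurmctld variables may differ), and assigning 0 to a pthread_t is a Linux-specific simplification, since POSIX treats pthread_t as opaque.

```c
#include <pthread.h>

/* Hypothetical globals holding the service threads' IDs. */
static pthread_t thread_id_sig = (pthread_t) 0;
static pthread_t thread_id_rpc = (pthread_t) 0;

/* After the backup controller relinquishes primary duties its service
 * threads have exited, so the saved IDs are stale. Clearing them keeps
 * later code from signaling an ID that may have been recycled for an
 * unrelated thread. */
static void clear_thread_ids(void)
{
	thread_id_sig = (pthread_t) 0;
	thread_id_rpc = (pthread_t) 0;
}
```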
Morris Jette authored
Danny Auble authored
Danny Auble authored
they are coordinators over.
- 21 Aug, 2013 2 commits
Hongjia Cao authored
If there are completing jobs, a reconfigure will set the wrong job/node state: all nodes of a completing job will be set to allocated, and the job will not be removed even after the completing nodes are released. The state can only be restored by restarting slurmctld after the completing nodes are released.
Morris Jette authored
- 20 Aug, 2013 4 commits
Danny Auble authored
Danny Auble authored
Morris Jette authored
jette authored