- 09 Sep, 2013 2 commits
  - Danny Auble authored
  - Danny Auble authored
- 06 Sep, 2013 1 commit
  - Morris Jette authored
    Caused by allocating a single adapter per node of a specific adapter type.
- 04 Sep, 2013 1 commit
  - Morris Jette authored
    Previous logic would pick CPUs and then reject jobs that could not match GRES to the allocated CPUs. The new logic first filters out CPUs that cannot use the GRES, then picks CPUs for the job, and finally picks the GRES that best match those CPUs (see the sketch below). Bug 410.
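The commit above describes a reordering of selection steps rather than specific code. Below is a minimal sketch of that ordering only; the type and function names (cpu_info_t, gres_ok, select_cpus_for_job) are illustrative assumptions, not Slurm's actual select/cons_res internals.

```c
/* Hypothetical sketch of "filter by GRES first, then pick CPUs". */
typedef struct {
    int cpu_id;
    int gres_ok;    /* non-zero if this CPU can reach the requested GRES */
} cpu_info_t;

static int select_cpus_for_job(const cpu_info_t *cpus, int cpu_cnt,
                               int cpus_needed, int *picked)
{
    int n = 0;

    /* Step 1: consider only CPUs that can use the requested GRES.
     * Step 2: pick CPUs for the job from that filtered set. */
    for (int i = 0; (i < cpu_cnt) && (n < cpus_needed); i++) {
        if (cpus[i].gres_ok)
            picked[n++] = cpus[i].cpu_id;
    }

    /* Step 3 (not shown): pick the GRES devices closest to the picked CPUs. */
    return (n == cpus_needed) ? 0 : -1;    /* -1: request cannot be satisfied */
}
```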
- 30 Aug, 2013 1 commit
  - Morris Jette authored
    Report anything that is world writable.
- 29 Aug, 2013 3 commits
  - Danny Auble authored
    /* Current code (<= 2.1) has it so we start the new job with the next step id. This could be used when restarting to figure out which step the previous run of this job stopped on. */
  - Danny Auble authored
- 28 Aug, 2013 2 commits
  - Morris Jette authored
    due to multiple free calls caused by job arrays submitted to multiple partitions. The root cause is the job priority array of the original job being re-used by the subsequent job array entries (see the sketch below). A similar problem, which could be triggered by the user specifying a job accounting frequency when submitting a job array, is also fixed. Bug 401.
  - Danny Auble authored
    sacctmgr.
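The root cause described above (a shared array freed more than once) maps onto a general fix pattern: give each record its own copy instead of sharing a pointer. The sketch below shows only that pattern; dup_priority_array() is an assumed illustrative helper, not Slurm's job-array code.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Give each job array entry its own copy of the per-partition priority
 * array, so freeing one record cannot free another record's memory. */
static uint32_t *dup_priority_array(const uint32_t *src, size_t npart)
{
    uint32_t *copy;

    if (!src || !npart)
        return NULL;
    copy = malloc(npart * sizeof(*copy));
    if (copy)
        memcpy(copy, src, npart * sizeof(*copy));
    return copy;    /* each job record later frees only its own copy */
}
```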
- 27 Aug, 2013 1 commit
  - Morris Jette authored
    If a reservation create request included a CoreCnt value and more nodes are required than configured, the logic in select/cons_res could go off the end of the core_cnt array. This patch adds a check for a zero value in the core_cnt array, which terminates the user-specified array (see the sketch below). Back-ported from master commit 211c224b.
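A minimal sketch of the guard described above, assuming only that core_cnt is a zero-terminated, user-supplied array; the function name is illustrative and not the actual select/cons_res code.

```c
#include <stdint.h>

/* Stop at the first zero entry instead of walking past the end of the
 * user-specified core_cnt array when more nodes are needed than given. */
static uint32_t total_requested_cores(const uint32_t *core_cnt,
                                      int nodes_needed)
{
    uint32_t total = 0;

    for (int i = 0; i < nodes_needed; i++) {
        if (core_cnt[i] == 0)   /* zero terminates the user-specified list */
            break;
        total += core_cnt[i];
    }
    return total;
}
```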
- 24 Aug, 2013 1 commit
  - Danny Auble authored
- 23 Aug, 2013 1 commit
  - Morris Jette authored
    This corrects a bug introduced in commit https://github.com/SchedMD/slurm/commit/ac44db862c8d1f460e55ad09017d058942ff6499. That commit eliminated the need for squeue to read node state information, for performance reasons (mostly on large parallel systems where the Prolog ran squeue, generating many simultaneous RPCs and slowing down job launch). It also assumed one CPU per node, so if a pending job specified a node count of 1 and a task count larger than one, squeue reported the job's node count as equal to the task count. This patch moves the calculation of a pending job's minimum node count into slurmctld, so squeue still does not need to read node information but can report the correct node count for pending jobs with minimal overhead (see the sketch below).
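The patch itself is not reproduced here; the following is only a rough sketch of the kind of controller-side calculation described, with an assumed formula and illustrative names (pending_min_nodes() is not Slurm's API).

```c
#include <stdint.h>

/* Assumed illustration: derive a pending job's minimum node count in the
 * controller so clients such as squeue can report it without reading node
 * state.  With -N1 --tasks-per-node=2 and two tasks this yields 1, not
 * the task count. */
static uint32_t pending_min_nodes(uint32_t min_nodes_requested,
                                  uint32_t num_tasks,
                                  uint32_t tasks_per_node)
{
    uint32_t from_tasks = 1;

    if (tasks_per_node > 0)
        from_tasks = (num_tasks + tasks_per_node - 1) / tasks_per_node;

    /* Honor the user's explicit node count; never report the task count. */
    return (min_nodes_requested > from_tasks) ? min_nodes_requested
                                              : from_tasks;
}
```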
- 22 Aug, 2013 2 commits
  - Danny Auble authored
    to avoid it thinking we don't have a cluster name.
  - Danny Auble authored
- 21 Aug, 2013 1 commit
  - Hongjia Cao authored
    If there are completing jobs, a reconfigure will set the wrong job/node state: all nodes of the completing job will be set allocated, and the job will not be removed even after the completing nodes are released. The state can only be restored by restarting slurmctld after the completing nodes are released.
- 20 Aug, 2013 1 commit
  - Danny Auble authored
- 17 Aug, 2013 1 commit
  - Morris Jette authored
- 16 Aug, 2013 1 commit
  - Danny Auble authored
- 15 Aug, 2013 4 commits
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
    could end up before the job started. Bug 371.
  - Danny Auble authored
- 14 Aug, 2013 4 commits
  - Morris Jette authored
    This avoids waiting for the job's initiation to fail.
  - Morris Jette authored
    Fix job state recovery logic in which a job's accounting frequency was not set, resulting in a value of 65534 seconds being used (the equivalent of NO_VAL in uint16_t), which could cause the job to be requeued or aborted.
  - David Bigagli authored
  - Morris Jette authored
    Problem reported by BYU. slurm.conf included a file one byte in length. The logic created a buffer one byte long and used fgets() to read the file, but fgets() reads one byte less than the buffer size in order to include a trailing '\0', so it failed to read the file (see the sketch below).
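The fgets() behavior described above is standard C and easy to demonstrate. The sketch below is illustrative only (the file name and sizing are assumptions), not the actual slurm.conf parser.

```c
#include <stdio.h>
#include <stdlib.h>

/* fgets() stores at most size - 1 bytes plus a trailing '\0', so a buffer
 * sized exactly to the file length reads one byte short.  Allocate one
 * extra byte and pass the full buffer size. */
int main(void)
{
    FILE *fp = fopen("slurm.conf", "r");    /* e.g. a one-byte file */
    long len;
    char *buf;

    if (!fp)
        return 1;
    fseek(fp, 0, SEEK_END);
    len = ftell(fp);
    rewind(fp);

    buf = malloc(len + 1);                  /* room for the trailing '\0' */
    if (buf && fgets(buf, (int)(len + 1), fp))
        printf("read: %s\n", buf);

    free(buf);
    fclose(fp);
    return 0;
}
```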
- 13 Aug, 2013 3 commits
  - Morris Jette authored
  - jette authored
    This problem was reported by Harvard University and could be reproduced with a command line of "srun -N1 --tasks-per-node=2 -O id". With other job types, the error message could be logged many times for each job. This change logs the error once per job, and only if the job request does not include the -O/--overcommit option (see the sketch below).
  - Danny Auble authored
    was down (slurmctld not running) during that time period.
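A minimal sketch of the "log once per job, unless -O/--overcommit was requested" behavior described above; the struct and function names are hypothetical, not Slurm's.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-job record: log the CPU-count warning at most once,
 * and not at all when the job was submitted with -O/--overcommit. */
struct job_sketch {
    unsigned int job_id;
    bool overcommit;            /* -O/--overcommit requested */
    bool cpu_warning_logged;    /* set after the first warning */
};

static void warn_cpu_count_once(struct job_sketch *job)
{
    if (job->overcommit || job->cpu_warning_logged)
        return;
    fprintf(stderr, "job %u: task count exceeds allocated CPUs\n",
            job->job_id);
    job->cpu_warning_logged = true;
}
```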
- 09 Aug, 2013 1 commit
  - Danny Auble authored
    version of Slurm.
- 07 Aug, 2013 1 commit
  - Danny Auble authored
- 06 Aug, 2013 1 commit
  - Danny Auble authored
    of at multifactor poll.
- 01 Aug, 2013 1 commit
  - David Bigagli authored
    to drain the node and log an error in the slurmd log file.
- 31 Jul, 2013 1 commit
  - David Bigagli authored
- 30 Jul, 2013 1 commit
  - Thomas Cadeau authored
- 26 Jul, 2013 2 commits
  - David Bigagli authored
  - Morris Jette authored
- 25 Jul, 2013 1 commit
  - Alexander Bersenev authored
    The gres_alloc, gres_req, and gres_used fields were empty if the job was not started immediately. Bug 380.
- 23 Jul, 2013 1 commit
  - David Bigagli authored