- 15 May, 2013 10 commits
  - Danny Auble authored
  - Danny Auble authored
  - Morris Jette authored
    The fake munge signature was not null-terminated, so unallocated memory would be referenced. Fix for bug 289.
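For context, a minimal C sketch of this bug class, with an illustrative buffer name and length rather than the actual Slurm code: copying a fixed-length signature without appending a terminating NUL means any later string operation reads past the copied bytes.

```c
#include <string.h>

#define SIG_LEN 16	/* illustrative length, not Slurm's */

/* Buggy: dst holds SIG_LEN bytes with no terminator, so a later
 * strlen()/strcmp() walks off the end into unallocated memory. */
static void set_fake_sig_buggy(char dst[SIG_LEN], const char *src)
{
	memcpy(dst, src, SIG_LEN);
}

/* Fixed: reserve one extra byte and NUL-terminate explicitly. */
static void set_fake_sig_fixed(char dst[SIG_LEN + 1], const char *src)
{
	memcpy(dst, src, SIG_LEN);
	dst[SIG_LEN] = '\0';
}
```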
  - Danny Auble authored
  - Danny Auble authored
    Add code to handle profiling of tasks.
  - Danny Auble authored
    … are called.
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
- 14 May, 2013 6 commits
  - David Bigagli authored
    Call the correct plugin fini function.
  - jette authored
    Conflicts: config.h.in
  - Morris Jette authored
  - Morris Jette authored
  - jette authored
    Without this change, when the slurmctld daemon is reconfigured while jobs are in the completing state, every node allocated to such a job is sent the TERMINATE_JOB RPC again. After this change, only nodes which have not yet processed the TERMINATE_JOB RPC are sent another one.
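A minimal, self-contained sketch of the idea, using toy types in place of slurmctld's actual job and node records (the names here are hypothetical): track which nodes have acknowledged TERMINATE_JOB and, on reconfigure, resend only to the rest.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a completing job's per-node state. */
struct completing_job {
	int job_id;
	int node_cnt;
	const bool *allocated;	/* node belongs to this allocation */
	const bool *terminated;	/* node already processed TERMINATE_JOB */
};

/* On reconfigure, resend TERMINATE_JOB only where it is still pending. */
static void resend_terminate_rpcs(const struct completing_job *job)
{
	for (int i = 0; i < job->node_cnt; i++) {
		if (!job->allocated[i] || job->terminated[i])
			continue;	/* not ours, or already done */
		printf("job %d: resend TERMINATE_JOB to node %d\n",
		       job->job_id, i);
	}
}
```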
  - David Bigagli authored
    … gpu_device global data structure.
- 13 May, 2013 12 commits
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored
    Downing the node would kill all jobs allocated to it, which is very bad on a system such as BlueGene.
  - Danny Auble authored
  - Tommi T authored
  - David Bigagli authored
    … multiple memory operations which are not necessary.
  - Morris Jette authored
    If slurm.conf defines fewer of a given GRES than gres.conf does, and gres.conf associates specific CPUs with that GRES, this patch prevents bit_overlap from being called with bitmaps of different sizes, which would cause an abort.
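A self-contained sketch of the guard, with a toy bitmap type standing in for Slurm's bitstr_t (bit_overlap and bit_size are real bitstring API names, but the placement of the check shown here is an assumption): compare sizes before testing overlap rather than letting the overlap test abort.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy bitmap: up to 64 bits, enough to illustrate the check. */
struct bitmap {
	size_t nbits;
	unsigned long long bits;
};

/* Return true only if the bitmaps are the same size AND share a bit;
 * mismatched sizes are treated as "no overlap" instead of aborting. */
static bool bit_overlap_guarded(const struct bitmap *a,
				const struct bitmap *b)
{
	if (a->nbits != b->nbits)
		return false;
	return (a->bits & b->bits) != 0;
}
```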
  - Danny Auble authored
  - Martin Perry authored
  - Morris Jette authored
    Also modify the epilog script to use this new output field.
- 11 May, 2013 5 commits
  - jette authored
    Timing changes due to faster job step launch.
  - jette authored
  - Morris Jette authored
  - Morris Jette authored
    This can be especially useful for scheduling GPUs. For example, a node can be associated with two Slurm partitions (e.g., "cpu" and "gpu"), and the "cpu" partition/queue could be limited to only a subset of the node's CPUs, ensuring that one or more CPUs remain available to jobs in the "gpu" partition/queue.
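This description appears to match Slurm's MaxCPUsPerNode partition parameter; assuming that is the feature in question, a hypothetical slurm.conf excerpt would look like this (the node and partition names are made up):

```
# node01 has 16 CPUs and 2 GPUs.
NodeName=node01 CPUs=16 Gres=gpu:2
# Jobs in "cpu" may use at most 12 of node01's CPUs, leaving at
# least 4 CPUs for jobs in the "gpu" partition.
PartitionName=cpu Nodes=node01 MaxCPUsPerNode=12
PartitionName=gpu Nodes=node01
```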
  - David Bigagli authored
- 10 May, 2013 7 commits
  - Morris Jette authored
    This happens when a job has multiple partitions and priority/multifactor is NOT in use.
  - Morris Jette authored
  - Hongjia Cao authored
    Fix for the following problem: if a node is excised from a job and a reconfiguration (e.g., a partition update) is done while the job is still running, the node is left in the idle state but is no longer available until the next reconfiguration or restart of slurmctld after the job finishes.
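A minimal sketch of one way to repair such state, with toy types rather than slurmctld's real structures (this illustrates the idea only, not the actual patch): after a reconfiguration, recompute each node's availability from the current job allocations instead of carrying over stale per-node flags.

```c
#include <stdbool.h>

/* Toy node record, not Slurm's node_record_t. */
struct node_record_toy {
	bool allocated;	/* derived: some running job uses this node */
	bool available;	/* node may be scheduled again */
};

/* Recompute availability after reconfiguration: a node excised from a
 * still-running job is no longer allocated, so mark it available. */
static void rebuild_node_state(struct node_record_toy *nodes, int node_cnt,
			       const bool *in_use_by_running_jobs)
{
	for (int i = 0; i < node_cnt; i++) {
		nodes[i].allocated = in_use_by_running_jobs[i];
		nodes[i].available = !nodes[i].allocated;
	}
}
```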
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored