- 11 May, 2013 4 commits
-
jette authored
Timing changes due to faster job step launch
-
jette authored
-
Morris Jette authored
-
Morris Jette authored
This can be especially useful for scheduling GPUs. For example, a node can be associated with two Slurm partitions (e.g., "cpu" and "gpu"), and the "cpu" partition/queue could be limited to a subset of the node's CPUs, ensuring that one or more CPUs remain available to jobs in the "gpu" partition/queue.
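The partition setup described above can be sketched in slurm.conf. This is a minimal illustrative fragment, not taken from the commit itself: the node name, CPU counts, and GRES configuration are assumptions, and the per-partition CPU cap shown here uses the MaxCPUsPerNode partition parameter.

```
# Hypothetical slurm.conf fragment: one node shared by two partitions.
# Node and partition names are illustrative assumptions.
NodeName=node01 CPUs=16 Gres=gpu:2

# The "cpu" partition may use at most 14 of the node's 16 CPUs,
# leaving at least 2 CPUs free for jobs in the "gpu" partition.
PartitionName=cpu Nodes=node01 MaxCPUsPerNode=14
PartitionName=gpu Nodes=node01
```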
-
- 10 May, 2013 24 commits
-
Morris Jette authored
This happens when a job specifies multiple partitions and priority/multifactor is NOT in use.
-
Morris Jette authored
-
Hongjia Cao authored
Fix for the following problem: if a node is excised from a job and a reconfiguration (e.g., a partition update) is done while the job is still running, the node is left in the idle state but remains unavailable until the next reconfiguration or restart of slurmctld after the job finishes.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
to be edited more.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
to be running in the calling program.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Rod Schultz authored
-
David Bigagli authored
-
- 09 May, 2013 1 commit
-
David Bigagli authored
-
- 08 May, 2013 4 commits
-
David Bigagli authored
-
David Bigagli authored
-
jette authored
-
Danny Auble authored
the node tab and we didn't notice.
-
- 07 May, 2013 4 commits
-
David Bigagli authored
-
David Bigagli authored
which reads the array boundary.
-
David Bigagli authored
-
David Bigagli authored
the daemon to core dump.
-
- 05 May, 2013 1 commit
-
Hongjia Cao authored
-
- 04 May, 2013 2 commits
-
Morris Jette authored
-
Morris Jette authored
Response to bug 274
-