- 10 May, 2013 24 commits
  - Morris Jette authored
    This happens when a job has multiple partitions and priority/multifactor is NOT in use.
  - Morris Jette authored
  - Hongjia Cao authored
    Fixes the following problem: if a node is excised from a job and a reconfiguration (e.g., a partition update) is done while the job is still running, the node is left in the idle state but is no longer available until the next reconfiguration or restart of slurmctld after the job finishes.
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
    to be edited more.
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
    to be running in the calling program.
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Rod Schultz authored
  - David Bigagli authored
- 09 May, 2013 1 commit
  - David Bigagli authored
- 08 May, 2013 4 commits
  - David Bigagli authored
  - David Bigagli authored
  - jette authored
  - Danny Auble authored
    the node tab and we didn't notice.
- 07 May, 2013 4 commits
  - David Bigagli authored
  - David Bigagli authored
    which reads the array boundary.
  - David Bigagli authored
  - David Bigagli authored
    the daemon to core dump.
- 05 May, 2013 1 commit
  - Hongjia Cao authored
- 04 May, 2013 2 commits
  - Morris Jette authored
  - Morris Jette authored
    Response to bug 274
- 03 May, 2013 4 commits
  - Morris Jette authored
    If a user requests multiple counts of a specific GRES type (e.g., "--gres=gpu:2") and those GRES are associated with specific CPUs, job submission could fail without this change.
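For context on the GRES commit above, a minimal sketch of a count-qualified GRES request in a batch script. Only the `--gres=gpu:2` syntax comes from the commit message; the job name, partition, and application binary are hypothetical illustrations:

```shell
#!/bin/bash
# Hypothetical batch script showing a count-qualified GRES request.
#SBATCH --job-name=gpu-test      # hypothetical job name
#SBATCH --partition=debug        # hypothetical partition
#SBATCH --gres=gpu:2             # request 2 GPUs per node; the commit above
                                 # fixes submissions like this when the GPUs
                                 # are bound to specific CPUs in gres.conf
srun ./my_gpu_program            # hypothetical application binary
```

Submitted with `sbatch`, this is the kind of request that could previously be rejected at submit time when the requested GRES were tied to particular CPUs.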
  - Morris Jette authored
  - David Bigagli authored
  - David Bigagli authored