- 24 Sep, 2015 11 commits
-
Nathan Yee authored
Validate that sbatch, srun, and salloc return a partition error message on an invalid partition name. bug 1223
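A quick way to exercise this check, assuming a partition name such as "nosuchpart" that is not defined in slurm.conf (the exact error text may differ between Slurm versions):

    # "nosuchpart" is a hypothetical name absent from slurm.conf
    srun --partition=nosuchpart hostname
    sbatch --partition=nosuchpart --wrap=hostname
    salloc --partition=nosuchpart
    # each command should be rejected with an "invalid partition" error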
-
Danny Auble authored
-
Morris Jette authored
-
Danny Auble authored
-
Danny Auble authored
option.
-
Danny Auble authored
we are root or slurmuser. This was hiding a bug that will be fixed in the next commit.
-
Gennaro Oliva authored
-
Morris Jette authored
Previous logic would stop at "/".
-
Nathan Yee authored
bug 1228
-
Morris Jette authored
Modify the scontrol requeue and requeue_hold commands to accept a comma-delimited list of job IDs. bug 1929
-
Morris Jette authored
Previously, scontrol would generate an error if passed a comma-delimited list of job IDs; only a space-delimited list was accepted. This increases compatibility with some other Slurm commands. bug 1929
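With this change, the comma-delimited and space-delimited forms below should behave the same (job IDs are illustrative):

    # comma-delimited list (newly accepted) and space-delimited list (already supported)
    scontrol requeue 1001,1002,1003
    scontrol requeue 1001 1002 1003
    scontrol requeue_hold 1001,1002,1003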
-
- 23 Sep, 2015 12 commits
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
Conflicts: src/sacct/print.c
-
Danny Auble authored
The count of 2 came from the nodelist being "None assigned", which would be treated as 2 hosts when passed into hostlist.
-
Danny Auble authored
the default qos for the association.
-
Danny Auble authored
jobs. Bug 1969
-
Morris Jette authored
Pending job array records will be combined into a single line by default, even if some tasks have been started and then requeued or modified. bug 1759
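For illustration, squeue should now show such a pending array as one combined record rather than several split ones; the job ID and task range below are hypothetical:

    squeue -j 1234
    # JOBID       PARTITION  NAME  USER  ST  TIME  NODES  NODELIST(REASON)
    # 1234_[0-7]  debug      test  bob   PD  0:00  1      (Priority)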
-
Danny Auble authored
diversion.
-
Nathan Yee authored
bug 1874
-
Morris Jette authored
-
- 22 Sep, 2015 13 commits
-
Brian Gilmer authored
If a user belongs to a group which has split entries in /etc/group, search for the username in all matching groups. Amendment to commit 93ead71a. bug 1738
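For context, a "split" group entry means the same group name appears on more than one line of /etc/group, each carrying part of the member list; the group name, GID, and users below are hypothetical:

    # /etc/group excerpt: the "research" group is split across two lines
    research:x:2000:alice,bob
    research:x:2000:carol,dave
    # the lookup now scans every matching line instead of stopping at the first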
-
Morris Jette authored
-
Morris Jette authored
The file is not installed, but this should eliminate any possible confusion in its use.
-
Danny Auble authored
-
Morris Jette authored
If GRES are associated with specific CPUs and a job allocation includes GRES which are not associated with the specific CPUs allocated to the job, then an underflow error results when the job is deallocated. To reproduce, with this gres.conf:
Name=gpu File=/dev/tty0 CPUs=0-5
Name=gpu File=/dev/tty1 CPUs=6-11
Name=gpu File=/dev/tty2 CPUs=12-17
Name=gpu File=/dev/tty3 CPUs=18-23
run:
$ srun --gres=gpu:2 -N1 --ntasks-per-node=2 hostname
The slurmctld log file then shows:
error: gres/gpu: job 695 dealloc node smd1 topo gres count underflow
Logic modified to increment the count based upon the specific GRES actually allocated, ignoring the associated CPUs (too late to consider that after the GRES was picked).
-
Danny Auble authored
Conflicts: NEWS, src/slurmctld/acct_policy.c
-
Danny Auble authored
-
Danny Auble authored
Also add a very minor sanity check in job_mgr.c to make sure we at least have a task count. This shouldn't matter, but it keeps the code as robust as possible.
-
Nathan Yee authored
only 1 job was accounted for (against MaxSubmitJobs) when an array was submitted.
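As a sketch of the corrected accounting, assuming an association limit of MaxSubmitJobs=10 (the user name and array sizes are hypothetical), every array task now counts toward the limit:

    sacctmgr modify user bob set MaxSubmitJobs=10
    sbatch --array=0-9 --wrap=hostname   # counts as 10 jobs against the limit, not 1
    sbatch --array=0-4 --wrap=hostname   # while those 10 are still pending or running, this should be rejected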
-
David Bigagli authored
-
Tommi Tervo authored
-
Morris Jette authored
-
Danny Auble authored
Correct counting for job array limits; a job count limit underflow was possible when cancelling the master job record. bug 1952
-
- 21 Sep, 2015 4 commits
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Morris Jette authored
-
Morris Jette authored
-