- 15 Feb, 2014 3 commits
  - David Bigagli authored
  - Morris Jette authored
  - Morris Jette authored
- 14 Feb, 2014 3 commits
  - Daniele Didomizio authored
    Added the sbatch '--parsable' option, which outputs only the job ID number and the cluster name separated by a semicolon, rather than "Submitted batch job....". Errors are still displayed.
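The semicolon-separated output is meant to be easy to split in scripts. A minimal sketch, assuming the output format is "jobid;cluster" as described above; the captured value below is made up for illustration rather than taken from a real submission:

```shell
# Hypothetical output captured from `sbatch --parsable`;
# the job ID and cluster name here are invented for illustration.
out="12345;mycluster"

# Split on the semicolon: job ID before it, cluster name after it.
jobid="${out%%;*}"
cluster="${out##*;}"

echo "job id:  $jobid"
echo "cluster: $cluster"
```

In a real script the first line would be something like `out="$(sbatch --parsable job.sh)"`, with the rest unchanged.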
  - David Bigagli authored
  - Danny Auble authored
    needed to forward a message the slurmd would core dump.
- 13 Feb, 2014 2 commits
  - Morris Jette authored
  - David Bigagli authored
    describing that jobs must be drained from the cluster before deploying any checkpoint plugin.
- 12 Feb, 2014 2 commits
  - David Bigagli authored
  - Morris Jette authored
    Properly enforce a job's cpus-per-task option when the job's allocation is constrained on some nodes by the mem-per-cpu option (bug 590).
- 11 Feb, 2014 1 commit
  - Morris Jette authored
- 10 Feb, 2014 5 commits
  - David Bigagli authored
  - Morris Jette authored
    limit scheduling logic depth by partition.
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored
- 09 Feb, 2014 1 commit
  - Moe Jette authored
- 08 Feb, 2014 2 commits
  - Danny Auble authored
  - Danny Auble authored
- 07 Feb, 2014 2 commits
  - Morris Jette authored
    bug 586
  - Morris Jette authored
    Partial response to bug 521
- 06 Feb, 2014 3 commits
  - Morris Jette authored
    No change in logic, just renames a recently added environment variable.
  - Morris Jette authored
    Set the environment variable SLURM_PARTITION to the partition in which a job is running. Set for salloc, sbatch and srun.
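A job script can read this variable like any other environment variable. A minimal sketch; the fallback partition name `debug` is an assumption for illustration, not something this commit defines:

```shell
# SLURM_PARTITION is set by salloc/sbatch/srun for the running job.
# Outside of a job it is unset, so fall back to a made-up default here.
partition="${SLURM_PARTITION:-debug}"
echo "running in partition: $partition"
```

The `${VAR:-default}` expansion keeps the script usable when run outside of a SLURM allocation.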
  - Danny Auble authored
- 05 Feb, 2014 5 commits
  - David Bigagli authored
  - Martin Perry authored
  - Danny Auble authored
  - Dominik Bartkiewicz authored
    Set the GPU_DEVICE_ORDINAL environment variable.
  - Danny Auble authored
- 04 Feb, 2014 4 commits
  - Morris Jette authored
    Previous logic would try to pick a specific node count, which caused a problem on heterogeneous systems. This change largely reverts commit a270417b.
  - David Bigagli authored
    beside the numerical values.
  - Danny Auble authored
  - Morris Jette authored
    Added a whole_node field to the job_resources structure. Enable gang scheduling for jobs with core specialization and other jobs allocated whole nodes.
- 03 Feb, 2014 1 commit
  - Danny Auble authored
- 31 Jan, 2014 3 commits
  - David Bigagli authored
  - Danny Auble authored
    For example, salloc -n32 does not request a specific number of nodes. With the previous code, if this request used 4 nodes and only 1 was left in GrpNodes, it would run without issue because the limits were checked before the node count was selected. The limits on node count, CPUs and memory are now checked after node selection.
  - Morris Jette authored
    Fix step allocation when some CPUs are not available due to memory limits. This happens when one step is active and using memory that blocks the scheduling of another step on a portion of the CPUs it needs. The new step is now delayed rather than aborting with "Requested node configuration is not available" (bug 577).
- 29 Jan, 2014 1 commit
  - David Bigagli authored
    incorrectly when using the hostlist_push_host function and input surrounded by [].
- 28 Jan, 2014 1 commit
  - Danny Auble authored
    based on ionode count correctly on slurmctld restart.
- 25 Jan, 2014 1 commit
  - jette authored