- 20 Jul, 2011 2 commits
-
-
Morris Jette authored
Fix bug in select/cons_res task distribution logic when tasks-per-node=0. Eliminates the misleading slurmctld message "error: cons_res: _compute_c_b_task_dist oversubscribe." The problem was introduced in SLURM version 2.2.5 by a fix for a task distribution problem when cpus_per_task=0. Patch from Rod Schultz, Bull.
-
Morris Jette authored
This fixes a possible race condition when running test15.5, depending on which message arrives first at shutdown.
-
- 19 Jul, 2011 4 commits
-
-
Morris Jette authored
In the srun/aprun wrapper man page, clarify how conflicting command line options for --alps and native srun options are handled.
-
Morris Jette authored
Improve documentation with respect to preemption rules, namely that PreemptMode=suspend is incompatible with PreemptType=preempt/qos. Patch from Bill Brophy, Bull.
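For illustration only (the exact parameter values are assumed, not taken from the patch), the unsupported slurm.conf combination described above would look like:
    PreemptType=preempt/qos
    PreemptMode=suspend
Suspend-based preemption needs a different PreemptType, for example preempt/partition_prio.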
-
Danny Auble authored
-
Danny Auble authored
using gang scheduling to finish. Previously, pending jobs would fail while waiting for all other time-slicing jobs to finish.
-
- 18 Jul, 2011 14 commits
-
-
Danny Auble authored
common .la for the block allocator
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
in the block allocator
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
common location
-
Danny Auble authored
-
Danny Auble authored
block allocator in the bluegene plugin
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
srun wrapper fix: srun did not receive all of its arguments when the job allocation did not exist at execution time.
-
- 15 Jul, 2011 4 commits
-
-
Morris Jette authored
If the srun wrapper is executed when there is no job allocation, then create an allocation using salloc and call the srun wrapper again so that we can configure memory limits in aprun's execute line. Without this change, aprun would lack the memory allocation information and the task launch would fail if the job were allocated less than the full node.
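A minimal sketch of the behavior described above (the command line is illustrative, not the wrapper's actual code):
    # Outside of an allocation, the wrapper effectively turns
    #   srun -N1 --mem=2048 a.out
    # into
    #   salloc -N1 --mem=2048 srun -N1 --mem=2048 a.out
    # so the inner srun runs inside an allocation and can place the
    # memory limit on aprun's command line.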
-
Morris Jette authored
Prevent duplicate arguments to aprun from the srun.pl wrapper. This could happen if the command line included "--alps" arguments plus other arguments generated by the normal srun options. For example: srun -t 5 --alps="-t300" a.out specifies the job time limit in two places.
-
Danny Auble authored
-
Danny Auble authored
-
- 14 Jul, 2011 4 commits
-
-
Morris Jette authored
Set SLURM_MEM_PER_CPU or SLURM_MEM_PER_NODE environment variables for both interactive (salloc) and batch jobs if the job has a memory limit. For Cray systems also set CRAY_AUTO_APRUN_OPTIONS environment variable with the memory limit.
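A minimal batch-script sketch showing how a job could read these variables (only the variable names come from this change; the script itself is illustrative):
    #!/bin/sh
    # Only one of the two memory variables is set, depending on whether
    # the limit was requested per CPU or per node.
    echo "SLURM_MEM_PER_CPU:       ${SLURM_MEM_PER_CPU:-unset}"
    echo "SLURM_MEM_PER_NODE:      ${SLURM_MEM_PER_NODE:-unset}"
    # On Cray systems the matching aprun options are exported as well.
    echo "CRAY_AUTO_APRUN_OPTIONS: ${CRAY_AUTO_APRUN_OPTIONS:-unset}"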
-
Morris Jette authored
Clarify, for the srun (aprun wrapper), which options apply to an existing or new job allocation and which are not applicable to Cray computers.
-
Danny Auble authored
asking for less than 1 MB per PE.
-
Morris Jette authored
Correction to srun man page. Get SIGINT working when srun spawns salloc.
-
- 13 Jul, 2011 3 commits
-
-
Morris Jette authored
For front-end configurations (Cray and IBM BlueGene), bind each batch job to a unique CPU to limit the damage which a single job can cause. Previously any single job could use all CPUs causing problems for other jobs or system daemons. This addresses a problem reported by Steve Trofinoff, CSCS.
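To observe the binding from within a batch job, something like the following could be used (taskset is a standard Linux utility, not part of this change):
    # Print the CPU affinity of the batch script's shell; with this change
    # it should show a single CPU on front-end systems.
    taskset -pc $$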
-
Morris Jette authored
A table had been added, but its contents ran together without being formatted as a table or list. I changed it to an unordered list.
-
Morris Jette authored
-
- 12 Jul, 2011 9 commits
-
-
Danny Auble authored
enforce memory limits.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
man pages. Patch by Nancy Kritkausky, Bull.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
Bill Brophy, Bull.
-
Morris Jette authored
Note that the job and partition state file formats have changed, as have the RPCs carrying job and partition information.
-