- 16 Feb, 2011 1 commit

Danny Auble authored
- 15 Feb, 2011 6 commits

Don Lipari authored

Moe Jette authored

Don Lipari authored
Removed remnant code for enforcing max sockets/cores/threads in the cons_res plugin. This code was responsible for a bug reported by Rod Schultz.

Moe Jette authored
set, then salloc can execute in the background. Otherwise a message will be printed and the job allocation halted until brought into the foreground.

Moe Jette authored
terminal's state.

Danny Auble authored
- 14 Feb, 2011 1 commit

Danny Auble authored
- 11 Feb, 2011 1 commit

Danny Auble authored
- 10 Feb, 2011 2 commits

Danny Auble authored

Danny Auble authored
Fixed an issue when updating the database for clusters that had been deleted prior to the upgrade to the 2.2 database.
- 09 Feb, 2011 2 commits

Danny Auble authored

Don Lipari authored
- 08 Feb, 2011 3 commits

Danny Auble authored

Danny Auble authored

Moe Jette authored
IDLE (in case the previous node state was DOWN).
- 02 Feb, 2011 1 commit

Moe Jette authored
job is in a pending state, then send the request directly to the slurmctld daemon and do not attempt to send the request to slurmd daemons, which are not running the job anyway.
- 31 Jan, 2011 3 commits

Don Lipari authored

Moe Jette authored
consider the job's time limit when attempting to backfill schedule. The job will just be preempted as needed at any time.

Moe Jette authored
Priority.
- 28 Jan, 2011 3 commits

Moe Jette authored
expected start time was too far in the future for the backfill scheduler to compute.

Moe Jette authored
values as "*" rather than 65534. Patch from Rod Schultz, BULL.

Moe Jette authored
the job's priority using scontrol's "update jobid=..." rather than its "hold" or "holdu" commands.
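The two scontrol paths contrasted in the entry above can be sketched as follows; the job ID is hypothetical, and this relies on the documented behavior that holding a job corresponds to a priority of 0:

```shell
# Hold a pending job via the dedicated subcommand
# (job ID 1234 is hypothetical):
scontrol hold 1234

# Equivalent effect via an explicit priority update --
# the code path this fix addresses:
scontrol update JobId=1234 Priority=0

# Release the job again:
scontrol release 1234
```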
- 27 Jan, 2011 2 commits

Moe Jette authored

Danny Auble authored
Fixed an issue where, when using an accounting storage plugin directly without the slurmDBD, updates weren't always sent correctly to the slurmctld. The behavior appears to be OS dependent. Reported by Fredrik Tegenfeldt.
- 26 Jan, 2011 3 commits

Don Lipari authored

Moe Jette authored
it were removed from the job's allocation. Now only the tasks on those nodes are terminated.

Danny Auble authored
Fixed the checking of QOS overriding partition limits; previously, if no QOS was used, some limits were overlooked.
- 25 Jan, 2011 1 commit

Moe Jette authored
SelectTypeParameters=CR_ONE_TASK_PER_CORE.
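For reference, CR_ONE_TASK_PER_CORE is enabled through SelectTypeParameters in slurm.conf; a minimal sketch, assuming the cons_res select plugin (the accompanying CR_Core value is illustrative, not taken from the commit):

```
# slurm.conf (fragment) -- illustrative values
SelectType=select/cons_res
# Allocate cores, and by default run one task per core:
SelectTypeParameters=CR_Core,CR_ONE_TASK_PER_CORE
```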
- 24 Jan, 2011 1 commit

Danny Auble authored
- 21 Jan, 2011 2 commits

Danny Auble authored
Added a "NoReserve" flag to QOS to treat all jobs within the QOS equally: if larger, higher-priority jobs are unable to run, they no longer prevent smaller jobs from running, even when running the smaller jobs delays the start of the larger, higher-priority jobs.
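The flag described above would be attached to a QOS through sacctmgr; a minimal sketch, assuming a pre-existing QOS named "throughput" (the name is hypothetical):

```shell
# Set the NoReserve flag on an existing QOS
# (QOS name "throughput" is hypothetical):
sacctmgr modify qos where name=throughput set flags=NoReserve

# Verify the flag took effect:
sacctmgr show qos throughput format=Name,Flags
```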
Danny Auble authored
BLUEGENE - Fixed an issue where jobs wouldn't wait long enough for blocks to free and wanted to use blocks that were being freed for other jobs.
- 20 Jan, 2011 1 commit

Danny Auble authored
BLUEGENE - Fixed a race condition in dynamic mode by copying the booted and job block lists before trying to create new blocks.
- 18 Jan, 2011 1 commit

Danny Auble authored
- 15 Jan, 2011 1 commit

Danny Auble authored
- 14 Jan, 2011 4 commits

Danny Auble authored

Danny Auble authored
BLUEGENE - Fixed a race condition with preemption where, under the right timing, the slurmctld could lock up when preempting jobs to run others.

Danny Auble authored

Danny Auble authored
Fixed an issue where, when a QOS priority was changed, job priorities weren't re-normalized until a slurmctld restart.
- 13 Jan, 2011 1 commit

Danny Auble authored
Made it so that jobs running under a QOS with UsageFactor set to 0 don't add time to fairshare usage or to association/QOS limits.
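The UsageFactor behavior above is configured per QOS with sacctmgr; a minimal sketch, assuming an existing QOS named "free" whose jobs should accrue no usage (the name is hypothetical):

```shell
# Make jobs under this QOS accrue no fairshare usage
# (QOS name "free" is hypothetical):
sacctmgr modify qos where name=free set usagefactor=0

# Check the setting:
sacctmgr show qos free format=Name,UsageFactor
```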