- 12 Feb, 2014 1 commit
Morris Jette authored
Properly enforce a job's cpus-per-task option when a job's allocation is constrained on some nodes by the mem-per-cpu option. bug 590
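As an illustration (the numbers here are hypothetical, not from the commit): with --mem-per-cpu=4000 and --cpus-per-task=3, a node with only 8000 MB of allocatable memory can contribute at most 2 CPUs to the job, which is less than one whole task, so the cpus-per-task requirement must be taken into account when deciding whether that node can be used at all.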
-
- 10 Feb, 2014 1 commit
Morris Jette authored
-
- 09 Feb, 2014 1 commit
Moe Jette authored
-
- 08 Feb, 2014 2 commits
Danny Auble authored
-
Danny Auble authored
-
- 07 Feb, 2014 1 commit
Morris Jette authored
bug 586
-
- 05 Feb, 2014 3 commits
Danny Auble authored
-
Dominik Bartkiewicz authored
Set GPU_DEVICE_ORDINAL environment variable.
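A minimal sketch of how a job step might consume this variable; only the variable name comes from the commit, while the comma-separated index format (analogous to CUDA_VISIBLE_DEVICES) is an assumption:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Assumed format: comma-separated GPU indices assigned to the
         * step, e.g. "0,1"; the variable name is from the commit above. */
        const char *ordinals = getenv("GPU_DEVICE_ORDINAL");

        if (ordinals)
            printf("GPUs assigned to this step: %s\n", ordinals);
        else
            printf("GPU_DEVICE_ORDINAL is not set\n");
        return 0;
    }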
-
Danny Auble authored
-
- 04 Feb, 2014 2 commits
Morris Jette authored
The previous logic tried to pick a specific node count, which caused problems on a heterogeneous system. This change largely reverts commit a270417b
-
Danny Auble authored
-
- 03 Feb, 2014 1 commit
Danny Auble authored
-
- 31 Jan, 2014 3 commits
David Bigagli authored
-
Danny Auble authored
i.e. salloc -n32 does not specify a node count, and with the previous code, if the request ended up on 4 nodes while only 1 node remained in GrpNodes, it would run without complaint because the limits were checked before the node count was selected. Now the check happens afterwards, so the limits on node count, CPUs, and memory are always enforced against what will actually be used.
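A rough sketch of the new ordering, using hypothetical type and function names rather than the actual Slurm symbols: the group limits are tested against the allocation that was actually selected, once the node count is known.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical, simplified association limits and selection result. */
    typedef struct {
        uint32_t grp_nodes;   /* nodes still allowed by GrpNodes */
        uint32_t grp_cpus;    /* CPUs still allowed by GrpCPUs   */
        uint64_t grp_mem_mb;  /* memory (MB) still allowed       */
    } limits_t;

    typedef struct {
        uint32_t node_cnt;    /* nodes actually chosen for the request */
        uint32_t cpu_cnt;
        uint64_t mem_mb;
    } selection_t;

    /* Run the limit test after node selection, so a request that only
     * gives a task count (e.g. "salloc -n32") is checked against the
     * node, CPU, and memory totals it will really consume. */
    static bool within_limits(const selection_t *sel, const limits_t *lim)
    {
        return (sel->node_cnt <= lim->grp_nodes) &&
               (sel->cpu_cnt  <= lim->grp_cpus)  &&
               (sel->mem_mb   <= lim->grp_mem_mb);
    }

    int main(void)
    {
        limits_t lim = { .grp_nodes = 1, .grp_cpus = 64, .grp_mem_mb = 65536 };
        selection_t sel = { .node_cnt = 4, .cpu_cnt = 32, .mem_mb = 8192 };

        /* The 32 tasks landed on 4 nodes, but only 1 node remains in
         * GrpNodes, so the request must now be rejected or held. */
        printf("within limits: %s\n",
               within_limits(&sel, &lim) ? "yes" : "no");
        return 0;
    }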
-
Morris Jette authored
Fix step allocation when some CPUs are not available due to memory limits. This happens when one step is active and using memory that blocks the scheduling of another step on a portion of the CPUs needed. The new step is now delayed rather than aborting with "Requested node configuration is not available". bug 577
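For example (illustrative numbers): on a node with 16 CPUs and 32 GB of allocatable memory, a running step that uses 8 CPUs but all 32 GB leaves 8 CPUs idle yet unusable by a new step that needs memory on that node; with this change the new step waits for the memory to be freed instead of failing.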
-
- 28 Jan, 2014 1 commit
Danny Auble authored
based on ionode count correctly on slurmctld restart.
-
- 23 Jan, 2014 2 commits
Danny Auble authored
connect in a loop instead of producing a fatal.
-
Danny Auble authored
-
- 21 Jan, 2014 2 commits
David Bigagli authored
-
David Bigagli authored
This reverts commit 2fa28eb6. Conflicts: NEWS
-
- 18 Jan, 2014 1 commit
David Bigagli authored
data correctly accumulating differences between sampling intervals. Fix the data structure mismatch between acct_gather_filesystem_lustre.c and slurm_jobacct_gather.h, which caused the hdf5 plugin to log incorrect data.
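A generic illustration of this class of bug, using hypothetical structures rather than the ones in acct_gather_filesystem_lustre.c or slurm_jobacct_gather.h: when the writer and the reader disagree on the field layout, the reader decodes garbage even though the writer's values were correct.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical layouts: the producer and consumer disagree on field
     * order, so every field after the divergence is misinterpreted. */
    struct fs_sample_writer {        /* what the plugin fills in        */
        uint64_t read_bytes;
        uint64_t write_bytes;
        uint32_t reads;
        uint32_t writes;
    };

    struct fs_sample_reader {        /* what the gather layer expects   */
        uint32_t reads;              /* different order ...             */
        uint32_t writes;
        uint64_t read_bytes;         /* ... so these land on the wrong  */
        uint64_t write_bytes;        /* bytes when the record is read   */
    };

    int main(void)
    {
        struct fs_sample_writer w = { 4096, 8192, 3, 5 };
        struct fs_sample_reader r;

        /* Same bytes, different interpretation: the kind of mismatch the
         * commit fixes by keeping both sides on a single definition. */
        memcpy(&r, &w, sizeof(r));
        printf("reads decoded as %u (writer stored 3)\n", r.reads);
        return 0;
    }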
-
- 16 Jan, 2014 2 commits
David Bigagli authored
the srun help.
-
David Bigagli authored
network traffic accounting plugin.
-
- 15 Jan, 2014 1 commit
Danny Auble authored
add/remove columns. Caused by commit 68f0f5db.
-
- 13 Jan, 2014 2 commits
Morris Jette authored
Do not reset a job's priority when the slurmctld restarts if previously set to some specific value. bug 561
-
John Morrissey authored
groups.
-
- 08 Jan, 2014 3 commits
David Bigagli authored
-
David Bigagli authored
This reverts commit 3464295e.
-
David Bigagli authored
-
- 07 Jan, 2014 2 commits
Danny Auble authored
-
Morris Jette authored
Do not mark the node DOWN if its memory or tmp disk space is lower than configured; just log it using the debug message type.
-
- 06 Jan, 2014 2 commits
Morris Jette authored
If a job is explicitly suspended, its priority is set to zero. This patch resets the priority when the job is requeued, and also documents that if the job is requeued (e.g. due to a node failure), it is placed in a held state.
-
Morris Jette authored
Without this patch, the job's RunTime includes its RunTime from before its prior suspend (i.e. the job's full RunTime rather than just the RunTime of the requeued job).
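Illustrative example: a job accumulates 2 hours of RunTime, is suspended, and is later requeued; after the requeued job runs for another 30 minutes, its RunTime should read 0:30, not 2:30.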
-
- 27 Dec, 2013 1 commit
Filip Skalski authored
Hello, I think I found another bug in the code (I'm using 2.6.3, but I checked the 2.6.5 and 14.03 versions and it is the same there). In file sched/backfill/backfill.c:

1) In the _add_reservation function, from line 1172:

       if (placed == true) {
           j = node_space[j].next;
           if (j && (end_reserve < node_space[j].end_time)) {
               /* insert end entry record */
               i = *node_space_recs;
               node_space[i].begin_time = end_reserve;
               node_space[i].end_time = node_space[j].end_time;
               node_space[j].end_time = end_reserve;
               node_space[i].avail_bitmap =
                   bit_copy(node_space[j].avail_bitmap);
               node_space[i].next = node_space[j].next;
               node_space[j].next = i;
               (*node_space_recs)++;
           }
           break;
       }

   I drew a picture of the node_space state after 2 iterations (see attachment). When the new reservation is fully inside another reservation, everything is OK. But if the new reservation spans multiple existing reservations, the end entry record is not created, because only the newly created start entry record is checked. An easy fix would be to change the if into a loop, for example:

       if (placed == true) {
           while ((j = node_space[j].next) > 0) {
               if (end_reserve < node_space[j].end_time) {
                   /* same as above */
                   break;
               }
           }
           break;
       }

2) You could also change line 612:

       node_space = xmalloc(sizeof(node_space_map_t) *
                            (max_backfill_job_cnt + 3));

   to (max_backfill_job_cnt * 2 + 1), since each reservation can add at most two entries (the check at line 982 should never execute). At the moment, in a worst-case scenario, this only checks half of max_backfill_job_cnt.

NOTE: However, this is all based on the assumption that it is not done on purpose to speed up the calculations by trading away some accuracy (especially point 2).

Best regards,
Filip Skalski
-
- 23 Dec, 2013 2 commits
Morris Jette authored
-
David Bigagli authored
-
- 20 Dec, 2013 2 commits
Danny Auble authored
for better debug
-
Danny Auble authored
midplane block that starts on a higher coordinate than it ends (i.e. if a block has midplanes [0010,0013], 0013 is the start even though it is listed second in the hostlist).
-
- 19 Dec, 2013 1 commit
Morris Jette authored
It has been changed to improve the calculated value for pending jobs and to use the actual node count for jobs that have been started (including suspended, completed, etc.). bug 549
-
- 18 Dec, 2013 1 commit
Danny Auble authored
being in error.
-