- 27 Jul, 2012 1 commit
-
-
Morris Jette authored
I would like to make two changes to this: 1) Since the reservation name can easily exceed 9 characters, the field should be however large it needs to be without truncating the name. I did this by scanning the names and setting the field size to the widest one. 2) The other headers are in capitals, so I changed ResName, State, StartTime, EndTime, Duration, Nodelist to RESV_NAME, STATE, START_TIME, END_TIME, DURATION, NODELIST.
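A minimal standalone sketch of the two-pass field sizing described in point 1, using made-up reservation names (the real sinfo code is organized differently):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical reservation names; sinfo gets the real list from slurmctld. */
        const char *resv_name[] = { "weekly_maintenance", "gpu_benchmark", "root_7" };
        const int resv_cnt = sizeof(resv_name) / sizeof(resv_name[0]);

        /* Size the column to the longest name instead of a fixed 9 characters. */
        size_t width = strlen("RESV_NAME");
        for (int i = 0; i < resv_cnt; i++) {
            size_t len = strlen(resv_name[i]);
            if (len > width)
                width = len;
        }

        printf("%-*s %s\n", (int)width, "RESV_NAME", "STATE");
        for (int i = 0; i < resv_cnt; i++)
            printf("%-*s %s\n", (int)width, resv_name[i], "INACTIVE");
        return 0;
    }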
-
- 26 Jul, 2012 2 commits
-
-
Morris Jette authored
-
Morris Jette authored
Correct parsing of srun/sbatch input/output/error file names so that only the name "none" is mapped to /dev/null and not any file name starting with "none" (e.g. "none.o"). This fixes bug #98.
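The gist of the comparison change, as a standalone sketch; is_null_dest() is an illustrative helper, not the actual srun/sbatch file-name parsing code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Map only the exact name "none" to /dev/null, not every name that
     * merely starts with "none" (e.g. "none.o"). */
    static bool is_null_dest(const char *name)
    {
        return strcmp(name, "none") == 0;   /* exact match, not strncmp(name, "none", 4) */
    }

    int main(void)
    {
        const char *tests[] = { "none", "none.o", "job.out" };
        for (int i = 0; i < 3; i++)
            printf("%-8s -> %s\n", tests[i],
                   is_null_dest(tests[i]) ? "/dev/null" : tests[i]);
        return 0;
    }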
-
- 24 Jul, 2012 1 commit
-
-
Morris Jette authored
Gres: If a gres has a count of one and an associated file, then when doing a reconfiguration the node's bitmap was not cleared, resulting in an underflow upon job termination or removal from the scheduling matrix by the backfill scheduler.
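The underflow mentioned here is the usual unsigned wrap-around when an allocation count is decremented more times than it was incremented. A generic guard against it (not the actual fix, which concerns clearing the node's bitmap during reconfiguration) might look like:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative per-node gres bookkeeping, not the real gres plugin state. */
    struct node_gres {
        uint32_t cnt_alloc;   /* gres currently allocated on the node */
    };

    static void gres_dealloc(struct node_gres *gs, uint32_t cnt)
    {
        if (gs->cnt_alloc >= cnt)
            gs->cnt_alloc -= cnt;
        else
            gs->cnt_alloc = 0;   /* stale state; avoid wrapping below zero */
    }

    int main(void)
    {
        struct node_gres gs = { .cnt_alloc = 0 };      /* cleared by reconfiguration */
        gres_dealloc(&gs, 1);                          /* job termination after reconfig */
        printf("cnt_alloc = %u\n", (unsigned) gs.cnt_alloc);  /* 0, not 4294967295 */
        return 0;
    }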
-
- 23 Jul, 2012 1 commit
-
-
Morris Jette authored
Cray and BlueGene - Do not treat lack of usable front-end nodes when the slurmctld daemon starts as a fatal error. Also preserve the correct front-end node for jobs when there is more than one front-end node and the slurmctld daemon restarts.
-
- 19 Jul, 2012 2 commits
-
-
Danny Auble authored
while it is attempting to free underlying hardware is marked in error, making small blocks overlapping with the freeing block. This only applies to dynamic layout mode.
-
Alejandro Lucero Palau authored
-
- 16 Jul, 2012 1 commit
-
-
Morris Jette authored
-
- 13 Jul, 2012 2 commits
-
-
Danny Auble authored
is always set when sending or receiving a message.
-
Tim Wickberg authored
-
- 12 Jul, 2012 4 commits
-
-
Danny Auble authored
than 1 midplane but not the entire allocation.
-
Danny Auble authored
multi midplane block allocation.
-
Danny Auble authored
-
Danny Auble authored
where other blocks on an overlapping midplane are running jobs.
-
- 11 Jul, 2012 3 commits
-
-
Danny Auble authored
hardware is marked bad, remove the larger block and create a block over just the bad hardware, making the other hardware available to run on.
-
Danny Auble authored
allocation.
-
Danny Auble authored
for a job to finish on it, the number of unused CPUs wasn't updated correctly.
-
- 10 Jul, 2012 1 commit
-
-
Morris Jette authored
When using the jobcomp/script interface, we have noticed the NODECNT environment variable is off-by-one when logging completed jobs in the NODE_FAIL state (though the NODELIST is correct). This appears to be because in many places job_completion_logger() is called after deallocate_nodes(), which appears to decrement job->node_cnt for DOWN nodes.

If job_completion_logger() only called the job completion plugin, then I would guess that it might be safe to move this call ahead of deallocate_nodes(). However, it seems like job_completion_logger() also does a bunch of accounting stuff (?), so perhaps that would need to be split out first?

Also, there is the possibility that this is working as designed, though if so a well placed comment in the code might be appreciated. If the decreased node count is intended, though, should the DOWN nodes also be removed from the job's NODELIST? - Mark Grondona
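A self-contained sketch of the reordering hinted at above, with stand-in types (the real deallocate_nodes() and jobcomp plugin interfaces are more involved): capture the allocated node count before deallocation shrinks it, then pass the saved value to the completion logger.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins, not slurmctld's real structures. */
    struct job_rec { uint32_t node_cnt; const char *nodelist; };

    /* Pretend deallocation drops a DOWN node from the count. */
    static void deallocate_nodes(struct job_rec *job) { job->node_cnt -= 1; }

    static void jobcomp_log(const struct job_rec *job, uint32_t alloc_node_cnt)
    {
        printf("NODECNT=%u NODELIST=%s\n", (unsigned) alloc_node_cnt, job->nodelist);
    }

    int main(void)
    {
        struct job_rec job = { .node_cnt = 4, .nodelist = "tux[0-3]" };
        uint32_t saved_cnt = job.node_cnt;  /* capture before deallocate_nodes() */
        deallocate_nodes(&job);
        jobcomp_log(&job, saved_cnt);       /* logs 4, not the shrunken 3 */
        return 0;
    }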
-
- 09 Jul, 2012 1 commit
-
-
Martin Perry authored
See Bugzilla #73 for a more complete description of the problem. Patch by Martin Perry, Bull.
-
- 06 Jul, 2012 1 commit
-
-
Carles Fenoy authored
If a job is submitted to more than one partition, its partition pointer can be set to an invalid value. This can result in the count of CPUs allocated on a node being bad, resulting in over- or under-allocation of its CPUs. Patch by Carles Fenoy, BSC.

Hi all,

After a tough day I've finally found the problem and a solution for 2.4.1. I was able to reproduce the described behavior by submitting jobs to 2 partitions. The job gets allocated in one partition, but in the schedule function the partition of the job is changed to the non-allocated one, so the resources cannot be freed at the end of the job.

I've solved this by moving the IS_PENDING test some lines up in the schedule function (job_scheduler.c). This is the code from the git HEAD (line 801). As this file has changed a lot since 2.4.x I have not made a patch, but I'm describing the solution here. I've moved the if (!IS_JOB_PENDING) check to just after the second line (part_ptr = ...). This prevents the job's partition from being changed if it has already started in another partition.

    job_ptr = job_queue_rec->job_ptr;
    part_ptr = job_queue_rec->part_ptr;
    job_ptr->part_ptr = part_ptr;
    xfree(job_queue_rec);
    if (!IS_JOB_PENDING(job_ptr))
        continue;   /* started in other partition */

Hope this is enough information to solve it. I've just realized (while writing this mail) that my solution has a memory leak, as job_queue_rec is not freed.

Regards,
Carles Fenoy
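Based on the snippet above, one way to apply the reordering without the leak Carles mentions would be the following loop-body fragment (illustrative only, not the committed patch):

    job_ptr  = job_queue_rec->job_ptr;
    part_ptr = job_queue_rec->part_ptr;
    xfree(job_queue_rec);               /* free before any continue */
    if (!IS_JOB_PENDING(job_ptr))
        continue;                       /* already started in another partition */
    job_ptr->part_ptr = part_ptr;       /* only now switch the job's partition */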
-
- 03 Jul, 2012 3 commits
-
-
Danny Auble authored
there are jobs running on that hardware.
-
Morris Jette authored
-
Alejandro Lucero Palau authored
Add support for advanced reservation for specific cores rather than whole nodes. Current limitations: homogeneous cluster, nodes must be idle when the reservation is created, and no more than one reservation per node. Code is still under development. Work by Alejandro Lucero Palau, et al., BSC.
-
- 02 Jul, 2012 1 commit
-
-
Carles Fenoy authored
correctly when transitioning. This also applies for 2.4.0 -> 2.4.1; no state will be lost. (Thanks to Carles Fenoy)
-
- 29 Jun, 2012 2 commits
-
-
Bill Brophy authored
Add reservation flag of Part_Nodes to allocate all nodes in a partition to a reservation and automatically change the reservation when nodes are added to or removed from the partition. Based upon work by Bill Brophy, Bull.
-
Morris Jette authored
When running with multiple slurmd daemons per node, enable specifying a range of ports on a single line of the node configuration in slurm.conf. For example: NodeName=tux[0-999] NodeAddr=localhost Port=9000-9999 ...
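A rough, standalone illustration of how such a port range could be expanded so that each slurmd daemon on the node gets its own port (the real slurm.conf parsing in SLURM is more general):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Illustrative expansion of a "Port=9000-9999" style range. */
        const char *spec = "9000-9999";
        const char *dash = strchr(spec, '-');
        long first = strtol(spec, NULL, 10);
        long last  = dash ? strtol(dash + 1, NULL, 10) : first;

        /* With multiple slurmd daemons per node, daemon i would listen on first + i. */
        for (long i = 0; first + i <= last && i < 3; i++)
            printf("slurmd #%ld -> port %ld\n", i, first + i);
        return 0;
    }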
-
- 28 Jun, 2012 2 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
- 26 Jun, 2012 4 commits
-
-
Danny Auble authored
-
Danny Auble authored
bg.properties in order for the runjob_mux to run correctly. Signed-off-by: Danny Auble <da@schedmd.com>
-
Danny Auble authored
but the job is going to be canceled because it is interactive or for some other reason, it now receives the grace time.
-
Morris Jette authored
-
- 25 Jun, 2012 3 commits
-
-
Danny Auble authored
check if a block is still makable if the cable wasn't in error.
-
Danny Auble authored
removal of the job on the block failed.
-
Danny Auble authored
-
- 22 Jun, 2012 3 commits
-
-
Danny Auble authored
29d79ef8
-
Danny Auble authored
same time a block is destroyed and that block just happens to be the smallest overlapping block over the bad hardware.
-
Danny Auble authored
-
- 20 Jun, 2012 2 commits
-
-
Danny Auble authored
but not a node count, the node count is correctly figured out.
-
Morris Jette authored
Without this fix, gang scheduling mode could start without creating a list, resulting in an assert when jobs are submitted.
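A tiny, made-up illustration of the failure mode and the guard that avoids it; none of these names are the actual gang scheduler's, the point is only that the list has to exist before job submissions touch it:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the gang scheduler's tracked-job list; illustrative only. */
    struct job_node { int job_id; struct job_node *next; };
    static struct job_node *gs_job_list;
    static int gs_list_created;

    static void gs_init(void)
    {
        gs_job_list = NULL;       /* empty, but created */
        gs_list_created = 1;
    }

    static void gs_job_start(int job_id)
    {
        assert(gs_list_created);  /* the assert that fired before the fix */
        struct job_node *n = malloc(sizeof(*n));
        n->job_id = job_id;
        n->next = gs_job_list;
        gs_job_list = n;
    }

    int main(void)
    {
        gs_init();                /* ensure the list exists at startup */
        gs_job_start(1234);
        printf("tracking job %d\n", gs_job_list->job_id);
        free(gs_job_list);
        return 0;
    }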
-