- 24 Mar, 2016 1 commit
-
-
Danny Auble authored
isn't kept up to date in the cache.
-
- 23 Mar, 2016 4 commits
-
-
Morris Jette authored
Fix gang scheduling resource selection bug which could prevent multiple jobs from being allocated the same resources. The bug was introduced in 15.08.6, commit 44f491b8.
-
Morris Jette authored
Here's how to reproduce on smd-server, with 2 sockets, 6 cores per socket, and 2 threads per core: run the following command line 3 times in quick succession (all active at the same time):
srun --cpus-per-task=4 -m block sleep 30
What was happening: the first job would be allocated cores 0+1, and the second job cores 2+3. The third job would test use of cores 0-3, then exit the loop because the job only needs 4 CPUs; the resulting core binding would include NO CPUs. The new logic tests that the core being considered for use actually has some resources available to the job before updating the counter which is tested against the needed CPU count.
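A minimal sketch of the corrected selection loop, using hypothetical names (pick_cores and free_cpus_per_core are illustrative, not Slurm's actual identifiers):

    #include <stdbool.h>

    /* Count a core toward the job's CPU total only if it still has CPUs
     * available to this job; the old logic effectively advanced the
     * counter for cores with nothing left to give, so a job could
     * "reach" its CPU target yet be bound to no CPUs at all. */
    static int pick_cores(const int *free_cpus_per_core, bool *selected,
                          int core_cnt, int cpus_needed)
    {
        int core, cpus_found = 0;

        for (core = 0; core < core_cnt && cpus_found < cpus_needed; core++) {
            if (free_cpus_per_core[core] == 0)
                continue;    /* fully allocated core: skip, don't count */
            selected[core] = true;
            cpus_found += free_cpus_per_core[core];
        }
        return (cpus_found >= cpus_needed) ? 0 : -1;
    }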
-
Morris Jette authored
Specifically, add the HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM flag when loading configuration from the HWLOC library. Previous logic in task/cgroup did not do this, which was different behaviour from how slurmd gets configuration information. Here's the HWLOC documentation:
HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM: Detect the whole system, ignore reservations and offline settings. Gather all resources, even if some were disabled by the administrator. For instance, ignore Linux Cpusets and gather all processors and memory nodes, and ignore the fact that some resources may be offline.
Without this flag, I was occasionally observing a bad core count, which resulted in the logic laying out tasks wrong and generating an error:
task/cgroup: task[0] infinite loop broken while trying to provision compute elements using cyclic
bug 2502
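For reference, loading the topology with this flag looks like the following standalone sketch against the hwloc 1.x API (not the actual task/cgroup code):

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;

        hwloc_topology_init(&topo);
        /* Without WHOLE_SYSTEM, PUs hidden by a Linux cpuset or marked
         * offline are invisible, so the core count can disagree with
         * the one slurmd computed for the node. */
        hwloc_topology_set_flags(topo, HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
        hwloc_topology_load(topo);
        printf("cores: %d\n",
               hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE));
        hwloc_topology_destroy(topo);
        return 0;
    }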
-
Danny Auble authored
-
- 21 Mar, 2016 2 commits
-
-
Morris Jette authored
burst_buffer/cray: Set environment variables just before starting job rather than at job submission time to reflect persistent buffers created or modified while the job is pending. bug 2545
-
Danny Auble authored
buffer is found. Bug 2576. What happened was a function taking a double read lock, which isn't awesome to begin with but not really horrible (if all you are doing is taking read locks anyway). The problem was that after the first read lock was taken, a different thread went for a write lock, so when the second read lock came in it created a deadlock.
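The pattern, reduced to a sketch (illustrative, not the actual Slurm code):

    #include <pthread.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Thread A takes the same read lock twice. If thread B requests a
     * write lock between the two acquisitions, a writer-preferring
     * rwlock queues A's second read behind B's write, while B waits
     * for A's first read to be released: neither can ever proceed. */
    void thread_a(void)
    {
        pthread_rwlock_rdlock(&lock);    /* 1st read lock: granted */
        /* thread B: pthread_rwlock_wrlock(&lock) blocks here */
        pthread_rwlock_rdlock(&lock);    /* 2nd read lock: deadlock */
        pthread_rwlock_unlock(&lock);
        pthread_rwlock_unlock(&lock);
    }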
-
- 18 Mar, 2016 1 commit
-
-
Morris Jette authored
Avoid possibly aborting an srun that gets simultaneous SIGSTOP+SIGCONT while creating the job step. The result is that the signal handler gets an argument (the signal received) of zero. Here's a log.
Window 1:
$ srun hostname
srun: Job step creation temporarily disabled, retrying
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 18
srun: I Got signal 0
srun: Cancelled pending job step
Window 2:
$ kill -STOP 18696 ; kill -CONT 18696
$ kill -STOP 18696 ; kill -CONT 18696
$ kill -STOP 18696 ; kill -CONT 18696
....
bug 2494
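One defensive shape for such a fix, as a sketch (the handler name is hypothetical, and this is not necessarily how the commit resolves it): treat the impossible signal number as spurious rather than as a reason to cancel the step.

    #include <signal.h>
    #include <stdio.h>

    static void _handle_signal(int signo)
    {
        if (signo <= 0) {
            /* Bogus value seen under rapid SIGSTOP+SIGCONT; real
             * signal numbers are positive, so ignore it instead of
             * cancelling the pending job step. */
            return;
        }
        fprintf(stderr, "srun: I Got signal %d\n", signo);
        /* ... normal handling ... */
    }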
-
- 17 Mar, 2016 1 commit
-
-
Tim Wickberg authored
The uid is used as part of the hash function, so if it may change we must remove the old reference and recalculate; otherwise _delete_assoc_hash will not find it again when the association is removed, causing slurmctld to segfault. Bug 2560.
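The underlying rekeying pattern, sketched with hypothetical helper names (not slurmctld's actual ones): unhook the entry while the old uid still locates its bucket, then re-insert under the new hash.

    #include <sys/types.h>

    /* Minimal illustrative declarations. */
    typedef struct assoc { uid_t uid; } assoc_t;
    struct hash_table;
    extern struct hash_table *assoc_table;
    void hash_remove(struct hash_table *t, assoc_t *a);  /* hashes a->uid */
    void hash_insert(struct hash_table *t, assoc_t *a);  /* hashes a->uid */

    void assoc_set_uid(assoc_t *assoc, uid_t new_uid)
    {
        hash_remove(assoc_table, assoc);  /* still in the old bucket */
        assoc->uid = new_uid;
        hash_insert(assoc_table, assoc);  /* re-file under the new hash */
    }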
-
- 16 Mar, 2016 5 commits
-
-
Morris Jette authored
Previous gang scheduling logic maintained information about resources originally allocated to the job and made scheduling decisions on that basis. bug 2494
-
Morris Jette authored
Update gang scheduling table when job manually suspended or resumed. Prior logic could mess up job suspend/resume sequencing. bug 2494
-
Danny Auble authored
time. https://bugs.schedmd.com/show_bug.cgi?id=2547 The code just wasn't fully baked before, and was probably written before a lot of the other supporting code was done, i.e. assoc_mgr_set_assoc|qos_tres_cnt were written specifically for this kind of thing. Many of the usage structures weren't realloced, and the tres_cnt local to each qos and assoc wasn't updated. So, all in all, pretty bad code - bad Danny. This makes sure all of this is set up correctly and no memory corruption happens.
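The kind of repair being described, as a sketch with hypothetical names: when the TRES count grows, every usage array sized by it must be reallocated and zero-filled, and the cached count updated, or later indexing walks off the old allocation.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    void usage_grow(uint64_t **usage, int *tres_cnt, int new_cnt)
    {
        if (new_cnt <= *tres_cnt)
            return;                       /* nothing to do */
        *usage = realloc(*usage, new_cnt * sizeof(uint64_t));
        /* zero only the newly added slots (error handling elided) */
        memset(*usage + *tres_cnt, 0,
               (new_cnt - *tres_cnt) * sizeof(uint64_t));
        *tres_cnt = new_cnt;              /* keep cached count in sync */
    }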
-
Morris Jette authored
Generate burst buffer use completion email immediately after teardown completes rather than at job purge time (likely minutes later). bug 2539
-
Morris Jette authored
Change burst buffer use completion message from "SLURM Job_id=1360353 Name=tmp Staged Out, StageOut time 00:01:47" to "SLURM Job_id=1360353 Name=tmp StageOut/Teardown time 00:01:47"
-
- 15 Mar, 2016 2 commits
-
-
Alejandro Sanchez authored
-
Tim Wickberg authored
Bug 2543.
-
- 14 Mar, 2016 2 commits
-
-
Danny Auble authored
on only one port like TopologyParam=NoInAddrAny does for everything else.
-
Tim Wickberg authored
There's no /proc on *BSD, and BSD handles OOM in a completely different way.
-
- 11 Mar, 2016 1 commit
-
-
Tim Wickberg authored
Return [0-100:2] formatting, rather than [0,2,4,6,8,...], when using a step function. This was inadvertently broken in 14.11 with commit 5ffdca92. Bug 2535.
-
- 10 Mar, 2016 1 commit
-
-
Morris Jette authored
-
- 09 Mar, 2016 2 commits
-
-
Morris Jette authored
Fix Cray NHC spawning on job requeue. Previous logic would leave nodes allocated to a requeued job as non-usable on job termination. Specifically, each job has a "cleaning/cleaned" flag. Once a job terminates, the cleaning flag is set; then, after the job's node health check completes, the value gets set to cleaned. If the job is requeued, on its second (or subsequent) termination the select/cray plugin is called to launch the NHC. The plugin sees the "cleaned" flag already set and logs:
error: select_p_job_fini: Cleaned flag already set for job 1283858, this should never happen
then returns, never launching the NHC. Since the termination of the job NHC triggers releasing job resources (CPUs, memory, and GRES), those resources are never released for use by other jobs. Bug 2384
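A sketch of that flag's life cycle and the shape of the fix (hypothetical names, not the select/cray plugin's actual code): a requeue must reset the state so a later termination launches NHC again.

    enum clean_state { CLEAN_NONE, CLEANING, CLEANED };

    struct job { enum clean_state clean_state; };
    void launch_nhc(struct job *job);     /* sets CLEANED when NHC ends */

    void job_requeue(struct job *job)
    {
        job->clean_state = CLEAN_NONE;    /* allow NHC to run again */
    }

    void job_fini(struct job *job)
    {
        if (job->clean_state == CLEANED)
            return;                       /* old bug: stuck here forever */
        job->clean_state = CLEANING;
        launch_nhc(job);                  /* completion releases resources */
    }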
-
David Gloe authored
An error in slurmconfgen_smw.py caused it to parse the nic as the nid. On some systems those values differ, causing the generated slurm.conf file to be incorrect. Bug 2532.
-
- 08 Mar, 2016 2 commits
-
-
Bill Brophy authored
route_p_split_hostlist was not thread-safe, and would cause one of several segfaults depending on where in the initialization code each thread was. Bug 2495.
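A standard way to make lazy initialization thread-safe, shown as a sketch (not necessarily the approach this commit took):

    #include <pthread.h>

    static pthread_once_t init_once = PTHREAD_ONCE_INIT;

    static void _do_init(void)
    {
        /* build the shared routing tables exactly once */
    }

    int route_split_hostlist_safe(void)
    {
        /* every thread funnels through here; only the first runs init,
         * and later callers block until it has finished */
        pthread_once(&init_once, _do_init);
        return 0;
    }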
-
Tim Wickberg authored
Was incorrectly displaying "(null)" even when loaded successfully.
-
- 05 Mar, 2016 1 commit
-
-
Danny Auble authored
-
- 04 Mar, 2016 1 commit
-
-
Danny Auble authored
-
- 03 Mar, 2016 4 commits
-
-
Danny Auble authored
-
Brian Christiansen authored
Bug 2507
-
Morris Jette authored
Step GRES value changed from type "int" to "int64_t" to support larger values. Previous logic could fail for step allocation values over 32 bits. Other GRES values are already 64-bit.
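Why the width matters, in miniature (the value is arbitrary): a count above 2^31 - 1 wraps in a 32-bit int but is represented exactly in an int64_t.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t gres_cnt = 3000000000LL;       /* > 2^31 - 1 */
        int32_t narrow   = (int32_t)gres_cnt;  /* wraps negative */

        printf("64-bit: %lld  32-bit: %d\n",
               (long long)gres_cnt, narrow);
        return 0;
    }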
-
Danny Auble authored
slurmstepd to close potentially open ones. It was pointed out that slurmd, when using acct_gather_energy/ipmi, links to freeipmi, which could possibly open /dev/ipmi0 as root without the close-on-exec flag set while launching a step, leaving it open in the user's app. What this does is set the flag on the first 256 file descriptors to mitigate the concern. Reported by Maksym Planeta. Bug 2506
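The mitigation, sketched (the starting descriptor and function name are my choices, not necessarily the commit's):

    #include <fcntl.h>

    /* Mark descriptors 3..255 close-on-exec so anything a linked
     * library (e.g. freeipmi holding /dev/ipmi0) left open is closed
     * across exec instead of leaking into the user's app. */
    static void set_cloexec_first_256(void)
    {
        int fd, flags;

        for (fd = 3; fd < 256; fd++) {   /* keep stdin/stdout/stderr */
            flags = fcntl(fd, F_GETFD);
            if (flags >= 0)
                fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
        }
    }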
-
- 02 Mar, 2016 2 commits
-
-
Gary B Skouson authored
Previous logic tested whatever the job's partition pointer indicated rather than the partition in which we are trying to run the job. This bug was introduced in Slurm version 15.08.5, Nov 16, 2015, commit 94f0e948. bug 2499
-
Thomas Cadeau authored
-
- 01 Mar, 2016 2 commits
-
-
Tim Wickberg authored
-
Morris Jette authored
Ensure that a job is completely launched before trying to suspend it. Previous logic would start suspend processing early in the life of the slurmstepd process, after its listening socket was open but before the tasks were launched. This defers the suspend logic until all prologs and setup complete and the tasks are launched. This is important in the case of gang scheduling, in which newly launched jobs can be immediately suspended. bug 2494
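The deferral can be pictured as a condition-variable handshake (a sketch, not slurmstepd's actual code): the suspend path waits until the launch path reports that all tasks are running.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t launch_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  launch_cond  = PTHREAD_COND_INITIALIZER;
    static bool tasks_launched = false;

    void launch_complete(void)   /* called after prologs + task launch */
    {
        pthread_mutex_lock(&launch_mutex);
        tasks_launched = true;
        pthread_cond_broadcast(&launch_cond);
        pthread_mutex_unlock(&launch_mutex);
    }

    void handle_suspend(void)    /* may arrive before tasks exist */
    {
        pthread_mutex_lock(&launch_mutex);
        while (!tasks_launched)
            pthread_cond_wait(&launch_cond, &launch_mutex);
        pthread_mutex_unlock(&launch_mutex);
        /* ... now safe to suspend the step's tasks ... */
    }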
-
- 26 Feb, 2016 2 commits
-
-
Danny Auble authored
-
Tim Wickberg authored
Add note to slurm.conf man page about setting "--cpu_bind=no" as part of SallocDefaultCommand if a TaskPlugin is in use.
-
- 25 Feb, 2016 1 commit
-
-
Danny Auble authored
was also given.
-
- 24 Feb, 2016 3 commits
-
-
Danny Auble authored
a partition.
-
Danny Auble authored
This also reverts most of commit fa331e30, as well as commit bd9fa830, which would try to set pn_min_cpus every time a job was updated. If a job didn't request node counts, it was hosed. This commit takes away the magic which was screwing things up. Now the user gets what they asked for, without magic changing things. Bug 2302 Bug 2742 Bug 2478
-
Danny Auble authored
erroneously.
-