- 28 Jul, 2015 1 commit
-
-
Thomas Cadeau authored
-
- 27 Jul, 2015 2 commits
-
-
Brian Christiansen authored
Bug 1819. Composite indexes search left to right. E.g. an index of (inx1, inx2, inx3) will search from left to right: inx2 can't be used in a WHERE clause by itself, it requires inx1 to be present (inx3 is optional). For the rollup index, having time_end first speeds up the query below. The actual rollup queries still benefit from the original rollup index.

sacct -S 07/22-09:41:36 -E 07/22-09:42:37 -i 1-4 -ojobid,start,end,nnodes,nodelist -n -a:

Before:

mysql> explain select t1.account, t1.array_max_tasks, t1.array_task_str, t1.cpus_alloc, t1.cpus_req, t1.derived_ec, t1.derived_es, t1.exit_code, t1.id_array_job, t1.id_array_task, t1.id_assoc, t1.id_block, t1.id_group, t1.id_job, t1.id_qos, t1.id_resv, t3.resv_name, t1.id_user, t1.id_wckey, t1.job_db_inx, t1.job_name, t1.kill_requid, t1.mem_req, t1.node_inx, t1.nodelist, t1.nodes_alloc, t1.partition, t1.priority, t1.state, t1.time_eligible, t1.time_end, t1.time_start, t1.time_submit, t1.time_suspended, t1.timelimit, t1.track_steps, t1.wckey, t1.gres_alloc, t1.gres_req, t1.gres_used, t2.acct, t2.lft, t2.user from compy_job_table as t1 left join compy_assoc_table as t2 on t1.id_assoc=t2.id_assoc left join compy_resv_table as t3 on t1.id_resv=t3.id_resv where ((t1.nodes_alloc between 1 and 4)) && ((t1.time_eligible < 1437550957 && (t1.time_end >= 1437550896 || t1.time_end = 0))) group by id_job, time_submit desc;

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | ALL | id_job,rollup | NULL | NULL | NULL | 120953 | Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_assoc | 1 | Using where
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

After adding the rollup2 index:

mysql> explain (same query as above)

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | range | id_job,rollup,rollup2 | rollup2 | 8 | NULL | 6 | Using index condition; Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_assoc | 1 | Using where
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

The rollup query:

mysql> explain select job.job_db_inx, job.id_job, job.id_assoc, job.id_wckey, job.array_task_pending, job.time_eligible, job.time_start, job.time_end, job.time_suspended, job.cpus_alloc, job.cpus_req, job.id_resv, SUM(step.consumed_energy) from compy_job_table as job left outer join compy_step_table as step on job.job_db_inx=step.job_db_inx and (step.id_step>=0) where (job.time_eligible < 1420102800 && (job.time_end >= 1420099200 || job.time_end = 0)) group by job.job_db_inx order by job.id_assoc, job.time_eligible;

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | job | range | PRIMARY,id_job,rollup,rollup2,wckey,qos,association,array_job,reserv,sacct_def | rollup | 4 | NULL | 1 | Using index condition; Using temporary; Using filesort
1 | SIMPLE | step | ref | PRIMARY | PRIMARY | 4 | slurm_1412.job.job_db_inx | 1 | Using where
2 rows in set (0.01 sec)

A plain sacct is sped up by moving time_end into the middle of the index (e.g. id_user, time_end, time_eligible). sacct_def is for sacct calls with a state specified; sacct_def2 is for a plain sacct call.

Plain sacct, before:

mysql> explain select [same column list and joins as above] where (t1.id_user='1003') && ((t1.time_end >= 1437548400 || t1.time_end = 0)) group by id_job, time_submit desc;

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | ref | id_job,sacct_def | sacct_def | 4 | const | 60476 | Using index condition; Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_assoc | 1 | Using where
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

Plain sacct, after adding sacct_def2:

mysql> explain (same query as above)

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | range | id_job,rollup2,sacct_def,sacct_def2 | sacct_def2 | 8 | NULL | 68 | Using index condition; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_assoc | 1 | Using where
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

Adding the sacct_def2 index didn't affect other queries. sacct -s CA,CD,F,R:

mysql> explain select [same column list and joins as above] where (t1.id_user='1003') && ((t1.state='4' && (t1.time_end && (t1.time_end >= 1438028802))) || (t1.state='3' && (t1.time_end && (t1.time_end >= 1438028802))) || (t1.state='5' && (t1.time_end && (t1.time_end >= 1438028802))) || (t1.time_start && ((!t1.time_end && t1.state=1) || (1438028802 between t1.time_start and t1.time_end)))) group by id_job, time_submit desc;

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | ref | id_job,rollup2,sacct_def,sacct_def2 | sacct_def | 4 | const | 60513 | Using index condition; Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_assoc | 1 | Using where
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | slurm_1412.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

Adding a nodes_alloc index speeds up queries like: sacct -i 2-10000

Before:

mysql> EXPLAIN SELECT [same column list and joins as above, minus t1.cpus_alloc] WHERE ((t1.nodes_alloc between 2 and 10000)) && ((t1.time_start && ((1434384740 BETWEEN t1.time_start AND t1.time_end) || (t1.time_start BETWEEN 1434384740 AND 1434384741)))) GROUP BY id_job, time_submit DESC;

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | ALL | id_job,rollup2 | NULL | NULL | NULL | 117549 | Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | 1411_master.t1.id_assoc | 1 | NULL
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | 1411_master.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)

After adding nodes_alloc:

mysql> EXPLAIN (same query as above)

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | t1 | range | id_job,rollup2,nodes_alloc | nodes_alloc | 4 | NULL | 720 | Using index condition; Using where; Using temporary; Using filesort
1 | SIMPLE | t2 | eq_ref | PRIMARY | PRIMARY | 4 | 1411_master.t1.id_assoc | 1 | NULL
1 | SIMPLE | t3 | ref | PRIMARY | PRIMARY | 4 | 1411_master.t1.id_resv | 1 | NULL
3 rows in set (0.00 sec)
-
Morris Jette authored
If node definitions in slurm.conf are spread across multiple lines and topology/tree is configured, then sub-optimal node selection can occur. bug 1645
-
- 23 Jul, 2015 1 commit
-
-
Morris Jette authored
On Cray we were seeing an srun error reading the slurmstepd message header. This was due to a shutdown race condition, and the error message has been removed. Cray: disable LDAP references from slurmstepd on job launch for improved scalability. Document the EioTimeout configuration parameter for large systems. bug 1786
-
- 22 Jul, 2015 4 commits
-
-
Nicolas Joly authored
Previously only batch job completions were being captured. bug 1820
-
David Bigagli authored
-
Morris Jette authored
If a job was running on a node when slurmctld restarted, the slurmd would notify slurmctld when the job ended, and slurmctld would change the node's state from UNKNOWN to IDLE, at least if the job termination happened prior to the slurmd being asked for configuration information. The configuration information might then not be collected for some time. I've modified the code to address this problem and try to collect configuration information from every node after slurmctld startup, eliminating this race condition. bug 1805
-
Brian Christiansen authored
Bug 1208
-
- 21 Jul, 2015 2 commits
-
-
Chandler Wilkerson authored
This patch provides a rewrite of how /proc/cpuinfo is parsed in common_jag.c, as the original code made the incorrect assumption that cpuinfo follows a sane format across architectures ;-)

The motivation for this patch is that the original code was producing stack smashing on a POWER7 running RHEL6.4. Red Hat adds -fstack-protector along with a lot of other protective CFLAGS when building RPMs. The code ran okay with -fno-stack-protector, but that is not the best work-around.

The relevant /proc/cpuinfo line on an Intel (Xeon X5675) system looks like:
cpu MHz : 3066.915
whereas the relevant line on a POWER7 system is:
clock : 3550.000000MHz

My patch replaces the assumption that the relevant number starts 11 characters into the string with another assumption: that the relevant number starts two characters after a colon in a string that matches (M|G)Hz.

All in all, the function has a few more calls, which may be a performance issue if it has to be called multiple times, but since the section I edited only gets evaluated if we don't know the CPU frequency, getting it right will actually result in fewer string operations and fewer unnecessary opens of /proc/cpuinfo on systems likewise affected.

Finally, I also read the actual value into a double and multiply it up to the size indicated by the suffix, so we end up with KHz? It was unclear what the original code intended, since it matched both MHz and GHz, replaced the decimal point with a zero, and read the result as an int.

-- Chandler Wilkerson, Center for Research Computing, Rice University
-
Danny Auble authored
This reverts commit 2c95e2d2. Conflicts: src/plugins/select/alps/basil_interface.c

This is related to bug 1822. It isn't clear why the code was taken out in that commit in the first place, and based on commit 2e2de6a4 (which is the reason for the conflict) we tried unsuccessfully to put it back. The only apparent difference here is always setting mppnppn = 1 instead of always setting it to job_ptr->details->ntasks_per_node when ntasks is not set. This appears to only be an issue with salloc or sbatch, as ntasks is always set for srun.
-
- 20 Jul, 2015 1 commit
-
-
Brian Christiansen authored
Bug 1783
-
- 18 Jul, 2015 1 commit
-
-
Brian Christiansen authored
Prevent slurmctld abort on update of advanced reservation that contains no nodes. bug 1814
-
- 17 Jul, 2015 4 commits
-
-
Morris Jette authored
An srun command line value of either --mem or --mem-per-cpu will override both the SLURM_MEM_PER_CPU and SLURM_MEM_PER_NODE environment variables. Without this change, salloc or sbatch setting --mem-per-cpu (or a configuration of DefMemPerCPU) would override the step's --mem value.
-
Danny Auble authored
change was made.
-
Danny Auble authored
when removing a limit from an association on multiple clusters at the same time.
-
Danny Auble authored
to gain the correct limit when a parent account is root and you remove a subaccount's limit which exists on the parent account.
-
- 16 Jul, 2015 2 commits
-
-
Morris Jette authored
-
Brian Christiansen authored
Bug 1770
-
- 15 Jul, 2015 3 commits
-
-
Morris Jette authored
-
Nathan Yee authored
-
Nathan Yee authored
Bug 1798
-
- 14 Jul, 2015 3 commits
-
-
Danny Auble authored
-
Morris Jette authored
Previous logic could fail to update some tasks of a job array for some fields. bug 1777
-
Danny Auble authored
Bind to the interface returned by gethostname instead of any address on the node, which avoids RSIP issues on Cray systems. This is most likely useful on other systems as well.
-
- 13 Jul, 2015 2 commits
-
-
Morris Jette authored
Fix to job array update logic that can result in a task ID of 4294967294. To reproduce:

$ sbatch --exclusive -a 1,3,5 tmp
Submitted batch job 11825
$ scontrol update jobid=11825_[3,4,5] timelimit=3
$ squeue
  JOBID PARTITION  NAME  USER ST  TIME NODES NODELIST(REASON)
11825_3     debug   tmp jette PD  0:00     1 (None)
11825_4     debug   tmp jette PD  0:00     1 (None)
11825_5     debug   tmp jette PD  0:00     1 (None)
  11825     debug   tmp jette PD  0:00     1 (Resources)

A new job array entry was created for task ID 4 and the "master" job array record now has a task ID of 4294967294. The logic with the bug was using the wrong variable in a test. bug 1790
-
Gene Soudlenkov authored
Bug 1799
-
- 11 Jul, 2015 1 commit
-
-
Nathan Yee authored
Increase total backfill scheduler run time in stats_info_response_msg data structure from 32 to 64 bits in order to prevent overflow.
-
- 10 Jul, 2015 4 commits
-
-
Morris Jette authored
Remove new capabilities added in commit ad9c2413. Leave the new logic only in version 15.08, which has related performance improvements in the slurmctld agent code; see commit 53534f49.
-
Morris Jette authored
Modify slurmctld outgoing RPC logic to support more parallel tasks (up to 85 RPCs and 256 pthreads; the old logic supported up to 21 RPCs and 256 threads). This change can dramatically improve performance for RPCs operating on small node counts. bug 1786
-
Morris Jette authored
Correct "sdiag" backfill cycle time calculation if it yields locks. A microsecond value was being treated as a second value, resulting in an overflow in the calculation. bug 1788
-
Danny Auble authored
-
- 09 Jul, 2015 2 commits
-
-
Morris Jette authored
The slurmctld logic throttles some RPCs so that only one of them can execute at a time in order to reduce contention for the job, partition and node locks (only one of the affected RPCs can execute at any time anyway, and this lets other RPC types run). While an RPC is stuck in the throttle function, do not count that thread against the slurmctld thread limit. bug 1794
-
Morris Jette authored
Changed spaces to tabs at start of lines. Minor changes to some formatting.
Added the new files to the RPM (slurm.spec file).
Prevent memory leak of "l_name" variable if which_power_layout() function is called more than once.
Initialize "cpufreq" variable in powercap_get_cpufreq() function.
Array "tmp_max_watts_dvfs" could be NULL and used if "max_watts_dvfs" variable is NULL in powercap_get_node_bitmap_maxwatts_dvfs().
Variable "tmp_pcap_cpu_freq" could be used with uninitialized value in function _get_req_features().
Variable "tmp_max_watts" could be used with uninitialized value in function _get_req_features().
Array "tmp_max_watts_dvfs" could be used with uninitialized value in function _get_req_features().
Array "allowed_freqs" could be NULL and used if "node_record_count" variable is zero in powercap_get_job_nodes_numfreq().
Overwriting a memory buffer header (especially with different data types) is just asking for something bad to happen. This code is from function powercap_get_job_nodes_numfreq():
allowed_freqs = xmalloc(sizeof(int)*((int)num_freq+2));
allowed_freqs[-1] = (int) num_freq;
Clean up memory on slurmctld shutdown.
-
- 08 Jul, 2015 3 commits
-
-
David Bigagli authored
-
Morris Jette authored
-
Morris Jette authored
-
- 07 Jul, 2015 4 commits
-
-
Trey Dockendorf authored
-
Trey Dockendorf authored
Add job record qos field and partition record allow_qos field.
-
Trey Dockendorf authored
-
Trey Dockendorf authored
This patch moves the QOS update of an existing job to be before the partition update. This ensures a new QOS value is the value used when doing validations against things like a partition's AllowQOS and DenyQOS. Currently if a two partitions have AllowQOS that do not share any QOS, the order of updates prevents a job from being moved from one partition to another using something like the following: scontrol update job=<jobID> partition=<new part> qos=<new qos>
-