- 10 May, 2016 2 commits
-
-
Brian Christiansen authored
Thread names can only be 16 characters long, and we already know the threads are from the slurmctld.
-
Brian Christiansen authored
-
- 09 May, 2016 1 commit
-
-
Brian Christiansen authored
-
- 06 May, 2016 5 commits
-
-
Morris Jette authored
If the node_feature/knl_cray plugin is configured and a GresType of "hbm" is not defined, then add it to the GRES tables. Without this, references to a GRES of "hbm" (either by a user or Slurm's internal logic) will generate error messages. bug 2708
-
Morris Jette authored
-
John Thiltges authored
With slurm-15.08.10, we're seeing occasional segfaults in slurmstepd. The logs point to the following line:

    slurm-15.08.10/src/slurmd/slurmstepd/mgr.c:2612

On that line, _get_primary_group() is accessing the result of getpwnam_r():

    *gid = pwd0->pw_gid;

If getpwnam_r() cannot find a matching password record, it will set the result (pwd0) to NULL but still return 0. When the pointer is then accessed, it causes a segfault. Checking the result variable (pwd0) to determine success should fix the issue.
-
Morris Jette authored
Note that Slurm cannot support heterogeneous core counts across the NUMA nodes. bug 2704
-
Marco Ehlert authored
I would like to mention a problem which seems to be a calculation bug of used_cores in slurm version 15.08.7.

If a node is divided into 2 partitions using MaxCPUsPerNode, like this slurm.conf configuration:

    NodeName=n1 CPUs=20
    PartitionName=cpu NodeName=n1 MaxCPUsPerNode=16
    PartitionName=gpu NodeName=n1 MaxCPUsPerNode=4

I run into a strange scheduling situation. The situation occurs after a fresh restart of the slurmctld daemon, when I start jobs one by one.

Case 1:

    systemctl restart slurmctld.service
    sbatch -n 16 -p cpu cpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh

Problem: the gpu jobs are kept in PENDING state.

This picture changes if I start the jobs this way.

Case 2:

    systemctl restart slurmctld.service
    sbatch -n 1 -p gpu gpu.sh
    scancel <gpu job_id>
    sbatch -n 16 -p cpu cpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh
    sbatch -n 1 -p gpu gpu.sh

and all jobs run fine.

By looking into the code I found an incorrect calculation of 'used_cores' in function _allocate_sc() in plugins/select/cons_res/job_test.c:

    for (c = core_begin; c < core_end; c++) {
        i = (uint16_t) (c - core_begin) / cores_per_socket;
        if (bit_test(core_map, c)) {
            free_cores[i]++;
            free_core_count++;
        } else {
            used_cores[i]++;
        }
        if (part_core_map && bit_test(part_core_map, c))
            used_cpu_array[i]++;
    }

This part of the code only works if a part_core_map already exists for the partition, or on a completely free node. In case 1 no part_core_map has been created for the gpu partition yet. When a gpu job starts, core_map contains the 4 cores left over from the cpu job, so all the busy cores of the cpu partition are counted as used cores in the gpu partition, and this condition will match in a later code path:

    free_cpu_count + used_cpu_count > job_ptr->part_ptr->max_cpus_per_node

which is definitely wrong.

As soon as a part_core_map exists, meaning a gpu job was started on a free node (case 2), there is no problem at all. To make case 1 work I changed the code above to the following, and everything works fine:

    for (c = core_begin; c < core_end; c++) {
        i = (uint16_t) (c - core_begin) / cores_per_socket;
        if (bit_test(core_map, c)) {
            free_cores[i]++;
            free_core_count++;
        } else {
            if (part_core_map && bit_test(part_core_map, c)) {
                used_cpu_array[i]++;
                used_cores[i]++;
            }
        }
    }

I am not sure this code change is really good, but it fixes my problem.
-
- 05 May, 2016 9 commits
-
-
Morris Jette authored
-
Tim Wickberg authored
-
Morris Jette authored
RHEL6 requires resetting the process's "dumpable" flag after all seteuid() calls complete in order to generate a core file. bug 2334
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
Do not attempt to power down a node which has never responded if the slurmctld daemon restarts without state. bug 2698
-
Danny Auble authored
commit 17a9d97e.
-
Danny Auble authored
they are in a step.
-
- 04 May, 2016 10 commits
-
-
Tim Wickberg authored
1) step_ptr->step_layout has already been dereferenced plenty of times. 2) Can't possibly have rpc_version >= MIN_PROTOCOL_VERSION and < 8, so this code is dead.
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Bill Brophy authored
-
Tim Wickberg authored
-
Morris Jette authored
Issue the "node_reinit" command on all nodes identified in a single call to capmc. Only if that fails will individual nodes be restarted using multiple pthreads. This improves efficiency while retaining the ability to operate on individual nodes when some failure occurs. bug 2659
-
Morris Jette authored
Issue the "node_off" command on all nodes identified in a single call to capmc. Only if that fails will individual nodes be powered down using multiple pthreads. This improves efficiency while retaining the ability to operate on individual nodes when some failure occurs. bug 2659
-
Danny Auble authored
-
Danny Auble authored
# Conflicts:
#	META
-
Danny Auble authored
-
- 03 May, 2016 10 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
Tim Wickberg authored
-
Danny Auble authored
-
Danny Auble authored
-
Brian Christiansen authored
E.g. info, debug, etc.
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Tim Wickberg authored
-
Eric Martin authored
-
- 02 May, 2016 3 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-