- 23 Jan, 2013 1 commit
-
-
Morris Jette authored
-
- 22 Jan, 2013 4 commits
-
-
Danny Auble authored
-
Magnus Jonsson authored
-
jette authored
Correction to CPU allocation logic for cores without hyperthreading. Backport of https://github.com/SchedMD/slurm/commit/1ef41ac9590e018e631eaefb31254622984b7d2d
-
jette authored
-
- 19 Jan, 2013 1 commit
-
-
jette authored
-
- 18 Jan, 2013 7 commits
-
-
Morris Jette authored
From Chris Holmes, HP: After several days of brainstorming and debugging, I have identified a bug in SLURM 2.5.0rc2 related to the 'tree' topology. It occurs so early in the execution of the whole SLURM machinery that it took me some time to figure it out (say, 100 or 200 jobs showing the issue, with various debugging levels increased and extra instrumentation, sometimes with uncertain reliability)... For every “switch”, a bitmap of the nodes seen below that switch is built as the topology is discovered through 'topology.conf'. There is code in read_config.c, executed when the SLURM control daemon starts, that reorders the nodes (by hostname, by default) after the switch table (i.e. the bitmaps) has already been built. Reordering the nodes means that the switch bitmaps become wrong.
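A minimal sketch of the invariant at stake, using hypothetical C types and names (SLURM itself uses its own bitstring type rather than a bool array): each switch bitmap indexes into the global node table, so once the bitmaps exist, any permutation of that table must either be avoided or be followed by a remap along these lines.

    #include <stdbool.h>
    #include <string.h>

    #define MAX_NODES 1024

    /* Hypothetical stand-in for a switch record: bit i set means the
     * node at index i of the global node table sits below this switch. */
    struct switch_record {
        bool node_bitmap[MAX_NODES];
    };

    /* After the node table is permuted (e.g. sorted by hostname), index i
     * no longer names the same node.  old_to_new[i] gives the new index
     * of the node formerly at index i; rebuilding each bitmap through it
     * restores the invariant. */
    static void remap_switch_bitmap(struct switch_record *sw,
                                    const int *old_to_new, int node_cnt)
    {
        bool fixed[MAX_NODES] = { false };
        for (int i = 0; i < node_cnt; i++) {
            if (sw->node_bitmap[i])
                fixed[old_to_new[i]] = true;
        }
        memcpy(sw->node_bitmap, fixed, sizeof(fixed));
    }

The simpler alternative, matching the description above, is to finish any reordering of the node table before the switch bitmaps are built at all.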
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
slurm.MEM_PER_CPU, slurm.NO_VAL, etc.
-
Morris Jette authored
-
Morris Jette authored
-
Phil Eckert authored
About a year ago I submitted a modification that you incorporated into SLURM 2.4, which allows an admin to modify a job to use a QOS even though the user does not have access to that QOS. However, I must have tested it without having Accounting set to enforce QOSs. So, if an admin modifies a job to use a QOS the user doesn't have access to, the job will be modified, but it will end up in a state of InvalidQOS. That is reasonable, since it handles the case where a user has their QOS removed. A problem, however, is that even though the scheduler won't schedule the job, backfill still will.

One approach would be to fix backfill to be consistent with the scheduler (which should probably happen regardless), but my thought would be to modify the scheduler to allow the QOS as long as it was set by an admin, since that was the intent of the modification to begin with. I believe it would only take a single line to change, just adding a check on job_ptr->limit_set_qos to make sure it was set by an admin:

    if (job_ptr->qos_id) {
        slurmdb_association_rec_t *assoc_ptr;
        assoc_ptr = (slurmdb_association_rec_t *)job_ptr->assoc_ptr;
        if (assoc_ptr &&
            !bit_test(assoc_ptr->usage->valid_qos, job_ptr->qos_id) &&
            !job_ptr->limit_set_qos) {
            info("sched: JobId=%u has invalid QOS", job_ptr->job_id);
            xfree(job_ptr->state_desc);
            job_ptr->state_reason = FAIL_QOS;
            continue;
        } else if (job_ptr->state_reason == FAIL_QOS) {
            xfree(job_ptr->state_desc);
            job_ptr->state_reason = WAIT_NO_REASON;
        }
    }

Phil
-
- 17 Jan, 2013 3 commits
-
-
David Bigagli authored
-
Morris Jette authored
-
David Bigagli authored
-
- 16 Jan, 2013 16 commits
-
-
Morris Jette authored
-
David Bigagli authored
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
Without this change a high-priority batch job may not start at submit time. In addition, a pending job with multiple partitions may be cancelled when the scheduler runs if any of its partitions cannot be used by the job.
-
David Bigagli authored
-
Morris Jette authored
The original work this was based upon has been replaced with new logic.
-
Morris Jette authored
Without this patch, if the first listed partition lacks nodes with the required features, the job would be rejected.
-
Morris Jette authored
While this will validate the job at submit time, it results in redundant looping when scheduling jobs. Working on an alternate patch now.
-
Danny Auble authored
-
Danny Auble authored
submission.
-
Morris Jette authored
-
-
Morris Jette authored
-
Morris Jette authored
The gres_plugin_job_test function was returning a count of cores available to a job, but the select plugins were treating this as a CPU count. This change converts the core count into a CPU count as needed in the select plugin and updates the comments for gres_plugin_job_test().
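As a hedged illustration of the conversion this change describes (function and parameter names are assumptions, not the actual select plugin code): one core exposes threads_per_core logical CPUs, so a core count coming from the GRES layer has to be scaled before it is compared with CPU counts.

    /* Illustrative core-to-CPU conversion; on nodes without
     * hyperthreading threads_per_core is 1 and the two counts coincide. */
    static int cores_to_cpus(int core_cnt, int threads_per_core)
    {
        if (threads_per_core < 1)
            threads_per_core = 1;   /* defensive default for bad config */
        return core_cnt * threads_per_core;
    }

For example, 4 cores on a node with 2 hardware threads per core yield 8 CPUs; treating the 4 directly as a CPU count would under-allocate by half.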
-
Danny Auble authored
-
- 15 Jan, 2013 1 commit
-
-
Matthieu Hautreux authored
QoS limits enforcement on the controller side is based on a list of used_limits per user. When a user is not yet in the list, which is common when the controller is restarted and the user has no running jobs, the current logic is to skip some of the "per user" limit checks and let the submission succeed. However, if one of these limits is zero-valued, the check should fail, as it means that no job should be submitted at all: any job would necessarily cross the limit. This patch ensures that even when a user is not yet present in the per-user used_limits list, the zero-valued limits are correctly enforced.
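A minimal sketch of the fix's logic, with hypothetical structure and field names standing in for SLURM's QoS records: when the user has no used_limits entry, current usage is implicitly zero, so a zero-valued limit can be enforced immediately.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for a QoS limit and a per-user usage record. */
    struct qos_limits  { int max_jobs_per_user; /* 0 means no jobs allowed */ };
    struct used_limits { int jobs;              /* jobs currently counted  */ };

    /* Return true if one more job may be submitted under this QoS.
     * 'usage' is NULL when the user is not yet in the used_limits list
     * (typical right after a controller restart). */
    static bool job_submit_ok(const struct qos_limits *qos,
                              const struct used_limits *usage)
    {
        if (qos->max_jobs_per_user == 0)
            return false;   /* zero limit: reject even without a usage entry */
        if (usage == NULL)
            return true;    /* no recorded usage and a non-zero limit */
        return usage->jobs + 1 <= qos->max_jobs_per_user;
    }

The pre-patch behavior corresponds to returning true whenever usage is NULL, which silently waives a zero limit.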
-
- 14 Jan, 2013 6 commits
-
-
jette authored
-
Hongjia Cao authored
On job step launch failure, the function slurm_step_launch_wait_finish() will be called twice in launch/slurm, which causes srun to abort:

    srun: error: Task launch for 22495.0 failed on node cn6: Job credential expired
    srun: error: Application launch failed: Job credential expired
    srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
    cn5 cn4 cn7
    srun: error: Timed out waiting for job step to complete
    srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
    srun: error: Timed out waiting for job step to complete
    srun: bitstring.c:174: bit_test: Assertion `(b) != ((void *)0)' failed.
    Aborted (core dumped)

The attached patch (version 2.5.1) fixes it, but the messages

    Job step aborted: Waiting up to 2 seconds for job step to finish.
    Timed out waiting for job step to complete

will still be printed twice.
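One hedged sketch of how the double call could be guarded against (illustrative only; this is not necessarily how the actual launch/slurm patch works): remember that the wait-finish path has already run for a step and make the second invocation a no-op.

    #include <stdbool.h>

    /* Hypothetical per-step state; field and function names are assumptions. */
    struct step_launch_state {
        bool wait_finish_done;   /* set once the first wait has completed */
    };

    static void step_launch_wait_finish_once(struct step_launch_state *sls)
    {
        if (sls->wait_finish_done)
            return;              /* second caller: already handled */
        sls->wait_finish_done = true;
        /* ... the original wait/cleanup logic runs exactly once ... */
    }

If output is emitted before the guard is checked, it would still appear twice, which would be consistent with the residual duplicate messages noted above.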
-
Morris Jette authored
-
Yair Yarom authored
-
Morris Jette authored
-
Morris Jette authored
-
- 11 Jan, 2013 1 commit
-
-
jette authored
-