1. 26 Feb, 2013 2 commits
  2. 25 Feb, 2013 1 commit
  3. 22 Feb, 2013 3 commits
  4. 21 Feb, 2013 2 commits
  5. 20 Feb, 2013 1 commit
  6. 15 Feb, 2013 1 commit
  7. 13 Feb, 2013 2 commits
  8. 12 Feb, 2013 3 commits
  9. 08 Feb, 2013 2 commits
  10. 07 Feb, 2013 1 commit
  11. 06 Feb, 2013 2 commits
  12. 05 Feb, 2013 6 commits
  13. 04 Feb, 2013 1 commit
  14. 01 Feb, 2013 2 commits
  15. 31 Jan, 2013 2 commits
  16. 29 Jan, 2013 2 commits
  17. 23 Jan, 2013 1 commit
    • In select/cons_res, correct logic when job removed from only some nodes. · eb3c1046
      jette authored
      I ran into a problem with slurm-2.5.1 where IDLE nodes cannot be
      allocated to jobs. This can be reproduced as follows:

      First, submit a job with the --no-kill option (I have SLURM_EXCLUSIVE
      set to allocate nodes exclusively by default). Then set one of the
      nodes allocated to the job (cn2) to state DOWN:
      
      srun: error: Node failure on cn2
      srun: error: Node failure on cn2
      srun: error: cn2: task 0: Killed
      ^Csrun: interrupt (one more within 1 sec to abort)
      srun: task 1: running
      srun: task 0: exited abnormally
      ^Csrun: sending Ctrl-C to job 22605.0
      srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
      srun: Force Terminated job step 22605.0
      
      Then change the state of the node back to IDLE. But it still cannot
      be allocated to jobs:
      
      srun: job 22606 queued and waiting for resources
      
        JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)
        22606      work hostname     root  PD       0:00      1 (Resources)
        22604      work   sbatch     root   R       3:06      1 cn1
      
      NodeName=cn2 Arch=x86_64 CoresPerSocket=8
         CPUAlloc=16 CPUErr=0 CPUTot=16 CPULoad=0.05 Features=abc
         Gres=(null)
         NodeAddr=cn2 NodeHostName=cn2
         OS=Linux RealMemory=30000 Sockets=2 Boards=1
         State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=1
         BootTime=2012-12-24T15:22:34 SlurmdStartTime=2013-01-14T11:06:32
         CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
      
      I traced the problem to select/cons_res. The call sequence is:
      
      slurmctld/node_mgr.c: update_node() =>
      slurmctld/job_mgr.c: kill_running_job_by_node_name() =>
      excise_node_from_job() =>
      plugins/select/cons_res/select_cons_res.c: select_p_job_resized() =>
      _rm_job_from_one_node() => _build_row_bitmaps() =>
      common/job_resources: remove_job_from_cores()
      
      If there are other jobs running in the partition, the partition row
      bitmap will not be rebuilt correctly. In the example above, before
      _build_row_bitmaps() the output of _dump_part() is:
      
      [2013-01-19T13:24:56+08:00] part:work rows:1 pri:1
      [2013-01-19T13:24:56+08:00]   row0: num_jobs 2: bitmap: 16,32-63
      
      After setting the node down, the output of _dump_part() is:
      
      [2013-01-19T13:24:56+08:00] part:work rows:1 pri:1
      [2013-01-19T13:24:56+08:00]   row0: num_jobs 2: bitmap: 16,32-47
      
      The cores of cn2 are not marked available. Instead, cores of other
      nodes are released. When another job then requests node cn2, the
      following log message appears:
      
      [2013-01-19T13:25:03+08:00] debug3: cons_res: _vns: node cn2 busy
      
      I do not understand the design of select/cons_res well enough to
      know how to fix this properly. But it seems that _build_row_bitmaps()
      should not be called here, since the job is not removed entirely;
      only one of its nodes is released.
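
      A minimal, self-contained sketch of this failure mode (plain 64-bit
      masks instead of SLURM's bitstr_t/job_resources structures; the
      node-to-core layout and the stale-index reading of the bug are
      assumptions for illustration only):

      #include <stdio.h>
      #include <stdint.h>

      #define CORES_PER_NODE 16

      /* Mask of the cores node `inx` occupies in the partition-wide
       * bitmap, assuming a consecutive layout: cn1 = cores 0-15,
       * cn2 = 16-31, cn3 = 32-47, cn4 = 48-63. */
      static uint64_t node_mask(int inx)
      {
              return (uint64_t)0xFFFF << (inx * CORES_PER_NODE);
      }

      int main(void)
      {
              /* One job spanning cn2..cn4; the row bitmap is the union
               * of all job core bitmaps. */
              uint64_t row = node_mask(1) | node_mask(2) | node_mask(3);

              /* Targeted removal (what the report argues should happen):
               * when cn2 is excised, clear exactly cn2's cores. */
              uint64_t ok = row & ~node_mask(1);   /* frees 16-31 */

              /* Rebuild against a stale node index: the job's node list
               * already shrank, so positions shift and the wrong node's
               * cores are cleared, leaving cn2 marked busy. */
              uint64_t bad = row & ~node_mask(3);  /* frees 48-63 */

              printf("correct: %016llx\n", (unsigned long long)ok);
              printf("buggy:   %016llx (cn2 still busy)\n",
                     (unsigned long long)bad);
              return 0;
      }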
  18. 22 Jan, 2013 1 commit
  19. 18 Jan, 2013 3 commits
    • Fix topology/tree logic when nodes defined in slurm.conf get re-ordered · 29df4c83
      Morris Jette authored
      From Chris Holmes, HP:
      After several days of brainstorming and debugging, I have identified
      a bug in SLURM 2.5.0rc2 related to the 'tree' topology. It occurs so
      early in the execution of the whole SLURM machinery that it took me
      some time to figure out (say, 100 or 200 jobs showing the issue, with
      various debugging levels and extra instrumentation, sometimes of
      uncertain reliability)...
      
      For every “switch”, a bitmap of the nodes seen (downward) by that
      switch is built as the topology is discovered through
      'topology.conf'.

      There is code in read_config.c, executed when the SLURM control
      daemon starts, that reorders the nodes (by hostname, by default)
      after the switch table (i.e. the bitmaps) has already been built.
      Since each bitmap refers to nodes by their position in the node
      table, reordering the nodes makes the switch bitmaps wrong.
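
      A tiny self-contained sketch of the effect (plain C with made-up
      node names; not SLURM's actual data structures):

      #include <stdio.h>

      #define NNODES 4

      int main(void)
      {
              /* Node table in topology.conf discovery order... */
              const char *before[NNODES] = { "cn3", "cn1", "cn4", "cn2" };
              /* ...and after read_config.c sorts it by hostname. */
              const char *after[NNODES]  = { "cn1", "cn2", "cn3", "cn4" };

              /* Switch bitmap built against the old order: this switch
               * sees cn3 and cn4, i.e. table positions 0 and 2. */
              unsigned sw_bitmap = (1u << 0) | (1u << 2);

              /* The sort does not remap the bitmap, so the same bits
               * now select different nodes. */
              for (int i = 0; i < NNODES; i++) {
                      if (sw_bitmap & (1u << i))
                              printf("bit %d: built for %s, read as %s\n",
                                     i, before[i], after[i]);
              }
              return 0;
      }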
    • Make more variables available to job_submit/lua plugin · 28740196
      Morris Jette authored
      slurm.MEM_PER_CPU, slurm.NO_VAL, etc.
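
      For context, a hedged sketch of how a C plugin can expose such
      constants to an embedded Lua script (the helper and the constant
      values here are illustrative, not SLURM's actual code):

      #include <lua.h>
      #include <lauxlib.h>
      #include <lualib.h>

      /* Illustrative values; slurm.h defines the real ones. */
      #define MEM_PER_CPU 0x80000000
      #define NO_VAL      0xfffffffe

      static void set_const(lua_State *L, const char *name, double val)
      {
              lua_pushnumber(L, val);      /* push the value      */
              lua_setfield(L, -2, name);   /* slurm[name] = value */
      }

      int main(void)
      {
              lua_State *L = luaL_newstate();
              luaL_openlibs(L);

              lua_newtable(L);             /* becomes the "slurm" table */
              set_const(L, "MEM_PER_CPU", (double)MEM_PER_CPU);
              set_const(L, "NO_VAL", (double)NO_VAL);
              lua_setglobal(L, "slurm");

              /* A job_submit.lua script can now compare against them,
               * e.g. if job_desc.pn_min_memory == slurm.MEM_PER_CPU ... */
              luaL_dostring(L, "print(('%.0f'):format(slurm.NO_VAL))");
              lua_close(L);
              return 0;
      }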
    • Permit job with invalid QOS to run if QOS set by administrator · 7aef4f80
      Phil Eckert authored
      About a year ago I submitted a modification that you incorporated
      into SLURM 2.4, which allowed an admin to modify a job to use a QOS
      even though the user did not have access to that QOS.

      However, I must have tested it without having accounting set to
      enforce QOS's. So, if an admin modifies a job to use a QOS they
      don't have access to, the job will be modified, but it will end up
      in a state of InvalidQOS. That is reasonable in itself, since it
      handles the case where a user has their QOS removed. A problem,
      however, is that even though the scheduler won't schedule the job,
      backfill still will.
      
      One approach would be to fix backfill to be consistent with
      the scheduler (which should probably occur regardless), but
      my thought would be to modify the scheduler to allow the QOS
      as long as it was set by an admin, since that was the intent
      of the modification to begin with.
      
      I believe it would take only a single line to change: just add a
      check on job_ptr->limit_set_qos, to make sure the QOS was set by an
      administrator:
      
      if (job_ptr->qos_id) {
              slurmdb_association_rec_t *assoc_ptr;
              assoc_ptr = (slurmdb_association_rec_t *)job_ptr->assoc_ptr;
              /* Skip the job only if its QOS is invalid for the
               * association AND the QOS was not set by an admin. */
              if (assoc_ptr &&
                  !bit_test(assoc_ptr->usage->valid_qos,
                            job_ptr->qos_id) &&
                  !job_ptr->limit_set_qos) {
                      info("sched: JobId=%u has invalid QOS",
                           job_ptr->job_id);
                      xfree(job_ptr->state_desc);
                      job_ptr->state_reason = FAIL_QOS;
                      continue;
              } else if (job_ptr->state_reason == FAIL_QOS) {
                      /* QOS became valid again; clear the hold reason. */
                      xfree(job_ptr->state_desc);
                      job_ptr->state_reason = WAIT_NO_REASON;
              }
      }
      
      Phil
  20. 16 Jan, 2013 2 commits