  07 May, 2012 4 commits
    • Merge from v2.3 with slight logic change · ec996c21
      Morris Jette authored
      Job priority of 1 is no longer used as a special case in SLURM v2.4
    • Merge branch 'slurm-2.3' · b2c0cff8
      Morris Jette authored
    • 1490e835
    • Job priority reset bug on slurmctld restart · 5e9dca41
      Don Lipari authored
      Commit 8b14f388 from Jan 19, 2011 is causing problems on Moab cluster-scheduled machines. In that configuration, Moab immediately hands every submitted job off to SLURM, and each job arrives with a priority of zero. Once Moab schedules a job, it raises the job's priority to 10,000,000 and the job runs.
      
      When you restart the slurmctld under these conditions, the sync_job_priorities() function runs; it attempts to raise job priorities into a higher range when they get too close to zero. The problem, as I see it, is that the "boost" is also applied to zero-priority jobs. So once the slurmctld is restarted, a batch of zero-priority jobs suddenly becomes eligible, and there is a disconnect between the top-priority job Moab is trying to start and the top-priority job SLURM sees.
      
      I believe the fix is simple:
      
      diff job_mgr.c~ job_mgr.c
      6328,6329c6328,6331
      <       while ((job_ptr = (struct job_record *) list_next(job_iterator)))
      <               job_ptr->priority += prio_boost;
      ---
      >       while ((job_ptr = (struct job_record *) list_next(job_iterator))) {
      >               if (job_ptr->priority)
      >                       job_ptr->priority += prio_boost;
      >       }
      Do you agree?
      
      Don
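      To make the effect of the patch concrete, here is a minimal, self-contained sketch of the corrected loop; it is not the actual slurmctld code. struct job_record, the priority field, and prio_boost follow the diff above, while boost_priorities, the sample jobs, and the boost value are hypothetical stand-ins.

      #include <stdio.h>
      #include <inttypes.h>

      /* Simplified stand-in for slurmctld's job record; only the
       * fields needed for this illustration. */
      struct job_record {
              uint32_t job_id;
              uint32_t priority;      /* 0 means the job is held */
      };

      /* Boost priorities the way the patched loop does: zero-priority
       * (held) jobs are skipped, so a restart cannot make them
       * eligible. */
      static void boost_priorities(struct job_record *jobs, size_t njobs,
                                   uint32_t prio_boost)
      {
              for (size_t i = 0; i < njobs; i++) {
                      if (jobs[i].priority)
                              jobs[i].priority += prio_boost;
              }
      }

      int main(void)
      {
              struct job_record jobs[] = {
                      { 101, 0 },             /* handed to SLURM, still held for Moab */
                      { 102, 10000000 },      /* already scheduled by Moab */
                      { 103, 50 },            /* ordinary eligible job */
              };
              size_t njobs = sizeof(jobs) / sizeof(jobs[0]);

              boost_priorities(jobs, njobs, 1000);

              for (size_t i = 0; i < njobs; i++)
                      printf("job %" PRIu32 ": priority %" PRIu32 "\n",
                             jobs[i].job_id, jobs[i].priority);
              return 0;
      }

      Run as written, job 101 stays at priority 0 and remains held, while jobs 102 and 103 are boosted; that is the separation between held and eligible jobs the patch preserves across a slurmctld restart.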