1. 02 Oct, 2015 2 commits
    • Don't mark powered down node as not responding · c0bb562a
      Morris Jette authored
      This can only happen if a PING RPC for the node is already queued
        when the decision is made to power it down, and the ping then
        fails to get a response (since the node is already down).
      bug 1995
    • Reset job CPU count if CPUs/task ratio increased for mem limit · 29fe3eae
      Morris Jette authored
      If a job's CPUs/task ratio is increased due to the configured
      MaxMemPerCPU, then increase its allocated CPU count in order to
      enforce CPU limits. Previous logic would increase/set cpus_per_task
      as needed if a job's --mem-per-cpu was above the configured
      MaxMemPerCPU, but did NOT increase the min_cpus or max_cpus
      variables. This resulted in allocating the wrong CPU count.
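      The adjustment described above can be sketched as follows. This is an
      illustrative Python sketch of the enforcement logic, not Slurm's
      actual C source; all function and parameter names are assumptions:

      ```python
      def enforce_max_mem_per_cpu(mem_per_cpu, max_mem_per_cpu,
                                  cpus_per_task, min_cpus, max_cpus):
          """If requested --mem-per-cpu exceeds the configured
          MaxMemPerCPU, raise the CPUs/task ratio and scale the job's
          CPU counts so the allocation enforces the correct limits."""
          if max_mem_per_cpu and mem_per_cpu > max_mem_per_cpu:
              # CPUs needed per task so memory per CPU stays within
              # the limit (ceiling division).
              factor = -(-mem_per_cpu // max_mem_per_cpu)
              cpus_per_task *= factor
              # The fix: also scale min_cpus/max_cpus, which the
              # previous logic missed, causing a wrong CPU count.
              min_cpus *= factor
              max_cpus *= factor
              mem_per_cpu = max_mem_per_cpu
          return mem_per_cpu, cpus_per_task, min_cpus, max_cpus
      ```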
  2. 01 Oct, 2015 2 commits
  3. 30 Sep, 2015 2 commits
    • Make cgroup paths consistent · c5c566ff
      Morris Jette authored
      Correct some cgroup paths ("step_batch" vs. "step_4294967294", "step_exter"
          vs. "step_extern", and "step_extern" vs. "step_4294967295").
    • Don't start duplicate batch job · c1513956
      Morris Jette authored
      Requeue/hold a batch job launch request if the job is already
        running. This is possible if a node went to the DOWN state but
        its jobs remained active.
      In addition, if a prolog/epilog fails, DRAIN the node rather than
        setting it DOWN, which could kill jobs that could otherwise
        continue to run.
      bug 1985
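      The patched launch decision described above can be sketched roughly
      as follows. This is a hypothetical Python sketch of the policy, not
      Slurm's actual API; all names are illustrative assumptions:

      ```python
      def handle_batch_launch(job_state, prolog_failed):
          """Decide how to treat an incoming batch job launch request."""
          if job_state == "RUNNING":
              # Job already active (e.g. its node bounced through DOWN
              # while the job kept running): requeue/hold rather than
              # starting a duplicate copy of the batch script.
              return "REQUEUE_HOLD"
          if prolog_failed:
              # DRAIN leaves running jobs alive; setting the node DOWN
              # would have killed jobs that could still complete.
              return "DRAIN_NODE"
          return "LAUNCH"
      ```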
  4. 29 Sep, 2015 2 commits
  5. 28 Sep, 2015 2 commits
    • Fix for node state when shrinking jobs · 16f4b6a9
      Morris Jette authored
      When nodes have been allocated to a job and then released by the
        job while resizing, this patch prevents the nodes from continuing
        to appear allocated and unavailable to other jobs. Requires
        exclusive node allocation to trigger. This prevents the previously
        reported failure, but a proper fix will be quite complex and
        delayed to the next major release of Slurm (v 16.05).
      bug 1851
    • Fix for node state when shrinking jobs · 6c9d4540
      Morris Jette authored
      When nodes have been allocated to a job and then released by the
        job while resizing, this patch prevents the nodes from continuing
        to appear allocated and unavailable to other jobs. Requires
        exclusive node allocation to trigger. This prevents the previously
        reported failure, but a proper fix will be quite complex and
        delayed to the next major release of Slurm (v 16.05).
      bug 1851
  6. 25 Sep, 2015 2 commits
  7. 24 Sep, 2015 2 commits
  8. 23 Sep, 2015 8 commits
  9. 22 Sep, 2015 4 commits
  10. 21 Sep, 2015 4 commits
  11. 17 Sep, 2015 1 commit
  12. 16 Sep, 2015 1 commit
  13. 15 Sep, 2015 1 commit
  14. 13 Sep, 2015 1 commit
  15. 11 Sep, 2015 6 commits