1. 09 Sep, 2013 2 commits
  2. 06 Sep, 2013 1 commit
  3. 04 Sep, 2013 1 commit
      Improve GRES support for CPU topology · 6f50943c
      Morris Jette authored
Previous logic would pick CPUs first and then
reject jobs whose GRES could not be matched to the allocated CPUs. The new
logic first filters out CPUs that cannot use the GRES, then picks CPUs for
the job, and finally picks the GRES that best match those CPUs.
bug 410
  4. 30 Aug, 2013 1 commit
  5. 29 Aug, 2013 3 commits
  6. 28 Aug, 2013 2 commits
  7. 27 Aug, 2013 1 commit
      Reservation with CoreCnt: Avoid possible invalid memory reference · e0541f93
      Morris Jette authored
If a reservation create request included a CoreCnt value and more
nodes are required than configured, the logic in select/cons_res
could run off the end of the core_cnt array. This patch adds a
check for a zero value in the core_cnt array, which terminates
the user-specified array.
Back-port from master of commit 211c224b
  8. 24 Aug, 2013 1 commit
  9. 23 Aug, 2013 1 commit
      Correct value of min_nodes returned by loading job info · 98e24b0d
      Morris Jette authored
This corrects a bug introduced in commit
https://github.com/SchedMD/slurm/commit/ac44db862c8d1f460e55ad09017d058942ff6499
That commit eliminated the need for squeue to read node state
information, for performance reasons (mostly on large parallel systems
where the Prolog ran squeue, generating many simultaneous RPCs and
slowing down job launch). It also assumed 1 CPU per node, so if a
pending job specified a node count of 1 and a task count larger than
one, squeue reported the job's node count as equal to the task count.
This patch moves that same calculation of a pending job's minimum node
count into slurmctld, so squeue still does not need to read the node
information but can report the correct node count for pending jobs
with minimal overhead.
  10. 22 Aug, 2013 2 commits
  11. 21 Aug, 2013 1 commit
      Fix of wrong node/job state problem after reconfig · d80c8667
      Hongjia Cao authored
If there are completing jobs, a reconfigure sets the wrong job/node
state: all nodes of the completing job are marked allocated, and the
job is not removed even after the completing nodes are released. The
state can only be restored by restarting slurmctld after the
completing nodes are released.
  12. 20 Aug, 2013 1 commit
  13. 17 Aug, 2013 1 commit
  14. 16 Aug, 2013 1 commit
  15. 15 Aug, 2013 4 commits
  16. 14 Aug, 2013 4 commits
  17. 13 Aug, 2013 3 commits
  18. 09 Aug, 2013 1 commit
  19. 07 Aug, 2013 1 commit
  20. 06 Aug, 2013 1 commit
  21. 01 Aug, 2013 1 commit
  22. 31 Jul, 2013 1 commit
  23. 30 Jul, 2013 1 commit
  24. 26 Jul, 2013 2 commits
  25. 25 Jul, 2013 1 commit
  26. 23 Jul, 2013 1 commit