1. 13 Sep, 2013 4 commits
  2. 12 Sep, 2013 1 commit
    • Add qsub support for some more options: · 454ee59b
      Morris Jette authored
      -l accelerator=true|false	(GPU use)
      -l mpiprocs=#	(processors per node)
      -l naccelerators=#	(GPU count)
      -l select=#		(node count)
      -l ncpus=#		(task count)
      -v key=value	(environment variable)
      -W umask=#		(set job's umask)
      Note: the -v option does NOT support quoted commas.
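      A minimal sketch of how such a qsub-to-sbatch option translation might
      look. The sbatch flags used below (--gres, --ntasks-per-node, --nodes,
      --ntasks) are real Slurm options, but the exact mapping, the table, and
      all function names here are assumptions for illustration, not the
      wrapper's actual code:

        /* Hypothetical translation table for "-l name=value" pairs. */
        #include <stdio.h>
        #include <string.h>

        struct opt_map {
            const char *qsub_res;   /* resource name in "-l name=value" */
            const char *sbatch_fmt; /* printf format taking the value */
        };

        static const struct opt_map map[] = {
            { "naccelerators", "--gres=gpu:%s" },        /* GPU count */
            { "mpiprocs",      "--ntasks-per-node=%s" }, /* procs per node */
            { "select",        "--nodes=%s" },           /* node count */
            { "ncpus",         "--ntasks=%s" },          /* task count */
            { NULL, NULL }
        };

        /* Print the sbatch flag for one "-l name=value" pair, if known. */
        static void translate(const char *name, const char *value)
        {
            for (int i = 0; map[i].qsub_res; i++) {
                if (!strcmp(name, map[i].qsub_res)) {
                    printf(map[i].sbatch_fmt, value);
                    putchar('\n');
                    return;
                }
            }
            fprintf(stderr, "unsupported resource: %s\n", name);
        }

        int main(void)
        {
            translate("select", "4"); /* prints --nodes=4 */
            translate("ncpus", "16"); /* prints --ntasks=16 */
            return 0;
        }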
  3. 11 Sep, 2013 2 commits
  4. 10 Sep, 2013 3 commits
  5. 09 Sep, 2013 2 commits
  6. 06 Sep, 2013 1 commit
  7. 04 Sep, 2013 1 commit
    • Improve GRES support for CPU topology · 6f50943c
      Morris Jette authored
      The previous logic would pick CPUs and then reject jobs whose GRES
      could not be matched to the allocated CPUs. The new logic first
      filters out CPUs that cannot use the GRES, then picks CPUs for the
      job, and finally picks the GRES that best match those CPUs.
      bug 410
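      An illustrative sketch of the filter-then-pick ordering the commit
      describes, not the actual select/cons_res code; the bitmask
      representation and all names here are invented for the example:

        /* CPUs are bits in a mask; gres_reachable marks CPUs that can
         * actually reach the requested GRES (e.g. a GPU on the same
         * socket). */
        #include <stdint.h>
        #include <stdio.h>

        /* Pick the first ncpus CPUs from a candidate mask. */
        static uint64_t pick_cpus(uint64_t candidates, int ncpus)
        {
            uint64_t picked = 0;
            for (int i = 0; i < 64 && ncpus > 0; i++) {
                if (candidates & (1ULL << i)) {
                    picked |= 1ULL << i;
                    ncpus--;
                }
            }
            return picked;
        }

        int main(void)
        {
            uint64_t avail = 0xff;          /* CPUs 0-7 are idle */
            uint64_t gres_reachable = 0xf0; /* only CPUs 4-7 reach the GPU */

            /* Old order: pick from avail, then possibly reject the job
             * when the picked CPUs cannot reach the GRES.
             * New order: filter first, so the pick cannot land on CPUs
             * that would later fail the GRES match. */
            uint64_t picked = pick_cpus(avail & gres_reachable, 2);
            printf("picked mask: 0x%llx\n", (unsigned long long)picked);
            return 0;
        }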
  8. 30 Aug, 2013 1 commit
  9. 29 Aug, 2013 3 commits
  10. 28 Aug, 2013 2 commits
  11. 27 Aug, 2013 1 commit
    • Reservation with CoreCnt: Avoid possible invalid memory reference · e0541f93
      Morris Jette authored
      If a reservation create request included a CoreCnt value and
      required more nodes than were configured, the logic in
      select/cons_res could read off the end of the core_cnt array.
      This patch adds a check for a zero value in the core_cnt array,
      which terminates the user-specified array.
      Back-port from master of commit 211c224b
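      A minimal sketch of the kind of guard described, assuming core_cnt
      is the zero-terminated, user-supplied array from the reservation
      request; the function and variable names are invented for the
      example:

        /* Walk a zero-terminated core-count array without running past
         * the user-specified entries, even when more nodes are wanted
         * than entries were supplied. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t total_cores(const uint32_t *core_cnt, int nodes_needed)
        {
            uint32_t total = 0;
            for (int i = 0; i < nodes_needed; i++) {
                if (core_cnt[i] == 0)  /* zero terminates the array */
                    break;             /* stop instead of reading past it */
                total += core_cnt[i];
            }
            return total;
        }

        int main(void)
        {
            uint32_t core_cnt[] = { 4, 4, 0 }; /* user supplied 2 entries */
            /* Asking for 5 nodes no longer walks off the array's end. */
            printf("%u cores\n", total_cores(core_cnt, 5));
            return 0;
        }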
  12. 24 Aug, 2013 1 commit
  13. 23 Aug, 2013 1 commit
    • Correct value of min_nodes returned by loading job info · 98e24b0d
      Morris Jette authored
      This corrects a bug introduced in commit
      https://github.com/SchedMD/slurm/commit/ac44db862c8d1f460e55ad09017d058942ff6499
      That commit eliminated the need for squeue to read node state
      information, for performance reasons (mostly on large parallel
      systems in which the Prolog ran squeue, generating many
      simultaneous RPCs and slowing job launch). It also assumed one
      CPU per node, so if a pending job specified a node count of 1 and
      a task count larger than one, squeue reported the job's node
      count as equal to its task count. This patch moves that same
      calculation of a pending job's minimum node count into slurmctld,
      so squeue still does not need to read node information but can
      report the correct node count for pending jobs with minimal
      overhead.
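      A sketch of the kind of minimum-node-count calculation described,
      done on the controller side so squeue need not load node records.
      The function name, the cpus_per_node parameter, and the ceiling
      formula are assumptions for illustration, not slurmctld's actual
      code:

        /* Derive a pending job's minimum node count from its task
         * count; cpus_per_node stands in for whatever CPU count the
         * controller knows for the candidate nodes. */
        #include <stdio.h>

        static unsigned min_nodes(unsigned req_min_nodes, unsigned ntasks,
                                  unsigned cpus_per_node)
        {
            /* ceil(ntasks / cpus_per_node), never below the request */
            unsigned need = (ntasks + cpus_per_node - 1) / cpus_per_node;
            return need > req_min_nodes ? need : req_min_nodes;
        }

        int main(void)
        {
            /* 1 node requested, 16 tasks, 8 CPUs per node: 2 nodes,
             * not the 16 that the old 1-CPU-per-node assumption gave. */
            printf("%u\n", min_nodes(1, 16, 8));
            return 0;
        }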
  14. 22 Aug, 2013 2 commits
  15. 21 Aug, 2013 1 commit
    • Fix of wrong node/job state problem after reconfig · d80c8667
      Hongjia Cao authored
      If there are completing jobs, a reconfigure will set the wrong
      job/node state: all nodes of a completing job will be marked
      allocated, and the job will not be removed even after the
      completing nodes are released. The state can only be restored by
      restarting slurmctld after the completing nodes are released.
  16. 20 Aug, 2013 1 commit
  17. 17 Aug, 2013 1 commit
  18. 16 Aug, 2013 1 commit
  19. 15 Aug, 2013 4 commits
  20. 14 Aug, 2013 4 commits
  21. 13 Aug, 2013 3 commits