  1. 19 Feb, 2015 1 commit
  2. 18 Feb, 2015 2 commits
  3. 17 Feb, 2015 2 commits
  4. 13 Feb, 2015 1 commit
  5. 12 Feb, 2015 3 commits
  6. 11 Feb, 2015 1 commit
  7. 10 Feb, 2015 2 commits
  8. 09 Feb, 2015 3 commits
  9. 05 Feb, 2015 1 commit
  10. 04 Feb, 2015 3 commits
    • Report correct job "shared" field value · 3de14946
      Morris Jette authored
      Previously it was not possible to distinguish between a job needing
      exclusive nodes and the default job/partition configuration.
    • job array slurmctld abort fix · 0ff342b5
      Morris Jette authored
      Fix job array logic that could cause slurmctld to abort (bug 1426).
    • Fix for CUDA v7.0+ · da2fba48
      Morris Jette authored
      Enable use of CUDA v7.0+ with a Slurm configuration of TaskPlugin=task/cgroup
      (in slurm.conf) and ConstrainDevices=yes (in cgroup.conf). With that
      configuration, CUDA_VISIBLE_DEVICES starts at 0 rather than at the global
      device number (bug 1421). A configuration sketch follows the commits for
      this date.
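      A minimal configuration sketch for the scenario above; these two excerpts
      are illustrative and are not taken from the commit itself:

        # slurm.conf (excerpt): use the cgroup task plugin
        TaskPlugin=task/cgroup

        # cgroup.conf (excerpt): restrict each job to its allocated devices (e.g. GPUs)
        ConstrainDevices=yes

      With devices constrained this way, a job allocated a single GPU sees
      CUDA_VISIBLE_DEVICES=0 inside its cgroup regardless of which physical device
      it received, which matches how the commit describes CUDA_VISIBLE_DEVICES
      being set under that configuration.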
  11. 03 Feb, 2015 6 commits
  12. 02 Feb, 2015 3 commits
  13. 31 Jan, 2015 2 commits
  14. 30 Jan, 2015 2 commits
  15. 28 Jan, 2015 3 commits
  16. 27 Jan, 2015 1 commit
  17. 26 Jan, 2015 1 commit
  18. 23 Jan, 2015 1 commit
  19. 22 Jan, 2015 1 commit
  20. 21 Jan, 2015 1 commit
    • fix job array scheduling anomaly · 3787c01f
      Morris Jette authored
      If some tasks of a job array are runnable but the array's meta-job record is
      not runnable (e.g. it is held), the old logic could start a runnable task,
      then try to start the non-runnable meta-job, discover that it cannot run, and
      set its reason to "BadConstraints".
      
      Test case (a command-level sketch follows this entry):
      1. Make it so no jobs can start (partition down, slurmd down, etc.).
      2. Submit a job array.
      3. Hold the job array.
      4. Release the first two tasks of the job array.
      5. Make it so jobs can start again.
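      A minimal command-level sketch of that test case. The partition name, script
      name, array size, and job ID are assumptions for illustration, and the
      jobid_taskid form for addressing individual array tasks is assumed to be
      accepted by scontrol:

        # Make it so no jobs can start (assumes a partition named "debug")
        scontrol update PartitionName=debug State=DOWN
        # Submit a job array (job.sh and the 0-9 range are arbitrary);
        # suppose it is assigned job ID 1000
        sbatch --array=0-9 job.sh
        # Hold the entire array, then release only its first two tasks
        scontrol hold 1000
        scontrol release 1000_0
        scontrol release 1000_1
        # Make it so jobs can start again, then check that the meta-job record
        # does not pick up a spurious "BadConstraints" reason
        scontrol update PartitionName=debug State=UP
        scontrol show job 1000 | grep -i reason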