1. 06 Jul, 2015 1 commit
    • Add backfill scheduler timeout · 7e944220
      Morris Jette authored
      Backfill scheduler: The configured backfill_interval value (default 30
          seconds) is now interpreted as a maximum run time for the backfill
          scheduler. Once reached, the scheduler will build a new job queue and
          start over, even if not all jobs have been tested.
      bug 1774
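      As a point of reference, the backfill interval is set on the SchedulerParameters
      line of slurm.conf; the bf_interval/bf_max_job_test names, file path, and values
      below are illustrative assumptions, not part of this commit:
      $ grep SchedulerParameters /etc/slurm/slurm.conf
      SchedulerParameters=bf_interval=30,bf_max_job_test=500
      $ scontrol reconfigure
      After this change, that interval caps how long a single backfill pass may run
      before the scheduler rebuilds the job queue and starts over.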
  2. 30 Jun, 2015 2 commits
  3. 29 Jun, 2015 1 commit
  4. 25 Jun, 2015 1 commit
  5. 24 Jun, 2015 1 commit
  6. 23 Jun, 2015 1 commit
  7. 22 Jun, 2015 3 commits
    • Advanced reservation fixes · a6454176
      Morris Jette authored
      Updates of existing bluegene advanced reservations did not work at all.
      Some multi-core configurations resulted in an abort due to creating
        core_bitmaps for the reservation that had only one bit per node rather
        than one bit per core.
      These bugs were introduced in commit 5f258072 (see the example session
        after this date's commits).
    • Update NEWS · c8545598
      David Bigagli authored
    • Update NEWS · 38007f9b
      David Bigagli authored
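      The kind of operation covered by the a6454176 reservation fix above is an
      update of an existing advanced reservation via scontrol. A hypothetical
      session; the reservation name, node names, and times are made up for
      illustration:
      $ scontrol create reservation ReservationName=maint Users=root \
            StartTime=now Duration=60 Nodes=bgq[0000x0013]
      Reservation created: maint
      $ scontrol update ReservationName=maint Duration=120
      Before the fix, such an update did not work on bluegene systems, and on some
      multi-core configurations it could trigger the abort described above because
      the reservation's core_bitmap had one bit per node instead of one per core.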
  8. 19 Jun, 2015 1 commit
  9. 15 Jun, 2015 1 commit
  10. 12 Jun, 2015 2 commits
  11. 11 Jun, 2015 1 commit
  12. 10 Jun, 2015 1 commit
  13. 09 Jun, 2015 2 commits
    • Search for user in all groups · 93ead71a
      David Bigagli authored
    • Fix scheduling inconsistency with GRES · e1a00772
      Morris Jette authored
      1. I submit a first job that uses 1 GPU:
      $ srun --gres gpu:1 --pty bash
      $ echo $CUDA_VISIBLE_DEVICES
      0
      
      2. While the first one is still running, a 2-GPU job asking for 1 task per node
      waits (and I don't really understand why):
      $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
      srun: job 2390816 queued and waiting for resources
      
      3. Whereas a 2-GPU job requesting 1 core per socket (so just 1 socket) actually
      gets GPUs allocated from two different sockets!
      $ srun -n 1 --cores-per-socket=1 --gres=gpu:2 -p testk --pty bash
      $ echo $CUDA_VISIBLE_DEVICES
      1,2
      
      With this change #2 works the same way as #3.
      bug 1725
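      A hypothetical session showing the behavior after this change, based on the
      statement above that case #2 now works the same way as case #3 (the GPU
      indices are illustrative, not taken from the commit):
      $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
      $ echo $CUDA_VISIBLE_DEVICES
      1,2
      The second 2-GPU request no longer waits behind the running 1-GPU job; like
      case #3, it is allocated two free GPUs even if they sit on different sockets.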
  14. 05 Jun, 2015 1 commit
  15. 04 Jun, 2015 2 commits
  16. 03 Jun, 2015 1 commit
    • switch/cray: Refine PMI_CRAY_NO_SMP_ENV set · ef66b2eb
      Morris Jette authored
      switch/cray: Refine logic to set PMI_CRAY_NO_SMP_ENV environment variable.
      Rather than testing for the task distribution option, test the actual
      task IDs to see if they are monotonically increasing across all nodes.
      Based upon an idea from Brian Gilmer (Cray).
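      To illustrate the two layouts being distinguished (node names and task counts
      are illustrative; which layout causes the variable to be set is an inference
      from the commit message, not stated in it):
      $ srun -N2 -n4 --distribution=block bash -c 'echo "$SLURM_PROCID $(hostname)"' | sort -n
      0 nid00001
      1 nid00001
      2 nid00002
      3 nid00002
      $ srun -N2 -n4 --distribution=cyclic bash -c 'echo "$SLURM_PROCID $(hostname)"' | sort -n
      0 nid00001
      1 nid00002
      2 nid00001
      3 nid00002
      In the block case the task IDs increase monotonically across the nodes; in the
      cyclic case they do not, which is presumably the situation in which
      PMI_CRAY_NO_SMP_ENV gets set.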
  17. 02 Jun, 2015 3 commits
  18. 01 Jun, 2015 1 commit
  19. 30 May, 2015 1 commit
  20. 29 May, 2015 5 commits
  21. 28 May, 2015 1 commit
  22. 27 May, 2015 1 commit
    • Map job --mem-per-cpu=0 to --mem=0. · 33c77302
      Morris Jette authored
      However, --mem=0 now reflects the appropriate amount of memory in the
      system; the meaning of --mem-per-cpu=0 itself hasn't changed. This allows all
      of the memory to be allocated in a cgroup without being "consumed", so it
      remains available to other jobs running on the same host.
      Eric Martin, Washington University School of Medicine
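      A minimal usage sketch of the two now-equivalent requests (the trailing
      comments are interpretation, not part of the commit):
      $ srun --mem=0 --pty bash            # request all of the memory on the allocated node
      $ srun --mem-per-cpu=0 --pty bash    # now mapped to --mem=0, i.e. the same request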
  23. 26 May, 2015 1 commit
  24. 22 May, 2015 1 commit
  25. 21 May, 2015 1 commit
  26. 20 May, 2015 2 commits
  27. 19 May, 2015 1 commit