1. 18 Apr, 2014 1 commit
    • switch/nrt - free partial allocation · a197a1da
      Morris Jette authored
      On switch resource allocation failure, free the partial allocation.
      The failure mode was that CAU could be allocated on some nodes but
      not on others; the CAU allocated on nodes and switches up to the
      failure point was never released.
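      The fix follows the usual cleanup-on-partial-failure pattern. Below is
      a minimal C sketch of that pattern; alloc_cau_on_node() and
      free_cau_on_node() are hypothetical stand-ins, not the actual
      switch/nrt plugin API.

          #include <stdio.h>

          /* Hypothetical per-node allocate/release helpers (sketch only). */
          static int  alloc_cau_on_node(int node) { return (node == 2) ? -1 : 0; }
          static void free_cau_on_node(int node)  { printf("freed CAU on node %d\n", node); }

          /* On any per-node failure, release everything allocated so far
           * instead of leaking the partial allocation. */
          static int alloc_cau_for_job(const int *nodes, int node_cnt)
          {
              for (int i = 0; i < node_cnt; i++) {
                  if (alloc_cau_on_node(nodes[i]) != 0) {
                      while (--i >= 0)
                          free_cau_on_node(nodes[i]);
                      return -1;
                  }
              }
              return 0;
          }

          int main(void)
          {
              int nodes[] = { 0, 1, 2, 3 };

              if (alloc_cau_for_job(nodes, 4) != 0)
                  fprintf(stderr, "allocation failed, partial allocation freed\n");
              return 0;
          }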
  2. 08 Apr, 2014 4 commits
  3. 07 Apr, 2014 3 commits
  4. 05 Apr, 2014 1 commit
  5. 04 Apr, 2014 3 commits
  6. 03 Apr, 2014 2 commits
  7. 02 Apr, 2014 1 commit
    • launch/poe - fix network value · ad7100b8
      Morris Jette authored
      If a job step's network value is set by poe, either by executing poe
      directly or by srun launching poe, that value was not being propagated
      to the job step creation RPC, so the network was not set up for the
      proper protocol (e.g. mpi, lapi, pami, etc.). The previous logic only
      worked if the srun execute line explicitly set the protocol using the
      --network option.
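      In outline, the fix means the step-create request carries whatever
      network string is in effect, not only one given explicitly with
      --network. Below is a minimal C sketch of that idea; the struct, the
      set_step_network() helper and the SLURM_NETWORK environment fallback
      are assumptions for illustration, not the real slurm code.

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Hypothetical stand-in for the step-create RPC message. */
          struct step_create_req {
              char *network;          /* e.g. "us,mpi" or "ip,lapi" */
          };

          /* Copy an explicit --network value if given, otherwise fall back
           * to the value already in effect (assumed here to be exported in
           * the environment by poe), so the protocol still gets set up. */
          static void set_step_network(struct step_create_req *req,
                                       const char *cli_network)
          {
              const char *net = cli_network;

              if (!net || !net[0])
                  net = getenv("SLURM_NETWORK");
              if (net && net[0])
                  req->network = strdup(net);
          }

          int main(void)
          {
              struct step_create_req req = { 0 };

              set_step_network(&req, NULL);   /* no --network on the command line */
              printf("network = %s\n", req.network ? req.network : "(unset)");
              return 0;
          }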
  8. 31 Mar, 2014 1 commit
  9. 26 Mar, 2014 1 commit
  10. 25 Mar, 2014 1 commit
  11. 24 Mar, 2014 1 commit
    • job array dependency recovery fix · fca71890
      Morris Jette authored
      When slurmctld restarted, it would not recover dependencies on
      job array elements and would simply discard the dependency. This
      corrects the parsing problem so the dependency is recovered. The old
      code would print a message like the following and discard it:
      slurmctld: error: Invalid dependencies discarded for job 51: afterany:47_*
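      The parsing problem is that a recovered dependency can name an entire
      job array ("47_*") rather than a plain numeric job id, and the old
      parser rejected the "_*" suffix. Below is a small illustrative C
      parser for such a token; the struct and function names are made up
      and this is not the slurmctld implementation.

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          struct dep {
              char type[16];          /* e.g. "afterany" */
              long job_id;            /* e.g. 47 */
              int  whole_array;       /* 1 if the "_*" suffix was present */
          };

          /* Parse a token such as "afterany:47_*" instead of discarding it. */
          static int parse_dep(const char *tok, struct dep *d)
          {
              const char *colon = strchr(tok, ':');
              char *end;

              if (!colon || (size_t)(colon - tok) >= sizeof(d->type))
                  return -1;
              memcpy(d->type, tok, colon - tok);
              d->type[colon - tok] = '\0';

              d->job_id = strtol(colon + 1, &end, 10);
              if (end == colon + 1)
                  return -1;

              /* Accept an optional job-array suffix instead of rejecting it. */
              d->whole_array = (strcmp(end, "_*") == 0);
              if (*end && !d->whole_array)
                  return -1;
              return 0;
          }

          int main(void)
          {
              struct dep d;

              if (parse_dep("afterany:47_*", &d) == 0)
                  printf("type=%s job=%ld whole_array=%d\n",
                         d.type, d.job_id, d.whole_array);
              return 0;
          }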
  12. 21 Mar, 2014 1 commit
    • NRT - Fix issue with 1 node jobs · 440932df
      Danny Auble authored
      It turns out the network does need to be set up for 1 node jobs.
      Here are some of the reasons from IBM:
      
      1. PE expects it.
      2. For failover, if there was some challenge or difficulty with the
         shared-memory method of data transfer, the protocol stack might
         want to go through the adapter instead.
      3. For flexibility, the protocol stack might want to be able to transfer
         data using some variable combination of shared memory and adapter-based
         communication, and
       4. Possibly most important, for overall performance, bandwidth or
          efficiency (BW per CPU cycle) might be better using the adapter
          resources.  (An obvious case is large messages, where it might
          take far fewer CPU cycles to program the DMA engines on the
          adapter to move data between tasks than to depend on the CPU to
          move the data with loads and stores, or page re-mapping -- and a
          DMA engine might actually move the data more quickly if it is
          well integrated with the memory system, as it is in the P775 case.)
  13. 20 Mar, 2014 2 commits
  14. 19 Mar, 2014 2 commits
  15. 18 Mar, 2014 3 commits
  16. 17 Mar, 2014 1 commit
  17. 15 Mar, 2014 1 commit
    • Add support for Torque/PBS job arrays · 11968284
      Morris Jette authored
      Add support for job array options in the qsub command and in #PBS
      directives within sbatch scripts, and set the appropriate environment
      variables in the spank_pbs plugin (PBS_ARRAY_ID and PBS_ARRAY_INDEX).
      Note that Torque uses the "-t" option while PBS Pro uses the "-J"
      option.
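      A minimal sketch of how a SPANK plugin can export PBS-style variables
      for each task, using the spank_getenv()/spank_setenv() calls from
      <slurm/spank.h>. This is only an illustration, not the actual
      spank_pbs source; in particular, deriving the values from
      SLURM_ARRAY_TASK_ID and SLURM_ARRAY_JOB_ID is an assumption.

          #include <slurm/spank.h>

          SPANK_PLUGIN(pbs_array_env_example, 1);

          /* Runs in the task context before the task starts and copies the
           * job-array identifiers into the variables PBS scripts expect. */
          int slurm_spank_task_init(spank_t sp, int ac, char **av)
          {
              char idx[64], job[64];

              if (spank_getenv(sp, "SLURM_ARRAY_TASK_ID", idx, sizeof(idx))
                  != ESPANK_SUCCESS)
                  return ESPANK_SUCCESS;          /* not a job array task */
              spank_setenv(sp, "PBS_ARRAY_INDEX", idx, 1);

              if (spank_getenv(sp, "SLURM_ARRAY_JOB_ID", job, sizeof(job))
                  == ESPANK_SUCCESS)
                  spank_setenv(sp, "PBS_ARRAY_ID", job, 1);

              return ESPANK_SUCCESS;
          }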
  18. 14 Mar, 2014 2 commits
  19. 11 Mar, 2014 2 commits
  20. 08 Mar, 2014 1 commit
  21. 07 Mar, 2014 5 commits
  22. 06 Mar, 2014 1 commit