1. 01 Jun, 2018 1 commit
  2. 31 May, 2018 3 commits
  3. 30 May, 2018 4 commits
  4. 29 May, 2018 3 commits
  5. 28 May, 2018 1 commit
  6. 18 May, 2018 2 commits
    • flesh out some tres_freq work · 192d0b49
      Morris Jette authored
      create src/common/tres_frequency.[ch] module based upon cpu_frequency.[ch]
      modify launch RPCs to pass the value from slurmctld to slurmstepd
      validate --gpu-freq value from salloc, sbatch, and srun
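      A minimal, self-contained C sketch of the kind of --gpu-freq validation this commit describes. It is not the code in src/common/tres_frequency.[ch]; the accepted keywords (low, medium, high, highm1) and the plain-integer form are assumptions for illustration only.

      /*
       * Hypothetical validation helper; the real logic lives in
       * src/common/tres_frequency.[ch] and in the salloc/sbatch/srun
       * option handling.
       */
      #include <ctype.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <strings.h>

      /* Return true if tok looks like an acceptable GPU frequency value. */
      static bool gpu_freq_valid(const char *tok)
      {
          static const char *keywords[] = { "low", "medium", "high", "highm1" };
          size_t i;

          if (!tok || !tok[0])
              return false;

          for (i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++) {
              if (!strcasecmp(tok, keywords[i]))
                  return true;
          }

          /* Otherwise require a plain non-negative integer (MHz). */
          for (i = 0; tok[i]; i++) {
              if (!isdigit((unsigned char) tok[i]))
                  return false;
          }
          return true;
      }

      int main(void)
      {
          const char *samples[] = { "low", "highm1", "1530", "turbo" };
          for (size_t i = 0; i < 4; i++)
              printf("%-8s -> %s\n", samples[i],
                     gpu_freq_valid(samples[i]) ? "valid" : "invalid");
          return 0;
      }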
    • Add v18.08 versions of un/pack functions · 9977bf88
      Morris Jette authored
      Add v18.08 versions of un/pack functions for REQUEST_LAUNCH_TASKS
      and REQUEST_BATCH_JOB_LAUNCH RPCs
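      The un/pack versioning this commit refers to can be illustrated with a self-contained C sketch: one pack routine that switches on the peer's protocol version so a newer field is serialized only for 18.08-level peers. The buffer type, field names, and version constants below are invented stand-ins, not Slurm's actual pack.[ch] API.

      /*
       * Toy illustration of per-protocol-version packing.  Everything here
       * (buf_t, pack32, the version constants, the message fields) is made
       * up for the sketch.
       */
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define PROTO_17_11 0x2100   /* placeholder version numbers */
      #define PROTO_18_08 0x2200

      typedef struct {             /* toy fixed-size output buffer */
          unsigned char data[256];
          size_t offset;
      } buf_t;

      typedef struct {             /* toy launch request message */
          uint32_t job_id;
          uint32_t step_id;
          uint32_t tres_freq_len;  /* hypothetical field new in 18.08 */
      } launch_msg_t;

      static void pack32(uint32_t val, buf_t *buf)
      {
          memcpy(buf->data + buf->offset, &val, sizeof(val));
          buf->offset += sizeof(val);
      }

      /* Select the wire layout based on the peer's protocol version. */
      static void pack_launch_msg(const launch_msg_t *msg, buf_t *buf,
                                  uint16_t protocol_version)
      {
          if (protocol_version >= PROTO_18_08) {
              pack32(msg->job_id, buf);
              pack32(msg->step_id, buf);
              pack32(msg->tres_freq_len, buf);  /* new field */
          } else {
              pack32(msg->job_id, buf);
              pack32(msg->step_id, buf);
              /* older peers do not know about the new field */
          }
      }

      int main(void)
      {
          launch_msg_t msg = { .job_id = 42, .step_id = 0, .tres_freq_len = 3 };
          buf_t old_buf = { .offset = 0 }, new_buf = { .offset = 0 };

          pack_launch_msg(&msg, &old_buf, PROTO_17_11);
          pack_launch_msg(&msg, &new_buf, PROTO_18_08);
          printf("17.11 payload: %zu bytes, 18.08 payload: %zu bytes\n",
                 old_buf.offset, new_buf.offset);
          return 0;
      }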
  7. 17 May, 2018 3 commits
  8. 16 May, 2018 9 commits
  9. 14 May, 2018 2 commits
  10. 11 May, 2018 7 commits
  11. 10 May, 2018 5 commits
    • Remove AIX pieces from testsuite. · 5cfbd15d
      Tim Wickberg authored
      Support for AIX was removed before 17.02.
    • Merge branch 'slurm-17.11' · fa40dbd6
      Morris Jette authored
    • dc7ca7be
      Morris Jette authored
    • Merge branch 'slurm-17.11' · 1ab63842
      Alejandro Sanchez authored
    • Fix different issues when requesting memory per cpu/node. · bf4cb0b1
      Alejandro Sanchez authored

      The first issue was identified on multi-partition requests: job_limits_check()
      was overriding the original memory requests, so when Slurm validated limits
      against the next partition it was no longer using the original values. The
      fix adds three members to the job_details struct to preserve the original
      requests. This issue is reported in bug 4895.
      
      The second issue was that memory enforcement behaved differently depending
      on whether the job request was issued against a reservation or not.
      
      The third issue concerned the automatic adjustments Slurm made under the
      hood when the memory request exceeded the limit. These adjustments included
      increasing pn_min_cpus (even, incorrectly, beyond the number of CPUs
      available on the nodes) or other tricks such as increasing cpus_per_task
      while decreasing mem_per_cpu.
      
      The fourth issue was identified with the special case of requesting 0 memory,
      which was handled inside the select plugin after the partition validations
      and thus could be used to incorrectly bypass the limits.
      
      Issues 2-4 were identified in bug 4976.
      
      The patch also includes a full refactor of how and when job memory is set
      to default values (if not requested initially) and how and when limits are
      validated.
      
      Co-authored-by: Dominik Bartkiewicz <bart@schedmd.com>
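      The "preserve the original request" idea from the first issue above can be pictured with a short, self-contained C sketch. The struct members and the limit check below are hypothetical stand-ins for the new job_details members and for job_limits_check() in slurmctld, not the actual code.

      /*
       * Sketch: keep the user's original per-node memory request alongside
       * the working value, and reset the working value before validating
       * each partition so one partition's adjustments never leak into the
       * next partition's check.
       */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint64_t max_mem_per_node;   /* partition limit (MB) */
          const char *name;
      } part_info_t;

      typedef struct {
          uint64_t pn_min_memory;      /* working value a check may adjust */
          uint64_t orig_pn_min_memory; /* preserved original request */
      } job_details_t;

      /* Validate the request against one partition's limit, starting from
       * the preserved original rather than the previous partition's result. */
      static bool job_limits_check(job_details_t *detail, const part_info_t *part)
      {
          detail->pn_min_memory = detail->orig_pn_min_memory;

          if (detail->pn_min_memory > part->max_mem_per_node) {
              /* an enforcement path might clamp the working value here */
              detail->pn_min_memory = part->max_mem_per_node;
              return false;
          }
          return true;
      }

      int main(void)
      {
          part_info_t parts[] = { { 2000, "small" }, { 8000, "big" } };
          job_details_t detail = { .pn_min_memory = 4000,
                                   .orig_pn_min_memory = 4000 };

          for (size_t i = 0; i < 2; i++)
              printf("partition %s: %s\n", parts[i].name,
                     job_limits_check(&detail, &parts[i]) ? "ok" : "exceeds limit");
          return 0;
      }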