1. 15 May, 2018 3 commits
    • Make a test more robust · b1c2a6fb
      Morris Jette authored
      With ReturnToService=2 configured, the test could generate an error
      when changing the node state to resume after setting it to down: if
      the node communicates with slurmctld, its state is automatically
      changed from down to idle, and resuming an idle node triggers an
      error.
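
      A minimal C sketch of that interplay, with hypothetical names (not
      the actual slurmctld code): a down node that registers is returned
      to service automatically, so a subsequent resume request is
      rejected.

          #include <stdio.h>

          enum node_state { NODE_DOWN, NODE_IDLE };

          /* With ReturnToService=2, a down node that communicates with
           * the controller is returned to service automatically. */
          static void node_registers(enum node_state *state, int return_to_service)
          {
                  if (return_to_service == 2 && *state == NODE_DOWN)
                          *state = NODE_IDLE;
          }

          /* Resuming a node that is no longer down is an error, which is
           * what made the original test fragile. */
          static int resume_node(enum node_state *state)
          {
                  if (*state != NODE_DOWN)
                          return -1;
                  *state = NODE_IDLE;
                  return 0;
          }

          int main(void)
          {
                  enum node_state state = NODE_DOWN;
                  node_registers(&state, 2); /* node checks in; now idle */
                  if (resume_node(&state) < 0)
                          puts("error: node already idle, resume rejected");
                  return 0;
          }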
    • Run autogen.sh after previous commit. · ac24b431
      Alejandro Sanchez authored
      Bug 5168.
    • PMIx - override default paths at configure time if --with-pmix is used. · 635c0232
      Alejandro Sanchez authored
      Previously, the default paths were still tested even when new ones
      were requested. As a consequence, if any of the requested paths was
      the same as one of the defaults (e.g. /usr or /usr/local), the
      configure script incorrectly errored out, reporting that a version
      of PMIx had already been found in a previous path.
      
      Bug 5168.
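
      The idea behind the fix, as a minimal C sketch with hypothetical
      names (the real change lives in the configure/autoconf checks): when
      --with-pmix supplies explicit paths, they replace the defaults
      entirely, so a requested path equal to /usr or /usr/local is no
      longer flagged as already found in a previous (default) path.

          #include <stdio.h>

          int main(void)
          {
                  /* Hypothetical search lists; NULL-terminated. */
                  const char *defaults[]  = { "/usr", "/usr/local", NULL };
                  const char *requested[] = { "/usr", NULL }; /* --with-pmix=/usr */

                  /* Override, do not append: only one list is ever probed. */
                  const char **search = requested[0] ? requested : defaults;
                  for (const char **p = search; *p; p++)
                          printf("probing %s for PMIx\n", *p);
                  return 0;
          }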
  2. 11 May, 2018 2 commits
  3. 10 May, 2018 2 commits
    • dc7ca7be
      Morris Jette authored
    • Fix different issues when requesting memory per cpu/node. · bf4cb0b1
      Alejandro Sanchez authored
      The first issue was identified on multi-partition requests:
      job_limits_check() was overriding the original memory requests, so
      the next partition Slurm validated limits against was no longer
      using the original values. The solution consists of adding three
      members to the job_details struct to preserve the original requests.
      This issue is reported in bug 4895.
      
      The second issue was that memory enforcement behaved differently
      depending on whether the job request was issued against a
      reservation or not.
      
      The third issue had to do with the automatic adjustments Slurm made
      underneath when the memory request exceeded the limit. These
      adjustments included increasing pn_min_cpus (even, incorrectly,
      beyond the number of cpus available on the nodes) or other tricks
      such as increasing cpus_per_task while decreasing mem_per_cpu.
      
      The fourth issue was identified when requesting the special case of
      0 memory, which was handled inside the select plugin after the
      partition validations and thus could be used to incorrectly bypass
      the limits.
      
      Issues 2-4 were identified in bug 4976.
      
      The patch also includes an entire refactor of how and when job
      memory is set to default values (if not requested initially) and
      how and when limits are validated.
      
      Co-authored-by: Dominik Bartkiewicz <bart@schedmd.com>
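
      A minimal C sketch of the first fix, with hypothetical member names
      (the actual members added to job_details differ): the original
      memory request is preserved once, and every per-partition limit
      check starts from it, so adjustments made while validating one
      partition cannot leak into the next.

          #include <stdint.h>
          #include <stdio.h>

          struct job_details_sketch {
                  uint64_t pn_min_memory;      /* working value, may be adjusted */
                  uint64_t orig_pn_min_memory; /* preserved original request */
          };

          /* Validate against one partition's limit using the preserved
           * original, not whatever a previous partition check left behind. */
          static int limits_check(struct job_details_sketch *d, uint64_t part_max_mem)
          {
                  d->pn_min_memory = d->orig_pn_min_memory; /* reset per partition */
                  return d->pn_min_memory <= part_max_mem;
          }

          int main(void)
          {
                  struct job_details_sketch d = { 0, 4096 };
                  uint64_t partitions[] = { 2048, 8192 };

                  for (int i = 0; i < 2; i++)
                          printf("partition %d: %s\n", i,
                                 limits_check(&d, partitions[i]) ? "ok" : "exceeds limit");
                  return 0;
          }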
  4. 09 May, 2018 18 commits
  5. 08 May, 2018 3 commits
    • Prevent slurmd from launching steps if prolog fails · 3b029021
      Brian Christiansen authored
      Bug 5146
    • Fix issue with invalid protocol_version when using srun on ppc64. · 77d65f4f
      Tim Wickberg authored
      Caused by a corrupted protocol_version field value being received
      by the slurmstepd, as we cannot safely write/read a uint16_t across
      the pipe as if it were an int.
      
      Regression caused by commit 90b116c2.
      
      Bug 5133.
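
      A minimal C sketch of this class of bug (illustrative only, not the
      actual slurmd/slurmstepd code): if one side of the pipe handles the
      value as an int while the other reads a uint16_t, a big-endian
      machine such as ppc64 places the meaningful bytes differently than
      a little-endian one, and the reader recovers a corrupted value.

          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
                  int fds[2];
                  if (pipe(fds) < 0)
                          return 1;

                  uint16_t protocol_version = 0x2200;

                  /* BROKEN: the writer widens through an int. On big-endian
                   * ppc64 the significant bytes sit in the other half of the
                   * int, so a reader pulling a uint16_t sees 0. */
                  int widened = protocol_version;
                  write(fds[1], &widened, sizeof(int));

                  uint16_t received;
                  read(fds[0], &received, sizeof(received));
                  printf("broken: sent 0x%x, received 0x%x\n", protocol_version, received);

                  char drain[2];
                  read(fds[0], drain, sizeof(drain)); /* discard leftover bytes */

                  /* FIX: both sides use the same fixed-width type. */
                  write(fds[1], &protocol_version, sizeof(protocol_version));
                  read(fds[0], &received, sizeof(received));
                  printf("fixed:  sent 0x%x, received 0x%x\n", protocol_version, received);
                  return 0;
          }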
    • Fix checkpointing requeued jobs in a bad state · f9f395af
      Brian Christiansen authored
      Requeued jobs are marked as PENDING|COMPLETING until the epilog
      checks in. The issue is that if job_set_alloc_tres gets called while
      the job is in the PENDING|COMPLETING state, the job's tres_alloc_str
      will be freed. If the job then gets checkpointed in this state
      (PENDING|COMPLETING plus no tres_alloc_str), on startup the
      controller would crash because it expected the job to have a
      tres_alloc_str/cnt when in the COMPLETING state. This could be
      triggered by starting the controller without the dbd up: when the
      dbd comes up, the assoc_cache_mgr calls _update_job_tres(), which
      calls job_set_alloc_tres. It could also be triggered by adding new
      tres.
      
      This most likely started happening in 17.11.5 because of commit
      865b672f, which introduced calling _update_job_tres() on each job
      after the dbd comes up.
      
      Bugs 5137, 4522.
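
      A minimal C sketch of one way to guard against the state described
      above, with hypothetical names and state flags (not the actual
      Slurm job_record code): on state restore, a COMPLETING job that has
      lost its TRES allocation string is repaired instead of crashing the
      controller.

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          #define JOB_PENDING    0x01
          #define JOB_COMPLETING 0x02

          struct job_sketch {
                  unsigned state;
                  char *tres_alloc_str; /* NULL if it was freed while requeued */
          };

          static void restore_job(struct job_sketch *job)
          {
                  if ((job->state & JOB_COMPLETING) && !job->tres_alloc_str) {
                          /* Rebuild rather than crash on a job checkpointed
                           * in the PENDING|COMPLETING window. */
                          job->tres_alloc_str = strdup("cpu=0,mem=0");
                  }
          }

          int main(void)
          {
                  struct job_sketch job = { JOB_PENDING | JOB_COMPLETING, NULL };
                  restore_job(&job);
                  printf("state=0x%x tres=%s\n", job.state, job.tres_alloc_str);
                  free(job.tres_alloc_str);
                  return 0;
          }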
  6. 04 May, 2018 2 commits
  7. 03 May, 2018 6 commits
  8. 02 May, 2018 4 commits