1. 08 Oct, 2013 1 commit
    • EpilogSlurmctld race condition/SEGV fix · 04f06338
      Morris Jette authored
      The EpilogSlurmctld pthread is now passed copies of its required
      arguments rather than a pointer to the job record, which under some
      conditions could be purged while the thread was still running,
      resulting in an invalid memory reference. A sketch of this pattern
      appears below.
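      The fix follows a common pthread lifetime pattern: rather than handing
      a detached thread a pointer into a table that another thread may purge,
      the spawning code copies the fields the thread needs into an argument
      struct that the thread itself owns and frees. A minimal sketch of that
      pattern, with hypothetical names (epilog_arg_t, _spawn_epilog) rather
      than the actual slurmctld code:

      #include <inttypes.h>
      #include <pthread.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      typedef struct {           /* hypothetical: copied fields, no pointers
                                  * into the controller's job table */
          uint32_t job_id;
          uint32_t user_id;
          char    *partition;    /* strdup'd copy, freed by the thread */
      } epilog_arg_t;

      static void *_run_epilog(void *x)
      {
          epilog_arg_t *arg = x;
          /* ... run the EpilogSlurmctld program with these values ... */
          printf("epilog for job %" PRIu32 " in partition %s\n",
                 arg->job_id, arg->partition);
          free(arg->partition);
          free(arg);             /* the thread owns its argument copy */
          return NULL;
      }

      /* Called while the job record is still valid; after this returns,
       * the record can be purged without invalidating anything the
       * detached thread still holds. */
      static int _spawn_epilog(uint32_t job_id, uint32_t user_id,
                               const char *partition)
      {
          epilog_arg_t *arg = malloc(sizeof(*arg));
          pthread_t tid;

          if (!arg)
              return -1;
          arg->job_id    = job_id;
          arg->user_id   = user_id;
          arg->partition = strdup(partition);
          if (pthread_create(&tid, NULL, _run_epilog, arg)) {
              free(arg->partition);
              free(arg);
              return -1;
          }
          pthread_detach(tid);
          return 0;
      }

      int main(void)
      {
          _spawn_epilog(137, 1000, "debug");
          pthread_exit(NULL);    /* exit main, let the epilog thread finish */
      }

      Because the thread owns its copies, no locking or reference counting
      on the job record is needed after _spawn_epilog() returns.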
  2. 02 Oct, 2013 1 commit
  3. 23 Sep, 2013 1 commit
  4. 13 Aug, 2013 2 commits
    • select/cons_res - Add test for zero node allocation · e180d341
      jette authored
      I don't see how this could happen, but it might explain something
      reported by Harvard University. In any case, this could prevent an
      infinite loop if the task distribution function is passed a job
      allocation with zero nodes; a sketch of the guard follows below.
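      A minimal sketch of the kind of guard described above, with
      illustrative names (_task_layout, SLURM_ERROR) rather than the actual
      select/cons_res code. Without the early return, a distribution loop
      that places tasks node by node can never make progress when the node
      count is zero:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      #define SLURM_SUCCESS   0
      #define SLURM_ERROR   (-1)

      static int _task_layout(uint32_t node_cnt, uint32_t task_cnt)
      {
          uint32_t placed = 0;

          if (node_cnt == 0) {   /* "should not happen": fail, don't spin */
              fprintf(stderr, "error: %" PRIu32 " tasks but zero nodes\n",
                      task_cnt);
              return SLURM_ERROR;
          }

          while (placed < task_cnt) {
              /* one pass of round-robin placement across the nodes */
              for (uint32_t n = 0; n < node_cnt && placed < task_cnt; n++)
                  placed++;      /* assign task "placed" to node "n" */
              /* with node_cnt == 0 the inner loop would never run, so
               * "placed" would never advance and this outer loop would
               * spin forever -- hence the guard above */
          }
          return SLURM_SUCCESS;
      }

      int main(void)
      {
          printf("zero nodes: %d\n", _task_layout(0, 2));  /* rejected */
          printf("one node:   %d\n", _task_layout(1, 2));  /* succeeds */
          return 0;
      }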
    • select/cons_res - Avoid extraneous "oversubscribe" error messages · 302d8b3f
      jette authored
      This problem was reported by Harvard University and could be
      reproduced with a command line of "srun -N1 --tasks-per-node=2 -O id".
      With other job types, the error message could be logged many times
      for each job. This change logs the error once per job, and only if
      the job request does not include the -O/--overcommit option; the
      pattern is sketched below.
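      The "log once per job" behavior can be sketched with a flag on the
      job record; the names below (job_record_t, oversub_warned) are
      illustrative, not Slurm's actual fields:

      #include <inttypes.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint32_t job_id;
          bool overcommit;       /* true when -O/--overcommit was given */
          bool oversub_warned;   /* already logged once for this job */
      } job_record_t;

      static void _warn_oversubscribe(job_record_t *job)
      {
          if (job->overcommit || job->oversub_warned)
              return;            /* expected with -O, or already reported */
          job->oversub_warned = true;
          fprintf(stderr, "error: oversubscribing CPUs for job %" PRIu32 "\n",
                  job->job_id);
      }

      int main(void)
      {
          job_record_t job  = { .job_id = 100, .overcommit = false };
          job_record_t ojob = { .job_id = 101, .overcommit = true };

          for (int i = 0; i < 3; i++)
              _warn_oversubscribe(&job);   /* logs exactly once */
          _warn_oversubscribe(&ojob);      /* logs nothing */
          return 0;
      }

      Running this prints the error exactly once for job 100 and nothing
      for job 101, which requested overcommit.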
  5. 05 Jul, 2013 1 commit
  6. 28 Jun, 2013 1 commit
  7. 25 Jun, 2013 1 commit
  8. 21 Jun, 2013 4 commits
  9. 19 Jun, 2013 1 commit
  10. 12 Jun, 2013 1 commit
  11. 11 Jun, 2013 2 commits
  12. 10 Jun, 2013 1 commit
  13. 06 Jun, 2013 1 commit
  14. 05 Jun, 2013 4 commits
  15. 04 Jun, 2013 3 commits
  16. 03 Jun, 2013 2 commits
    • Fix for job step allocation with required hostlist and exclusive option · 523b1992
      jette authored
      Previously, if the required node had no available CPUs left, other
      nodes in the job allocation would be used instead; a sketch of the
      corrected check follows below.
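      A hedged sketch of the corrected selection rule: when a step names
      required nodes and requests exclusive CPUs, a required node without
      enough free CPUs must fail the request rather than be silently
      replaced by another node in the allocation. All names below are
      illustrative, not the actual step-selection code:

      #include <stdbool.h>
      #include <stdio.h>

      #define NODE_CNT 3

      /* CPUs still unallocated on each node of the job allocation */
      static int avail_cpus[NODE_CNT] = { 0, 4, 4 };

      static bool _pick_step_nodes(const bool *required, int cpus_needed)
      {
          for (int i = 0; i < NODE_CNT; i++) {
              if (!required[i])
                  continue;
              if (avail_cpus[i] < cpus_needed) {
                  /* old behavior: skip this node and draw CPUs from
                   * elsewhere in the allocation; fixed behavior: the
                   * step cannot run as requested */
                  fprintf(stderr, "required node %d lacks free CPUs\n", i);
                  return false;
              }
          }
          return true;   /* every required node can host the step */
      }

      int main(void)
      {
          bool required[NODE_CNT] = { true, false, false };

          if (!_pick_step_nodes(required, 1))
              printf("step rejected (previously it ran on other nodes)\n");
          return 0;
      }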
    • restore max_nodes of desc to NO_VAL when checkpointing job · f82e0fb8
      Hongjia Cao authored
      We're having some trouble getting our slurm jobs to successfully
      restart after a checkpoint.  For this test, I'm using sbatch and a
      simple, single-threaded executable.  Slurm is 2.5.4, blcr is 0.8.5.
      I'm submitting the job using sbatch:
      
      $ sbatch -n 1 -t 12:00:00 bin/bowtie-ex.sh
      
      I am able to create the checkpoint and vacate the node:
      
      $ scontrol checkpoint create 137
      .... time passes ....
      $ scontrol vacate 137
      
      At that point, I see the checkpoint file from blcr in the current
      directory and the checkpoint file from Slurm
      in /var/spool/slurm-llnl/checkpoint.  However, when I attempt to
      restart the job:
      
      $ scontrol checkpoint restart 137
      scontrol_checkpoint error: Node count specification invalid
      
      In slurmctld's log (at level 7) I see:
      
      [2013-05-29T12:41:08-07:00] debug2: Processing RPC: REQUEST_CHECKPOINT(restart) from uid=*****
      [2013-05-29T12:41:08-07:00] debug3: Version string in job_ckpt header is JOB_CKPT_002
      [2013-05-29T12:41:08-07:00] _job_create: max_nodes == 0
      [2013-05-29T12:41:08-07:00] _slurm_rpc_checkpoint restart 137: Node count specification invalid
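      The failure and the fix can be sketched as follows; job_desc_t and
      the helper names are illustrative, though NO_VAL (0xfffffffe) is
      Slurm's usual sentinel for an unset 32-bit field. A checkpointed
      descriptor that saved max_nodes as 0 trips the max_nodes check on
      restart, so the fix restores the sentinel before the job is
      re-created:

      #include <stdint.h>
      #include <stdio.h>

      #define NO_VAL 0xfffffffe   /* Slurm's sentinel for an unset value */

      typedef struct {            /* illustrative stand-in for the job
                                   * description replayed at restart */
          uint32_t min_nodes;
          uint32_t max_nodes;
      } job_desc_t;

      /* The fix: a checkpointed descriptor carrying max_nodes == 0 really
       * means "never specified", so restore the sentinel before restart. */
      static void _ckpt_restore_desc(job_desc_t *desc)
      {
          if (desc->max_nodes == 0)
              desc->max_nodes = NO_VAL;
      }

      static int _job_create(const job_desc_t *desc)
      {
          uint32_t max = (desc->max_nodes != NO_VAL) ? desc->max_nodes
                                                     : desc->min_nodes;
          if (max == 0 || max < desc->min_nodes) {
              fprintf(stderr, "_job_create: max_nodes == 0\n");
              return -1;  /* seen as "Node count specification invalid" */
          }
          return 0;
      }

      int main(void)
      {
          job_desc_t desc = { .min_nodes = 1, .max_nodes = 0 };

          printf("without fix: %s\n", _job_create(&desc) ? "rejected" : "ok");
          _ckpt_restore_desc(&desc);   /* 0 -> NO_VAL */
          printf("with fix:    %s\n",  _job_create(&desc) ? "rejected" : "ok");
          return 0;
      }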
  17. 30 May, 2013 1 commit
  18. 29 May, 2013 1 commit
  19. 23 May, 2013 8 commits
  20. 22 May, 2013 3 commits