1. 06 Jun, 2013 1 commit
  2. 05 Jun, 2013 5 commits
  3. 04 Jun, 2013 3 commits
  4. 03 Jun, 2013 2 commits
      Start NEWS for v2.5.8 · c795724d
      Morris Jette authored
      restore max_nodes of desc to NO_VAL when checkpointing job · f82e0fb8
      Hongjia Cao authored
      We're having some trouble getting our Slurm jobs to restart successfully
      after a checkpoint. For this test, I'm using sbatch and a simple,
      single-threaded executable. Slurm is 2.5.4, BLCR is 0.8.5.
      I'm submitting the job using sbatch:
      
      $ sbatch -n 1 -t 12:00:00 bin/bowtie-ex.sh
      
      I am able to create the checkpoint and vacate the node:
      
      $ scontrol checkpoint create 137
      .... time passes ....
      $ scontrol vacate 137
      
      At that point, I see the checkpoint file from BLCR in the current
      directory and the checkpoint file from Slurm
      in /var/spool/slurm-llnl/checkpoint.  However, when I attempt to
      restart the job:
      
      $ scontrol checkpoint restart 137
      scontrol_checkpoint error: Node count specification invalid
      
      In slurmctld's log (at level 7) I see:
      
      [2013-05-29T12:41:08-07:00] debug2: Processing RPC: REQUEST_CHECKPOINT(restart) from uid=*****
      [2013-05-29T12:41:08-07:00] debug3: Version string in job_ckpt header is JOB_CKPT_002
      [2013-05-29T12:41:08-07:00] _job_create: max_nodes == 0
      [2013-05-29T12:41:08-07:00] _slurm_rpc_checkpoint restart 137: Node count specification invalid
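      A minimal sketch of the idea behind this fix, not the actual patch: when
      the job descriptor rebuilt from the checkpoint image carries
      max_nodes == 0, the NO_VAL sentinel has to be put back before the job is
      re-created, otherwise _job_create() rejects it with "Node count
      specification invalid". job_desc_msg_t, max_nodes, NO_VAL and
      slurm_init_job_desc_msg() come from slurm.h; the helper name and the
      main() driver are illustrative only.

      #include <stdio.h>
      #include <slurm/slurm.h>

      /* Illustration only: put the "unspecified" sentinel back so that
       * the restarted job's descriptor is accepted again instead of
       * failing _job_create() with max_nodes == 0. */
      static void restore_node_limits(job_desc_msg_t *desc)
      {
              if (desc->max_nodes == 0)
                      desc->max_nodes = NO_VAL;
      }

      int main(void)
      {
              job_desc_msg_t desc;

              slurm_init_job_desc_msg(&desc);  /* real API: set defaults */
              desc.max_nodes = 0;              /* simulate the bad saved value */
              restore_node_limits(&desc);
              printf("max_nodes restored to NO_VAL: %s\n",
                     desc.max_nodes == NO_VAL ? "yes" : "no");
              return 0;
      }

      Built against the Slurm development headers (e.g. gcc restore.c -lslurm);
      the only point is the NO_VAL restoration named in the commit title.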
  5. 31 May, 2013 1 commit
  6. 30 May, 2013 1 commit
  7. 29 May, 2013 1 commit
  8. 24 May, 2013 2 commits
  9. 23 May, 2013 6 commits
  10. 22 May, 2013 2 commits
  11. 21 May, 2013 1 commit
  12. 18 May, 2013 1 commit
  13. 16 May, 2013 2 commits
  14. 14 May, 2013 1 commit
  15. 13 May, 2013 1 commit
  16. 11 May, 2013 1 commit
      Added MaxCPUsPerNode partition configuration parameter. · e33c5d57
      Morris Jette authored
      This can be especially useful for scheduling GPUs. For example, a node can
      be associated with two Slurm partitions (e.g. "cpu" and "gpu"), and the
      "cpu" partition/queue could be limited to only a subset of the node's
      CPUs, ensuring that one or more CPUs remain available to jobs in the
      "gpu" partition/queue.
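      A minimal slurm.conf sketch of the two-partition setup described above,
      assuming a hypothetical node set "tux[01-04]" with 16 CPUs and 2 GPUs;
      MaxCPUsPerNode is the parameter this commit adds, everything else is
      ordinary node/partition configuration shown only for context (GRES
      definitions for the GPUs are omitted).

      # Nodes shared by both partitions
      NodeName=tux[01-04] CPUs=16 Gres=gpu:2
      # CPU-only jobs may use at most 14 of the 16 CPUs on any node,
      # leaving at least 2 CPUs free for jobs in the "gpu" partition.
      PartitionName=cpu Nodes=tux[01-04] Default=YES MaxCPUsPerNode=14
      PartitionName=gpu Nodes=tux[01-04]   # no limit; defaults to UNLIMITED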
  17. 10 May, 2013 1 commit
  18. 08 May, 2013 2 commits
  19. 02 May, 2013 3 commits
  20. 01 May, 2013 3 commits