- 12 Jun, 2015 2 commits
-
Brian Christiansen authored
Bug 1739
-
Brian Christiansen authored
Bug 1743
-
- 11 Jun, 2015 1 commit
-
Brian Christiansen authored
Bug 1733
-
- 10 Jun, 2015 1 commit
-
Morris Jette authored
-
- 09 Jun, 2015 2 commits
-
David Bigagli authored
-
Morris Jette authored
1. I submit a first job that uses 1 GPU:
       $ srun --gres gpu:1 --pty bash
       $ echo $CUDA_VISIBLE_DEVICES
       0
2. While the first one is still running, a 2-GPU job asking for 1 task per node waits (and I don't really understand why):
       $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
       srun: job 2390816 queued and waiting for resources
3. Whereas a 2-GPU job requesting 1 core per socket (so just 1 socket) actually gets GPUs allocated from two different sockets:
       $ srun -n 1 --cores-per-socket=1 --gres=gpu:2 -p testk --pty bash
       $ echo $CUDA_VISIBLE_DEVICES
       1,2
With this change, #2 works the same way as #3.
bug 1725
-
- 05 Jun, 2015 1 commit
-
Danny Auble authored
Only going to do this in the master as it may affect scripts. This reverts commit 454f78e6. Conflicts: NEWS
-
- 04 Jun, 2015 2 commits
-
David Bigagli authored
-
David Bigagli authored
-
- 03 Jun, 2015 1 commit
-
Morris Jette authored
switch/cray: Refine logic to set PMI_CRAY_NO_SMP_ENV environment variable. Rather than testing for the task distribution option, test the actual task IDs to see if they are monotonically increasing across all nodes. Based upon idea from Brian Gilmer (Cray).
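The monotonicity test this commit describes can be sketched as follows. This is an illustrative model only, assuming a per-node list of task IDs; the function name and data layout are invented for the example and are not Slurm's actual code.

```python
def task_ids_monotonic(task_ids_by_node):
    """Return True if task IDs increase monotonically across all nodes.

    task_ids_by_node: list of lists, task IDs per node in node order.
    (Illustrative stand-in for Slurm's internal layout, not real code.)
    """
    flat = [tid for node in task_ids_by_node for tid in node]
    return all(a < b for a, b in zip(flat, flat[1:]))

# Block distribution: IDs increase across nodes
print(task_ids_monotonic([[0, 1], [2, 3]]))   # True
# Cyclic distribution: IDs interleave across nodes
print(task_ids_monotonic([[0, 2], [1, 3]]))   # False
```

The point of testing actual IDs rather than the distribution option is that any layout yielding monotonically increasing IDs behaves the same, regardless of which option produced it.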
-
- 02 Jun, 2015 3 commits
-
Danny Auble authored
-
Danny Auble authored
afterward cause a divide by zero error.
-
Danny Auble authored
corruption if thread uses the pointer basing validity off the id. Bug 1710
-
- 01 Jun, 2015 1 commit
-
David Bigagli authored
-
- 30 May, 2015 1 commit
-
Danny Auble authored
-
- 29 May, 2015 5 commits
-
Brian Christiansen authored
Bug 1495
-
Morris Jette authored
Correct the count of CPUs allocated to a job on a system with hyperthreads. The bug was introduced in commit a6d3074d. On a system with hyperthreads, running:
    srun -n1 --ntasks-per-core=1 hostname
you would get:
    slurmctld: error: job_update_cpu_cnt: cpu_cnt underflow on job_id 67072
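The underflow above is the classic pattern of decrementing a job's CPU count by more than was recorded as allocated. A guarded decrement along these lines (names hypothetical; the actual fix corrects the count bookkeeping rather than merely detecting the mismatch) shows where the error path fires:

```python
def decrement_cpu_cnt(allocated, to_remove):
    """Decrement a job's allocated-CPU count, detecting underflow.

    Hypothetical sketch of the bookkeeping; mirrors the slurmctld
    "cpu_cnt underflow" error path rather than Slurm's real code.
    """
    if to_remove > allocated:
        raise ValueError("cpu_cnt underflow")
    return allocated - to_remove

print(decrement_cpu_cnt(4, 2))  # 2
```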
-
Morris Jette authored
preempt/job_prio plugin: Implement the concept of Warm-up Time here. Use the QoS GraceTime as the amount of time to wait before preempting. Basically, skip preemption if your time is not up.
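The warm-up check reduces to comparing a job's run time against its QoS GraceTime before considering it for preemption. A minimal sketch, with invented names and seconds as the unit:

```python
def skip_preemption(run_time_s, grace_time_s):
    """Warm-up: a job is not eligible for preemption until it has run
    at least as long as its QoS GraceTime (names are illustrative)."""
    return run_time_s < grace_time_s

print(skip_preemption(30, 60))   # True: still warming up, skip
print(skip_preemption(120, 60))  # False: grace expired, may preempt
```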
-
Morris Jette authored
-
Danny Auble authored
a job runs past its time limit.
-
- 28 May, 2015 1 commit
-
Brian Christiansen authored
Bug 1705
-
- 27 May, 2015 1 commit
-
Morris Jette authored
However, --mem=0 now reflects the appropriate amount of memory in the system; --mem-per-cpu=0 hasn't changed. This allows all of the memory to be allocated in a cgroup, but it is not "consumed" and remains available for other jobs running on the same host. Eric Martin, Washington University School of Medicine
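The semantics can be summarized as a simple resolution rule: a request of 0 stands for the node's full memory rather than a literal zero limit. The helper name and megabyte values below are invented for illustration:

```python
def resolve_mem_limit_mb(requested_mb, node_mem_mb):
    """Hypothetical helper: with this change, --mem=0 resolves to the
    node's full memory rather than a literal zero limit."""
    return node_mem_mb if requested_mb == 0 else requested_mb

print(resolve_mem_limit_mb(0, 128000))     # 128000: --mem=0 takes it all
print(resolve_mem_limit_mb(4096, 128000))  # 4096: explicit request kept
```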
-
- 26 May, 2015 1 commit
-
Morris Jette authored
Correct the list of unavailable nodes reported in a job's "reason" field when that job cannot start. bug 1614
-
- 22 May, 2015 1 commit
-
Morris Jette authored
bug 1679
-
- 21 May, 2015 1 commit
-
Danny Auble authored
-
- 20 May, 2015 2 commits
-
Brian Christiansen authored
Bug 1679
-
Morris Jette authored
-
- 19 May, 2015 1 commit
-
Morris Jette authored
switch/cray: Revert logic added to 14.11.6 that set "PMI_CRAY_NO_SMP_ENV=1" if CR_PACK_NODES is configured. bug 1585
-
- 16 May, 2015 1 commit
-
David Bigagli authored
-
- 15 May, 2015 2 commits
-
Morris Jette authored
preempt/job_prio plugin: Implement the concept of Warm-up Time here. Use the QoS GraceTime as the amount of time to wait before preempting. Basically, skip preemption if your time is not up.
-
Morris Jette authored
-
- 14 May, 2015 2 commits
-
David Bigagli authored
-
David Bigagli authored
-
- 13 May, 2015 3 commits
-
Brian Christiansen authored
Bug 1627
-
Brian Christiansen authored
-
Brian Christiansen authored
-
- 12 May, 2015 1 commit
-
Morris Jette authored
-
- 11 May, 2015 1 commit
-
Morris Jette authored
Make sure that old step data is purged when a job is requeued. Without this logic, if a job terminates abnormally, old step data may be left in slurmctld. If the job is then requeued and started on a different node, referencing that old job step data can result in abnormal events. One specific failure mode: if the job is requeued on a node with a different number of cores and the step-terminated RPC arrives later, the job and step bitmaps of allocated cores can differ in size, generating an abort. bug 1660
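The purge amounts to dropping every step record tied to the old incarnation of the job before it is started again, so no stale core bitmap can be consulted later. A toy model, with the record layout invented purely for illustration:

```python
# Toy model of per-job step records keyed by (job_id, step_id);
# the layout is invented for illustration, not slurmctld's structures.
steps = {
    (1001, 0): {"node": "n1", "core_bitmap_size": 16},
    (1001, 1): {"node": "n1", "core_bitmap_size": 16},
    (1002, 0): {"node": "n2", "core_bitmap_size": 32},
}

def purge_job_steps(steps, job_id):
    """Remove all step records for job_id, as a requeue should."""
    for key in [k for k in steps if k[0] == job_id]:
        del steps[key]

purge_job_steps(steps, 1001)
print(sorted(steps))  # [(1002, 0)]
```

Had job 1001 been requeued onto a node with a different core count, any surviving record with `core_bitmap_size` 16 would no longer match the new allocation; purging first removes that hazard.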
-
- 08 May, 2015 2 commits
-
Danny Auble authored
-
Brian Christiansen authored
Bug 1618
-