- 20 Mar, 2013 2 commits
  - Hongjia Cao authored
  - Danny Auble authored
    cluster.
- 19 Mar, 2013 3 commits
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored
- 14 Mar, 2013 4 commits
  - Danny Auble authored
  - Danny Auble authored
  - Danny Auble authored
  - Morris Jette authored
    Add milliseconds to the default log message header (both RFC 5424 and ISO 8601 time formats). Millisecond logging can be disabled with the configure parameter "--disable-log-time-msec". The default time format changes to ISO 8601 (without time zone information). Specify "--enable-rfc5424time" to restore the time zone information.
- 13 Mar, 2013 2 commits
  - Morris Jette authored
    Add milliseconds to the default log message header with the (default) RFC 5424 time format. Millisecond logging can be disabled with the configure parameter "--enable-rfc5424time-secs". A sample time stamp looks like this: "2013-03-13T14:28:17.767-07:00".
  - Morris Jette authored
    If a step requests more CPUs than are possible within the specified node count of the job allocation, return ESLURM_TOO_MANY_REQUESTED_CPUS immediately rather than returning ESLURM_NODES_BUSY and retrying.
- 12 Mar, 2013 1 commit
  - Morris Jette authored
- 11 Mar, 2013 3 commits
  - Nathan Yee authored
    Without this change, when the sbatch --export option is used, many Slurm environment variables are not set unless explicitly exported.
  - Danny Auble authored
  - Morris Jette authored
- 08 Mar, 2013 4 commits
  - Morris Jette authored
  - jette authored
    This problem would affect systems in which specific GRES are associated with specific CPUs. One possible result is that the CPUs identified as usable could be inappropriate, and the job would be held when trying to lay out the tasks on CPUs (all done as part of the job allocation process). The other problem is that if multiple GRES are linked to specific CPUs, there was a CPU bitmap OR which should have been an AND, resulting in some CPUs being identified as usable but not actually available to all GRES.
  - Danny Auble authored
    success
  - Stephen Trofinoff authored
- 07 Mar, 2013 1 commit
  - jette authored
    This problem would affect systems in which specific GRES are associated with specific CPUs. One possible result is that the CPUs identified as usable could be inappropriate, and the job would be held when trying to lay out the tasks on CPUs (all done as part of the job allocation process). The other problem is that if multiple GRES are linked to specific CPUs, there was a CPU bitmap OR which should have been an AND, resulting in some CPUs being identified as usable but not actually available to all GRES.
- 06 Mar, 2013 2 commits
  - Danny Auble authored
    options in srun, and push that logic to salloc and sbatch. Bug 201
  - Danny Auble authored
    and timeout in the runjob_mux trying to send in this situation. Bug 223
- 04 Mar, 2013 4 commits
  - Danny Auble authored
  - Magnus Jonsson authored
    Jobs are not backfilled because the backfill scheduler does not finish the complete backlog of queued jobs before it is interrupted and starts over again. We sometimes have many jobs of various sizes and users in the queue, and even with idle nodes a short job will not start because of this. I have written a patch for backfill with a configuration option (bf_continue) to let backfill continue.
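For reference, a sketch of how the bf_continue option might be enabled in slurm.conf; bf_continue and bf_max_job_user come from the commits in this log, but the pairing and value here are illustrative, not recommendations:

```
# Illustrative slurm.conf fragment (values are examples only)
SchedulerType=sched/backfill
SchedulerParameters=bf_continue,bf_max_job_user=50
```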
  - Morris Jette authored
    The original reservation data structure is deleted and its backup added to the reservation list, but jobs can retain a pointer to the original (now invalid) reservation data structure. Bug 250
  - Alejandro Lucero Palau authored
- 01 Mar, 2013 1 commit
  - Danny Auble authored
- 28 Feb, 2013 1 commit
  - Danny Auble authored
    energy data.
- 27 Feb, 2013 2 commits
  - Danny Auble authored
  - Matthieu Hautreux authored
- 26 Feb, 2013 3 commits
  - Morris Jette authored
    Without this fix, jobs that should be initiated by the backfill scheduler based upon the preemption of other jobs will not be started.
  - Danny Auble authored
  - Danny Auble authored
- 25 Feb, 2013 1 commit
  - Danny Auble authored
    cnode does not have a job running on it do not resume the block.
- 22 Feb, 2013 3 commits
  - Morris Jette authored
    Select/cons_res - If the job request specified --ntasks-per-socket and the allocation is using cores, then pack the tasks onto the sockets up to the specified value. Previously it would ignore the ntasks-per-socket parameter and distribute tasks across sockets.
  - Danny Auble authored
    --enable-debug.
  - Morris Jette authored
    Counts would previously go negative as jobs terminate and decrement from a base value of zero.
- 21 Feb, 2013 2 commits
  - Danny Auble authored
  - Matthieu Hautreux authored
    to EINTR when something went wrong between the open call and its return. By ensuring that Slurm retries on such errors, we can better tolerate network file system errors at launch time.
- 20 Feb, 2013 1 commit
  - Danny Auble authored
    (>5000) and using the SchedulerParameters option bf_max_job_user. NEWS note for last few commits