- 30 Apr, 2015 5 commits
-
-
Morris Jette authored
In the slurmctld communication agent, make the thread timeout the configured value of MessageTimeout (or 30 seconds, whichever is larger) rather than a fixed 30 seconds.
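For illustration only: with a slurm.conf setting such as the one below (the 60-second value is made up), the agent threads would now wait 60 seconds rather than the previous fixed 30, since the larger of MessageTimeout and 30 seconds is used.
    MessageTimeout=60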
-
Morris Jette authored
-
Morris Jette authored
Fix scancel bug which could return an error on an attempt to signal a job step. A simple "scancel 12.3" to signal a specific job step would fail. Adding another option (say "-i", "--partition=", etc.) would avoid the problem.
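An illustration of the affected commands, per the commit message (job and step IDs are hypothetical):
    scancel 12.3        # signal step 3 of job 12; previously this could fail with an error
    scancel -i 12.3     # adding an unrelated option worked around the bug before this fix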
-
David Bigagli authored
-
David Bigagli authored
-
- 29 Apr, 2015 7 commits
-
-
Morris Jette authored
Modify slurmctld's parsing of a job_id string for the job_signal and job_requeue calls to treat a job ID value of "#_*" as representing all tasks of job ID "#". Previously this was treated as invalid input. Also set the last_job_update time so that if a pending job is killed, that is reported immediately by "squeue -i#" (previously it might keep reporting stale data).
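For example (the job ID is hypothetical), both of the following now behave as expected:
    scancel 1234_*      # signal every task of job 1234; previously rejected as invalid input
    squeue -i5 -j 1234  # re-poll every 5 seconds; a killed pending job is now reported promptly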
-
Morris Jette authored
Trying to avoid having technical questions sent to "sales@schedmd.com"
-
Morris Jette authored
-
jette authored
This prevents the queued scheduling thread from starting if the main scheduling loop is still running.
-
Danny Auble authored
This reverts commit f9ebf5ad. Conflicts: src/plugins/select/alps/basil_interface.c
-
Danny Auble authored
before ending the job.
-
Danny Auble authored
will make it so the slurmctld will not signal the apids in a batch job. Instead it relies on the RPC coming from the slurmctld to kill the job to end things correctly.
-
- 28 Apr, 2015 9 commits
-
-
Morris Jette authored
Make this the minimum time between the end of one scheduling cycle and the start of the next (rather than measuring between the start times of both). Set the default value to 1,000,000 microseconds for Cray/ALPS systems.
-
Morris Jette authored
Refactor scancel so that all pending jobs are cancelled before cancellation of running jobs begins. Otherwise the two operations happen in parallel and pending jobs can be scheduled onto the resources freed as the running jobs are cancelled.
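For example (the user name is hypothetical), a bulk cancellation such as
    scancel --user=alice
now removes alice's pending jobs before her running jobs, so none of the pending jobs can be started on the resources freed while the running jobs are torn down.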
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
Expect doesn't seem to be reading the full buffer, so the test was modified to pass. Expect is only looking at the first 100 lines of output from "sbatch --help".
-
Morris Jette authored
Minor revisions to the logic and documentation of commit 26624602.
-
jette authored
Add SchedulerParameters option of sched_min_interval, which controls the minimum time interval between any job scheduling actions. The default value is zero (disabled). Bug 1623.
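An example slurm.conf entry (the value is in microseconds; the one-second setting shown here is illustrative and matches the Cray/ALPS default mentioned in the entry above):
    SchedulerParameters=sched_min_interval=1000000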
-
- 24 Apr, 2015 3 commits
-
-
Morris Jette authored
Initialize some variables used with the srun --no-alloc option; leaving them uninitialized could cause random failures.
-
Morris Jette authored
-
Morris Jette authored
-
- 23 Apr, 2015 2 commits
-
-
Morris Jette authored
-
Danny Auble authored
-
- 22 Apr, 2015 5 commits
-
-
Danny Auble authored
job. This is already handled on the stepd when the script finishes.
-
Danny Auble authored
often an inventory request is handled.
-
Danny Auble authored
-
Brian Christiansen authored
-
Brian Christiansen authored
-
- 21 Apr, 2015 7 commits
-
-
Danny Auble authored
free afterwards would not have zeroed out memory on the variables that didn't get unpacked.
-
Danny Auble authored
-
David Bigagli authored
-
Morris Jette authored
Modify sbatch to stop parsing the script for "#SBATCH" directives after the first command. It keeps parsing as long as lines contain only white space or comments (i.e., the first non-white-space character is '#').
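A small illustrative batch script (option values are made up): the first two directives are honored because only a comment and a blank line precede them, while the last directive is ignored because it follows the first command.
    #!/bin/bash
    #SBATCH --time=10
    # an ordinary comment; directive parsing continues past this line

    #SBATCH --ntasks=2
    hostname
    #SBATCH --mem=100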
-
David Bigagli authored
-
Morris Jette authored
-
Morris Jette authored
bug 1608
-
- 20 Apr, 2015 2 commits
-
-
Morris Jette authored
Add SchedulerParameters option of "sched_max_job_start=" to limit the number of jobs that can be started in any single execution of the main scheduling logic.
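A hedged example of the new option in slurm.conf (the limit of 100 is arbitrary):
    SchedulerParameters=sched_max_job_start=100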
-
Danny Auble authored
-