- 15 Sep, 2014 1 commit
David Bigagli authored
-
- 13 Sep, 2014 1 commit
Danny Auble authored
s_p_options_t struct.
-
- 11 Sep, 2014 3 commits
Morris Jette authored
The CPU specification enforcement is strict rather than advisory.
-
Danny Auble authored
warning.
-
Danny Auble authored
of cpus in the job_resources_t structure so that, as nodes finish, the correct cpu count is displayed in the user tools.
-
- 10 Sep, 2014 2 commits
Morris Jette authored
Previous logic would only make available the CPUs associated with the first N GRES, where N is the number of requested GRES. CPUs which might be made available by using different GRES were not considered available. bug 1092
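For context, the CPU-to-GRES association at issue is the kind configured in gres.conf. A minimal sketch, assuming a node with two GPUs each tied to one socket (device paths and CPU ranges are illustrative only):
    # gres.conf (hypothetical layout)
    Name=gpu File=/dev/nvidia0 CPUs=0-7
    Name=gpu File=/dev/nvidia1 CPUs=8-15
With the old logic, a job requesting one GPU would only be offered CPUs 0-7, even when CPUs 8-15 (usable via the second GPU) were the ones actually free.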
-
Danny Auble authored
-
- 09 Sep, 2014 2 commits
Morris Jette authored
Eliminate race condition in enforcement of MaxJobCount limit for job arrays. The job count limit was checked for a job array before setting the slurmctld job locks. If new jobs were submitted between the test and the job array creation such that the job array creation would result in MaxJobCount being exceeded, then a fatal error would result. bug 1091
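The limit involved is the MaxJobCount parameter in slurm.conf. A hedged illustration of the kind of concurrent submission that could previously hit the race (values and script name are made up):
    # slurm.conf
    MaxJobCount=10000
    # two clients submitting large arrays near the limit at about the same time
    sbatch --array=0-4999 job.sh
    sbatch --array=0-4999 job.sh
With the fix the count is checked while the job locks are held, so an over-limit array submission can be refused instead of producing the fatal error.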
-
Danny Auble authored
message back. On slow systems with many associations this could improve responsiveness in sacctmgr after adding associations.
-
- 04 Sep, 2014 2 commits
Morris Jette authored
-
David Bigagli authored
Fix error handling for job array create failure due to inability to copy job files (script and environment). See bug 1077
-
- 03 Sep, 2014 5 commits
David Bigagli authored
are hard links to the first element's specification files. If the controller fails to make the links, the files are copied instead.
-
Danny Auble authored
reserved for higher priority jobs.
-
Danny Auble authored
-
Danny Auble authored
since the slurmd will send the same message.
-
Danny Auble authored
correctly.
-
- 30 Aug, 2014 1 commit
Danny Auble authored
ran inside the allocation can read the environment correctly.
-
- 28 Aug, 2014 4 commits
Morris Jette authored
This fixes some problems creating advanced reservations on heterogeneous systems, especially when core counts are specified in the reservation. bug 1068
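The problem case is a core-based reservation spanning nodes with differing core counts. A sketch of the kind of request affected, using hypothetical node and reservation names:
    scontrol create reservation ReservationName=maint Users=root \
        StartTime=now Duration=120 Nodes=tux[0-1] CoreCnt=4,8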
-
Hongjia Cao authored
-
Morris Jette authored
Make "srun --gres=none ..." work when executed without a job allocation (i.e. srun creates the allocation plus the step). Previous logic would try to create the job with a gres value of "none".
-
Morris Jette authored
Fix for possible error if job has GRES, but the step explicitly requests a GRES count of zero.
-
- 27 Aug, 2014 1 commit
Danny Auble authored
-
- 26 Aug, 2014 3 commits
Bjørn-Helge Mevik authored
-
Danny Auble authored
and only caused confusion; since the cpu_bind options mostly refer to a step, we opted to only allow srun to set them in future versions.
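That is, binding is expected to be requested at step launch, roughly like the following (the binding choice and program name are only examples):
    srun --cpu_bind=cores ./app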
-
Morris Jette authored
Defer job step initiation if required GRES are in use by other steps rather than immediately returning an error. bug 1056
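A hedged sketch of the new behavior inside a single-GPU allocation (the step commands are placeholders):
    salloc -N1 --gres=gpu:1
    srun --gres=gpu:1 ./step_a &   # this step holds the GPU
    srun --gres=gpu:1 ./step_b     # now waits for the GRES instead of failing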
-
- 25 Aug, 2014 2 commits
Danny Auble authored
had --network= specified.
-
Danny Auble authored
ProfileHDF5Dir directory as well as all its sub-directories and files.
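ProfileHDF5Dir is set in acct_gather.conf alongside the HDF5 profile plugin selection in slurm.conf; a minimal sketch with an illustrative path:
    # slurm.conf
    AcctGatherProfileType=acct_gather_profile/hdf5
    # acct_gather.conf
    ProfileHDF5Dir=/var/spool/slurm/profile_data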
-
- 23 Aug, 2014 2 commits
Kilian Cavalotti authored
be used for AcctGatherFilesystemType.
-
Kilian Cavalotti authored
-
- 21 Aug, 2014 3 commits
Morris Jette authored
srun properly interprets a leading "." in the executable name based upon the working directory of the compute node rather than the submit host.
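For example, assuming the job's working directory also exists on the compute node:
    cd /shared/project && srun ./a.out
    # "./a.out" is resolved against the working directory on the compute
    # node, not against the directory layout of the submit host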
-
Danny Auble authored
script that will use it.
-
Danny Auble authored
states.
-
- 20 Aug, 2014 1 commit
Danny Auble authored
has finished)
-
- 19 Aug, 2014 5 commits
Morris Jette authored
-
Danny Auble authored
timeout when talking to the database is the same timeout, so a race condition could occur in the requesting client when receiving the response if the database is unresponsive.
-
Danny Auble authored
-
Danny Auble authored
the entire cnode.
-
Morris Jette authored
Fix SelectTypeParameters=CR_PACK_NODES for srun making both the job and step resource allocations. Previously a stand-alone srun command would distribute tasks evenly across the nodes.
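The configuration in question, roughly (pairing with CR_Core is an assumption for the example, and ./app is a placeholder):
    # slurm.conf
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core,CR_PACK_NODES

    srun -N2 -n8 ./app
    # tasks should now be packed onto the first node before spilling onto
    # the second, rather than being spread 4 and 4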
-
- 18 Aug, 2014 1 commit
Morris Jette authored
Start a job in the highest priority partition possible, even if it requires preempting other jobs and delaying initiation, rather than using a lower priority partition. Previous logic would preempt lower priority jobs, but then might start the job in a lower priority partition and not use the resources released by the preempted jobs. bug 1032
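A hedged sketch of the kind of configuration and submission involved (partition names, node names and priorities are made up):
    # slurm.conf
    PreemptType=preempt/partition_prio
    PreemptMode=REQUEUE
    PartitionName=high Nodes=tux[0-9] Priority=10 Default=NO
    PartitionName=low  Nodes=tux[0-9] Priority=1  Default=YES

    sbatch -p high,low job.sh   # job may run in either partition
With the fix, if "high" can only be used by preempting, the job still starts in "high" after the preemption rather than falling back to "low".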
-
- 12 Aug, 2014 1 commit
David Bigagli authored
-