- 02 Mar, 2019 3 commits
- Michael Hinton authored: If a job tries to reset the GPU frequency and NVML is not supported, log "GpuFreq=control_disabled". The test of this function recognizes that message and exits.
- Michael Hinton authored
- Michael Hinton authored
- 01 Mar, 2019 20 commits
- Michael Hinton authored
- Danny Auble authored
- Brian Christiansen authored: Continuation of 9a243a1a. Bug 6592.
- Danny Auble authored
- Danny Auble authored
- Danny Auble authored: Also mention the FIXMEs that need to be addressed in this code.
- Tim Wickberg authored
- Tim Wickberg authored: Generated by autoreconf; should never be checked in.
- Danny Auble authored
- Danny Auble authored
- Danny Auble authored
- Danny Auble authored: This portion of the code only removes the Submit limits from the partitions the job was not going to run on. Since the job is not running on those partitions, the node bitmap does not need to be touched.
- Danny Auble authored
- Danny Auble authored: The test was relying on seeing future jobs, which is no longer how this works.
- Tim Wickberg authored
- Tim Wickberg authored: Use SUBDIRS one level above to control the build instead of conditionalizing the build inside src/plugins/gpu/nvml/Makefile.am.
- Brian Christiansen authored: Just wait for the node to register with Slurm; if it does not register within ResumeTimeout, the node will be marked down. Continuation of d42889fb. Bug 6587.
- Brian Christiansen authored
- Broderick Gardner authored: During a maintenance reservation, every 5 seconds the slurmctld was sending a message to the slurmdbd reporting that the node's down/drain state was also in a maintenance state. We really only need to report this once. Bug 6487.
- Broderick Gardner authored: Now prints the combined state (DRAIN+MAINT+IDLE); before, it would print just MAINT. Bug 6487.
- 28 Feb, 2019 17 commits
- Broderick Gardner authored: Bug 6487.
- Alejandro Sanchez authored: Bug 4296.
- Alejandro Sanchez authored: This will be used in future commits. Bug 4296.
- Marshall Garey authored: GrpTRES behavior was changed by commits 7d69a43a and 158b88de so that each job allocation is no longer counted separately toward the GrpTRES limit. This is important for the GrpNodes limit. Bug 5303.
- Brian Christiansen authored: Avoid having to send the alias list for cloud nodes that exist in DNS. This is especially important for large cloud environments (e.g. thousands of nodes), where the alias list environment variable would be too large for execve(). Bug 6589.
- Morris Jette authored: Fix some logic committed in 6705b63c that caused test7.17 to fail.
- Brian Christiansen authored: Bug 6587.
- Brian Christiansen authored: This is useful in a cloud environment where nodes come and go from DNS. Bug 6592.
- Dominik Bartkiewicz authored: Bug 6445.
- Marshall Garey authored: Bug 6519.
- Alejandro Sanchez authored
- Brian Christiansen authored: Continuation of 324404de. Bug 6433.
- Brian Christiansen authored: Continuation of c2cdde85. Bug 6433.
- Dominik Bartkiewicz authored: If a GrpNodes limit is configured in an association, partition QOS, or job QOS, then favor use of nodes already allocated to that entity. This results in the configured node "Weight" being incremented by one for nodes that are not preferred. Consider adjusting configured node "Weight" values to achieve the desired node preferences. Bug 5303.
- Alejandro Sanchez authored
- Tim Wickberg authored: Same as the salloc/sbatch --gres option. Bug 6582.
- Alejandro Sanchez authored