- 21 May, 2019 15 commits
-
-
Danny Auble authored
Bug 5562.
-
Danny Auble authored
-
Alejandro Sanchez authored
Node memory overallocation would not be properly detected, since available memory was interpreted simply as RealMemory - MemSpecLimit, ignoring other jobs' memory usage (see the sketch after the related commits below). Bug 5562.
-
Alejandro Sanchez authored
This compares a job's memory request against each selected node's available memory, interpreting the latter for now as RealMemory - MemSpecLimit. Bug 5562.
-
Alejandro Sanchez authored
Place all three memory cases (per CPU, per node, and all-node memory) in a single loop, since all three cases need to traverse all nodes selected in job_resources. Preparation for a follow-up commit that contains the real fix. Bug 5562.
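A minimal sketch of the consolidated check described by these three commits, using hypothetical field and function names rather than Slurm's actual job_resources structures: one pass over the selected nodes covers all three request types, with available memory computed as RealMemory - MemSpecLimit minus memory already held by other jobs.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
	uint64_t real_memory;    /* configured RealMemory (MB) */
	uint64_t mem_spec_limit; /* MemSpecLimit reserved for slurmd (MB) */
	uint64_t alloc_memory;   /* memory held by other jobs (MB) */
} node_mem_t;

enum mem_req { MEM_PER_CPU, MEM_PER_NODE, MEM_ALL_NODE };

static bool job_mem_fits(const node_mem_t *nodes, int node_cnt,
			 const uint16_t *cpus_per_node,
			 enum mem_req type, uint64_t req_mb)
{
	for (int i = 0; i < node_cnt; i++) {
		uint64_t avail = nodes[i].real_memory -
				 nodes[i].mem_spec_limit;
		uint64_t need;

		/* The actual fix: account for other jobs' usage too. */
		if (avail < nodes[i].alloc_memory)
			return false;
		avail -= nodes[i].alloc_memory;

		switch (type) {
		case MEM_PER_CPU:   /* --mem-per-cpu */
			need = req_mb * cpus_per_node[i];
			break;
		case MEM_PER_NODE:  /* --mem */
			need = req_mb;
			break;
		default:            /* --mem=0: whole-node memory */
			need = nodes[i].real_memory -
			       nodes[i].mem_spec_limit;
			break;
		}
		if (need > avail)
			return false;
	}
	return true;
}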
-
Morris Jette authored
Move common (or similar) logic to globals and remove it from the individual tests.
-
Tim Wickberg authored
-
Tim Wickberg authored
Add handling for acct_gather_energy/xcc and acct_gather_profile/influxdb. Bug 6829.
-
Tim Wickberg authored
Bug 5773.
-
Danny Auble authored
No functional change. Bug 6508.
-
Dominik Bartkiewicz authored
Bug 6508.
-
Alejandro Sanchez authored
Previously, when no memory was explicitly requested, the job was assigned the DefMemPer[CPU|Node] from the first partition in its list (or the cluster-wide value if that partition wasn't configured with one), even when evaluating against a different partition. Bug 6950.
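A hedged sketch of the corrected lookup, with hypothetical names (the real code works with Slurm's partition and slurm_conf structures): the default is resolved against the partition actually being evaluated, falling back to the cluster-wide value only when that partition sets none.

#include <stdint.h>

#define NO_VAL64 ((uint64_t) 0xfffffffffffffffe)

typedef struct {
	uint64_t def_mem_per_cpu; /* DefMemPerCPU; NO_VAL64 if unset */
} part_info_t;

static uint64_t cluster_def_mem_per_cpu = 4096; /* cluster-wide default */

static uint64_t default_mem_per_cpu(const part_info_t *eval_part)
{
	/* Use the evaluated partition's own default when it has one,
	 * not the first partition in the job's list. */
	if (eval_part->def_mem_per_cpu != NO_VAL64)
		return eval_part->def_mem_per_cpu;
	return cluster_def_mem_per_cpu;
}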
-
Dominik Bartkiewicz authored
No functional change. Bug 5303.
-
Tim Wickberg authored
Bug 7072.
-
Dominik Bartkiewicz authored
Bug 6845.
-
- 20 May, 2019 5 commits
-
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
Log the ID of individual sub-tests to more easily identify which sub-test failed, rather than having to scan and compare the various execute lines in the tests.
-
Morris Jette authored
A batch job will run on a front-end node, not an assigned compute node.
-
Morris Jette authored
In version 19.05, when a job ID is specified, the sacct command will find all instances of that job ID run at any time. That means that if job ID numbers wrap around, this test will always fail. This adds a start time of 00:00 (midnight of the current day) to the sacct command to avoid problems with wrapping job IDs and make this test work more like it did in version 18.08. Note this test does have a very tiny window for failure if the test program ran just before midnight and the sacct command to view its state ran just after midnight. Given that the entire test only runs for about a minute, that is unlikely in practice.
-
- 18 May, 2019 1 commit
-
-
Morris Jette authored
Change "Could not for..." to "Could not find ..."
-
- 17 May, 2019 8 commits
-
-
Morris Jette authored
Do not affect non-test jobs with the test Lua script, to avoid impacting jobs outside of this specific test. Bug 7050.
-
Nate Rini authored
Bug 7050.
-
Morris Jette authored
Previous logic only checked the first GPU record found, which will not work reliably if the first GPU type is on one socket and the next GPU type is on a different socket or itself spans sockets.
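As an illustration of the fix, with hypothetical structures (not the actual gres plugin types): the corrected logic must walk every GPU record rather than stopping at the first one.

typedef struct gres_rec {
	const char *type;      /* e.g. "tesla", "volta" */
	int socket;            /* socket this GRES is bound to */
	struct gres_rec *next;
} gres_rec_t;

static int count_gpus_on_socket(const gres_rec_t *list, int socket)
{
	int cnt = 0;

	/* Before the fix only the first record was consulted; walk
	 * every record so GPU types on other sockets are seen too. */
	for (const gres_rec_t *g = list; g; g = g->next)
		if (g->socket == socket)
			cnt++;
	return cnt;
}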
-
Morris Jette authored
The wrong variable was being used, resulting in a node's "gres" string not containing the proper socket identification for GRES bound to sockets.
-
Morris Jette authored
This change adds a job name to all jobs spawned by the test. It also explicitly sets the MPI type to none, which is required by some of the tests when using OpenMPI in multi-slurmd mode. See the note in test1.88 for a full description of OpenMPI's limitations in this Slurm mode.
-
Tim Wickberg authored
-
Tim Wickberg authored
This is select/cons_res, not select/cons_tres.
-
Morris Jette authored
Previous select/cons_res logic would allocate one CPU per task on the node. Bug 6981.
-
- 16 May, 2019 11 commits
-
-
Dominik Bartkiewicz authored
Bug 6221.
-
Morris Jette authored
Previous select/cons_tres logic would allocate one CPU per task on the node. Bug 6981.
-
Morris Jette authored
Modify task layout with the --overcommit option plus a heterogeneous job allocation so that cyclic task distribution can start before all CPUs on all nodes are fully allocated. The number of tasks per node is unchanged from the previous algorithm, but tasks are distributed in a cyclic fashion first and the extra tasks are then placed on nodes with more CPUs. Previously all CPUs were fully allocated in a cyclic fashion, then excess tasks distributed evenly across all allocated nodes. Bug 6981.
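A sketch of the described layout (hypothetical helper, not Slurm's actual dist_tasks code): a cyclic pass places one task per node per sweep until each node's CPU count is reached, then any overcommitted excess goes to whichever node is least loaded relative to its CPU count, which favors nodes with more CPUs.

#include <stdint.h>

static void layout_tasks(int ntasks, int nnodes, const uint16_t *cpus,
			 uint16_t *tasks_per_node)
{
	int placed = 0;

	for (int i = 0; i < nnodes; i++)
		tasks_per_node[i] = 0;

	/* Pass 1: cyclic distribution, one task per node per sweep,
	 * until every node's CPUs are matched by tasks. */
	while (placed < ntasks) {
		int progress = 0;
		for (int i = 0; i < nnodes && placed < ntasks; i++) {
			if (tasks_per_node[i] < cpus[i]) {
				tasks_per_node[i]++;
				placed++;
				progress++;
			}
		}
		if (!progress)
			break; /* all CPUs allocated; overcommit next */
	}

	/* Pass 2 (--overcommit): each excess task goes to the node
	 * least loaded relative to its CPU count, so nodes with more
	 * CPUs absorb the extras. */
	while (placed < ntasks) {
		int best = 0;
		for (int i = 1; i < nnodes; i++)
			if ((uint32_t) tasks_per_node[i] * cpus[best] <
			    (uint32_t) tasks_per_node[best] * cpus[i])
				best = i;
		tasks_per_node[best]++;
		placed++;
	}
}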
-
Morris Jette authored
OpenMPI can only run in multi-slurmd mode if no more than one node has more than one task. Nodes with more than one task use shared memory for communications, and if more than one node does that, their shared memory use collides. This means these MPI tests will work if five or more nodes are available; otherwise some tests will fail. See test1.117 for a variation of this test that works with OpenMPI in multi-slurmd mode.
-
Dominik Bartkiewicz authored
Bug 6969.
-
Dominik Bartkiewicz authored
Bug 6969.
-
Dominik Bartkiewicz authored
Add a warning to slurm.h.in that no new reservation flags can be stored in slurmdbd in 19.05 (although they could still be used by slurmctld without issue). Note that the underlying RPC still uses uint32_t; this will be changed before 20.02 on master, and changing the column to uint32_t in 19.05 just to change it again in 20.02 is best avoided. Bug 6969.
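Purely as an illustration of the constraint (the flag name below is hypothetical; real flags live in slurm.h.in): any reservation flag that must travel through the 19.05 RPC has to fit in the lower 32 bits, or it would be truncated when packed as uint32_t.

#include <assert.h>
#include <stdint.h>

#define RESERVE_FLAG_EXAMPLE ((uint64_t) 1 << 30) /* hypothetical */

/* Until the RPC field widens in 20.02, a flag above bit 31 cannot
 * survive the uint32_t pack/unpack. */
static_assert(RESERVE_FLAG_EXAMPLE <= UINT32_MAX,
	      "reservation flag must fit in the uint32_t RPC field");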
-
Morris Jette authored
OpenMPI can only run in multi-slurmd mode if no more than one node has more than one task. Nodes with more than one task use shared memory for communications, and if more than one node does that, their shared memory use collides. This means these MPI tests will work if five or more nodes are available; otherwise some tests will fail. See test1.117 for a variation of this test that works with OpenMPI in multi-slurmd mode.
-
Morris Jette authored
Replace spaces with tabs in test.
-
Tim Wickberg authored
-
Morris Jette authored
A job with --gpus=1 --nodes=2 is not currently supported.
-