- 25 May, 2012 1 commit
-
-
Don Albert authored
I have implemented the changes as you suggested: using a "-dd" option to indicate that the display of the script is wanted, and setting both the "SHOW_DETAIL" and a new "SHOW_DETAIL2" flag. Since "scontrol" can be run interactively as well, I added a new "script" option to indicate that display of both the script and the details is wanted if the job is a batch job.

Here are the man page updates for "man scontrol".

For the "-d, --details" option:

    -d, --details
        Causes the show command to provide additional details where available.
        Repeating the option more than once (e.g., "-dd") will cause the show
        job command to also list the batch script, if the job was a batch job.

For the interactive "details" option:

    details
        Causes the show command to provide additional details where available.
        Job information will include CPUs and NUMA memory allocated on each
        node. Note that on computers with hyperthreading enabled and SLURM
        configured to allocate cores, each listed CPU represents one physical
        core. Each hyperthread on that core can be allocated a separate task,
        so a job's CPU count and task count may differ. See the --cpu_bind and
        --mem_bind option descriptions in srun man pages for more information.
        The details option is currently only supported for the show job
        command. To also list the batch script for batch jobs, in addition to
        the details, use the script option described below instead of this
        option.

And for the new interactive "script" option:

    script
        Causes the show job command to list the batch script for batch jobs
        in addition to the detail information described under the details
        option above.

Attached are the patch file for the changes and a text file with the results of the tests I did to check out the changes. The patches are against SLURM 2.4.0-rc1.

-Don Albert-
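For illustration, a usage sketch of the new behavior (hedged; the job ID is made up and the exact output depends on the site):

    # One -d adds detail; repeating it (-dd) also lists the batch script
    $ scontrol -d show job 1234
    $ scontrol -dd show job 1234

    # Interactive equivalent, using the new "script" option described above
    $ scontrol
    scontrol: script
    scontrol: show job 1234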
-
- 24 May, 2012 3 commits
-
-
Danny Auble authored
compiling with --enable-debug
-
Jon Bringhurst authored
The purpose of this is so moab scripts and commands (such as 'checkjob') have consistent access to the SUBMITHOST variable.
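A minimal sketch of how a site script might consume it (hypothetical wrapper; assumes SUBMITHOST is present in the environment the Moab tooling sees):

    # Hypothetical site-local wrapper around 'checkjob'
    if [ -n "$SUBMITHOST" ]; then
        echo "submit host: $SUBMITHOST"
    fi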
-
Danny Auble authored
-
- 23 May, 2012 3 commits
-
-
Danny Auble authored
-
Danny Auble authored
isn't up at the time the slurmctld starts, not running the priority/multifactor plugin, and then the database is started up later.
-
Morris Jette authored
-
- 22 May, 2012 1 commit
-
-
Danny Auble authored
-
- 16 May, 2012 4 commits
-
-
Morris Jette authored
Cray - Improve support for zero compute node resource allocations. The partition used can now be configured with no nodes.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
- 11 May, 2012 1 commit
-
-
Danny Auble authored
-
- 10 May, 2012 1 commit
-
-
Morris Jette authored
-
- 09 May, 2012 2 commits
-
-
Don Lipari authored
The symptom is that SLURM schedules lower priority jobs to run when higher priority, dependent jobs have their dependencies satisfied. This happens because dependent jobs still have a priority of 1 when the job queue is sorted in the schedule() function. The proposed fix forces jobs to have their priority updated when their dependencies are satisfied.
-
Don Lipari authored
The symptom is that SLURM schedules lower priority jobs to run when higher priority, dependent jobs have their dependencies satisfied. This happens because dependent jobs still have a priority of 1 when the job queue is sorted in the schedule() function. The proposed fix forces jobs to have their priority updated when their dependencies are satisfied.
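A hedged way to observe the symptom (job IDs, script names, and the squeue format string are illustrative):

    # Submit a job, then a second job that depends on it
    $ sbatch job_a.sh
    Submitted batch job 100
    $ sbatch --dependency=afterok:100 job_b.sh
    Submitted batch job 101
    # While the dependency is unsatisfied, the dependent job's priority remains 1
    $ squeue -o "%i %Q %r"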
-
- 04 May, 2012 1 commit
-
-
Danny Auble authored
developments.
-
- 03 May, 2012 1 commit
-
-
Matthieu Hautreux authored
Here is the way to reproduce it:

    [root@cuzco27 georgioy]# salloc -n64 -N4 --exclusive
    salloc: Granted job allocation 8
    [root@cuzco27 georgioy]# srun -r 0 -n 30 -N 2 sleep 300 &
    [root@cuzco27 georgioy]# srun -r 1 -n 40 -N 3 sleep 300 &
    [root@cuzco27 georgioy]#
    srun: error: slurm_receive_msg: Zero Bytes were transmitted or received
    srun: error: Unable to create job step: Zero Bytes were transmitted or received
-
- 02 May, 2012 1 commit
-
-
Martin Perry authored
cpus in task/cgroup plugin
-
- 27 Apr, 2012 2 commits
-
-
Morris Jette authored
Cray - Add support for zero compute node resource allocation to run a batch script on the front-end node with no ALPS reservation. Useful for pre- or post-processing. NOTE: The partition must be configured with MinNodes=0.
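A minimal slurm.conf sketch of such a partition (the partition name is hypothetical; only the MinNodes=0 setting is the point here):

    # Hypothetical partition for pre-/post-processing on the front-end node;
    # MinNodes=0 permits allocations that reserve zero compute nodes.
    PartitionName=prepost MinNodes=0 State=UP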
-
Danny Auble authored
batch jobs.
-
- 26 Apr, 2012 2 commits
-
-
Morris Jette authored
Sinfo output format "%P" now prints "*" after the default partition even if a field width is specified (previously the "*" was included only if no field width was specified). Added output format "%R" to print the partition name only, without identifying the default partition with "*".
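Illustrative invocations (hedged; the field width and the other format fields are arbitrary examples):

    # "%P" marks the default partition with "*", now even when a field width is given
    $ sinfo -o "%9P %5a %10l %6D %t"
    # "%R" prints the bare partition name with no "*" marker
    $ sinfo -o "%R %5a %10l %6D %t"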
-
Danny Auble authored
-
- 24 Apr, 2012 1 commit
-
-
Morris Jette authored
-
- 23 Apr, 2012 2 commits
-
-
Morris Jette authored
-
Par Andersson authored
-
- 20 Apr, 2012 1 commit
-
-
Danny Auble authored
Previously the code calculated how much memory a PE should have instead of how much memory a node should have.
-
- 18 Apr, 2012 1 commit
-
-
Mark Nelson authored
Mark Nelson.
-
- 17 Apr, 2012 3 commits
-
-
Danny Auble authored
larger than midplane jobs.
-
Bjørn-Helge Mevik authored
Add support for new SchedulerParameters of bf_max_job_user, maximum number of jobs to attempt backfilling per user. Work by Bjørn-Helge Mevik, University of Oslo.
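A hedged slurm.conf sketch (the limit of 20 is only an example value):

    # Attempt backfill scheduling for at most 20 jobs per user in each cycle
    SchedulerParameters=bf_max_job_user=20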
-
Morris Jette authored
Fix sched/wiki2 to support job account name, gres, partition name, wckey, or working directory that contains "#" (a job record separator). Without this patch, parsing would likely stop upon reaching the "#".
-
- 12 Apr, 2012 1 commit
-
-
Danny Auble authored
-
- 10 Apr, 2012 4 commits
-
-
Danny Auble authored
and time limit where it was previously set by an admin.
-
Danny Auble authored
-
Danny Auble authored
slurmdbd accounting and running large numbers of jobs (>50 sec). Job information could be corrupted before it had a chance to reach the DBD.
-
jette authored
-
- 09 Apr, 2012 1 commit
-
-
Danny Auble authored
tasks than possible without overcommit the request would be allowed on more nodes than requested.
-
- 03 Apr, 2012 2 commits
-
-
Morris Jette authored
Add documentation for the mpi/pmi2 plugin. Minor changes to code formatting and logic, but old code should work fine.
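For reference, a hedged launch sketch using the plugin (application name and task count are made up; assumes a PMI2-capable MPI library):

    # Select the pmi2 plugin when launching tasks with srun
    $ srun --mpi=pmi2 -n 16 ./my_mpi_app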
-
Morris Jette authored
Add support for new SchedulerParameters of max_depend_depth defining the maximum number of jobs to test for circular dependencies (i.e. job A waits for job B to start and job B waits for job A to start). Default value is 10 jobs.
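A hedged slurm.conf sketch (10 matches the stated default and is shown only for illustration):

    # Test at most 10 jobs when checking for circular dependencies
    SchedulerParameters=max_depend_depth=10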
-
- 02 Apr, 2012 1 commit
-
-
Morris Jette authored
The problem was conflicting logic in the select/cons_res plugin. Some of the code was trying to get the job the maximum node count in the range, while other logic was trying to minimize spreading out of the job across multiple switches. As you note, this problem only happens when a range of node counts is specified together with the select/cons_res plugin and the topology/tree plugin, and even then it is not easy to reproduce (you included all of the details below).

Quoting Martin.Perry@Bull.com:

> Certain combinations of topology configuration and srun -N option produce
> spurious job rejection with "Requested node configuration is not
> available" with select/cons_res. The following example illustrates the
> problem.
>
> [sulu] (slurm) etc> cat slurm.conf
> ...
> TopologyPlugin=topology/tree
> SelectType=select/cons_res
> SelectTypeParameters=CR_Core
> ...
>
> [sulu] (slurm) etc> cat topology.conf
> SwitchName=s1 Nodes=xna[13-26]
> SwitchName=s2 Nodes=xna[41-45]
> SwitchName=s3 Switches=s[1-2]
>
> [sulu] (slurm) etc> sinfo
> PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
> ...
> jkob up infinite 4 idle xna[14,19-20,41]
> ...
>
> [sulu] (slurm) etc> srun -N 2-4 -n 4 -p jkob hostname
> srun: Force Terminated job 79
> srun: error: Unable to allocate resources: Requested node configuration is
> not available
>
> The problem does not occur with select/linear, or topology/none, or if -N
> is omitted, or for certain other values for -N (for example, -N 4-4 and
> -N 2-3 work ok). The problem seems to be in function _eval_nodes_topo in
> src/plugins/select/cons_res/job_test.c. The srun man page states that when
> -N is used, "the job will be allocated as many nodes as possible within
> the range specified and without delaying the initiation of the job."
> Consistent with this description, the requested number of nodes in the
> above example is 4 (req_nodes=4). However, the code that selects the
> best-fit topology switches appears to make the selection based on the
> minimum required number of nodes (min_nodes=2). It therefore selects
> switch s1. s1 has only 3 nodes from partition jkob. Since this is fewer
> than req_nodes, the job is rejected with the "node configuration" error.
>
> I'm not sure where the code is going wrong. It could be in the
> calculation of the number of needed nodes in function _enough_nodes. Or
> it could be in the code that initializes/updates req_nodes or rem_nodes.
> I don't feel confident that I understand the logic well enough to propose
> a fix without introducing a regression.
>
> Regards,
> Martin
-