- 30 May, 2012 1 commit
-
-
Andy Wettstein authored
In etc/init.d/slurm, move the check for scontrol to after sourcing /etc/sysconfig/slurm. Patch from Andy Wettstein, University of Chicago.
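A rough sketch of the reordering as a plain /bin/sh fragment; the actual init script is more involved, and that /etc/sysconfig/slurm may set variables affecting how scontrol is located is only an assumption here:

    # Source the optional sysconfig file first ...
    [ -f /etc/sysconfig/slurm ] && . /etc/sysconfig/slurm
    # ... and only then check that scontrol can be found.
    command -v scontrol >/dev/null 2>&1 || exit 0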
-
- 29 May, 2012 1 commit
-
-
Don Lipari authored
-
- 25 May, 2012 3 commits
-
-
Morris Jette authored
According to man slurm.conf, the default for NodeAddr is NodeName: "By default, the NodeAddr will be identical in value to NodeName." However, it seems the default is NodeHostname (when that differs from NodeName).

With the following in slurmnodes.conf:

    Nodename=c0-0 NodeHostname=compute-0-0 ...

I get:

    NodeName=c0-0 Arch=x86_64 CoresPerSocket=2 CPUAlloc=0 CPUErr=0 CPUTot=4
    Features=intel,rack0,hugemem Gres=(null)
    *** NodeAddr=compute-0-0 NodeHostName=compute-0-0 ***
    OS=Linux RealMemory=3949 Sockets=2 State=IDLE ThreadsPerCore=1
    TmpDisk=10000 Weight=1027
    BootTime=2012-05-08T15:07:08 SlurmdStartTime=2012-05-25T10:30:10

(This is with 2.4.0-0.pre4.)

(We are planning to use cx-y instead of compute-x-y (the rocks default) on our next cluster, to save some typing.)

--
Regards,
Bjørn-Helge Mevik, dr. scient,
Research Computing Services, University of Oslo
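In short, documented and observed defaults differ for this configuration (a condensed restatement of the report above):

    # slurmnodes.conf line, no explicit NodeAddr:
    Nodename=c0-0 NodeHostname=compute-0-0
    # Documented default:       NodeAddr=c0-0          (identical to NodeName)
    # Observed in 2.4.0-0.pre4: NodeAddr=compute-0-0   (follows NodeHostname)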
-
Rod Schultz authored
This change makes the code consistent with the documentation. Note that "bf_res=" will continue to be recognized for now. Patch from Rod Schultz, Bull.
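For context, this option is passed through SchedulerParameters in slurm.conf; a minimal sketch with illustrative values, assuming the documented long form being aligned with here is bf_resolution:

    # slurm.conf sketch - values are illustrative only
    SchedulerType=sched/backfill
    # Long form assumed from the documentation; the short form "bf_res=" is
    # still recognized for now, per the commit message.
    SchedulerParameters=bf_resolution=300,bf_window=1440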
-
Don Albert authored
I have implemented the changes as you suggested: using a "-dd" option to indicate that the display of the script is wanted, and setting both the "SHOW_DETAIL" and a new "SHOW_DETAIL2" flag. Since "scontrol" can be run interactively as well, I added a new "script" option to indicate that display of both the script and the details is wanted if the job is a batch job.

Here are the man page updates for "man scontrol".

For the "-d, --details" option:

    -d, --details
        Causes the show command to provide additional details where available.
        Repeating the option more than once (e.g., "-dd") will cause the show
        job command to also list the batch script, if the job was a batch job.

For the interactive "details" option:

    details
        Causes the show command to provide additional details where available.
        Job information will include CPUs and NUMA memory allocated on each
        node. Note that on computers with hyperthreading enabled and SLURM
        configured to allocate cores, each listed CPU represents one physical
        core. Each hyperthread on that core can be allocated a separate task,
        so a job's CPU count and task count may differ. See the --cpu_bind and
        --mem_bind option descriptions in the srun man page for more
        information. The details option is currently only supported for the
        show job command. To also list the batch script for batch jobs, in
        addition to the details, use the script option described below instead
        of this option.

And for the new interactive "script" option:

    script
        Causes the show job command to list the batch script for batch jobs in
        addition to the detail information described under the details option
        above.

Attached are the patch file for the changes and a text file with the results of the tests I did to check out the changes. The patches are against SLURM 2.4.0-rc1.

-Don Albert-
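A brief usage sketch of the options described above (the job ID is hypothetical):

    # Command line: repeating -d also lists the batch script of a batch job
    scontrol -dd show job 1234

    # Interactive mode: the "script" option requests the batch script in
    # addition to the details
    scontrol
    scontrol: script
    scontrol: show job 1234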
-
- 24 May, 2012 9 commits
-
-
Danny Auble authored
so acct_policy_job_runnable will always return true.
-
Danny Auble authored
Signed-off-by: Danny Auble <da@schedmd.com>
-
Danny Auble authored
compiling with --enable-debug
-
Jon Bringhurst authored
The purpose of this is so moab scripts and commands (such as 'checkjob') have consistent access to the SUBMITHOST variable.
-
Nathan Yee authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
- 23 May, 2012 11 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-
Morris Jette authored
-
Don Lipari authored
Phil and I independently found a chunk of duplicate code that can be eliminated with no change to the functionality. This is against the 2.4 branch. Don
-
Morris Jette authored
Conflicts: src/slurmctld/reservation.c
-
Danny Auble authored
isn't up at the time the slurmctld starts, when not running the priority/multifactor plugin, and the database is then started up later.
-
Morris Jette authored
-
- 22 May, 2012 2 commits
-
-
Danny Auble authored
-
Danny Auble authored
-
- 21 May, 2012 2 commits
-
-
Morris Jette authored
This is a backport of plugin initialization logic from the select/serial branch that we want in SLURM v2.4 rather than only in v2.5. Note that the plugin logic in v2.5 is different and these changes do not apply there.
-
Morris Jette authored
-
- 17 May, 2012 1 commit
-
-
Morris Jette authored
Previous code could result in an invalid memory reference.
-
- 16 May, 2012 10 commits
-
-
Morris Jette authored
-
Morris Jette authored
-
alejluther authored
Avoid a slurmctld crash when a job has scheduling problems due to resources; set an ADMIN hold on the job instead.
-
Morris Jette authored
Cray - Improve support for zero compute node resource allocations. The partition used can now be configured with no nodes.
-
Morris Jette authored
-
Danny Auble authored
-
Danny Auble authored
Conflicts: NEWS
-
Danny Auble authored
-
Morris Jette authored
-
Danny Auble authored
Conflicts: META
-