1. 21 Jun, 2012 3 commits
  2. 20 Jun, 2012 4 commits
  3. 18 Jun, 2012 3 commits
  4. 15 Jun, 2012 2 commits
  5. 13 Jun, 2012 4 commits
  6. 12 Jun, 2012 3 commits
  7. 11 Jun, 2012 2 commits
  8. 07 Jun, 2012 1 commit
  9. 05 Jun, 2012 4 commits
  10. 04 Jun, 2012 1 commit
    • Document enforcement of job's --mem option · 54b63642
      Rod Schultz authored
      I'd like to add the following disclaimer to the documentation of the --mem option to the salloc/sbatch/srun commands. There is currently similar wording in the slurm.conf file, but I've received a bug report in which the memory limits were exceeded (until the next accounting poll).
      
      NOTE: Enforcement of memory limits currently requires enabling of accounting,
      which samples memory use on a periodic basis (data need not be stored,  just  collected).
      A task may exceed the memory limit until the next periodic accounting sample.
      
      Rod Schultz, Bull
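
      An illustrative sketch of the configuration this note depends on (the job script name my_job.sh is hypothetical): accounting sampling is enabled in slurm.conf, and the memory limit is requested at submission time.

        # slurm.conf: enable periodic accounting sampling so memory limits can be enforced
        JobAcctGatherType=jobacct_gather/linux
        JobAcctGatherFrequency=30      # sample task memory use every 30 seconds

        # request a 2 GB per-node memory limit at submission
        sbatch --mem=2048 my_job.sh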
  11. 01 Jun, 2012 4 commits
  12. 31 May, 2012 2 commits
  13. 30 May, 2012 3 commits
  14. 29 May, 2012 1 commit
  15. 25 May, 2012 3 commits
    • Correct default NodeAddr · fbc0e712
      Morris Jette authored
      According to man slurm.conf, the default for NodeAddr is NodeName:
      
        "By  default, the NodeAddr will be identical in value to NodeName."
      
      However, it seems the default is actually NodeHostname (when that differs from NodeName). With the following in slurmnodes.conf:
      
      Nodename=c0-0 NodeHostname=compute-0-0 ...
      
      I get
      
      NodeName=c0-0 Arch=x86_64 CoresPerSocket=2
         CPUAlloc=0 CPUErr=0 CPUTot=4 Features=intel,rack0,hugemem
         Gres=(null)
      ***
         NodeAddr=compute-0-0 NodeHostName=compute-0-0
      ***
         OS=Linux RealMemory=3949 Sockets=2
         State=IDLE ThreadsPerCore=1 TmpDisk=10000 Weight=1027
         BootTime=2012-05-08T15:07:08 SlurmdStartTime=2012-05-25T10:30:10
      
      (This is with 2.4.0-0.pre4.)
      
      (We are planning to use cx-y instead of compute-x-y (the Rocks default)
      on our next cluster, to save some typing.)
      
      --
      Regards,
      Bjørn-Helge Mevik, dr. scient,
      Research Computing Services, University of Oslo
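
      A hedged workaround sketch (CPU and memory values taken from the scontrol output above): set NodeAddr explicitly so the behavior does not depend on the documented default; which name to use depends on which address is resolvable from the controller.

        # sketch: spell out NodeAddr rather than relying on the default
        NodeName=c0-0 NodeAddr=compute-0-0 NodeHostname=compute-0-0 CPUs=4 RealMemory=3949 ...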
    • Change SchedulerParameters option from "bf_res=" to "bf_resolution=" · 0f590296
      Rod Schultz authored
      This change makes the code consistent with the documentation.
      Note that "bf_res=" will continue to be recognized for now.
      Patch from Rod Schultz, Bull.
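      For illustration (the 600-second value is arbitrary, not taken from the patch), the spelled-out form in slurm.conf would be:

        # slurm.conf sketch: backfill scheduling with a 10-minute time resolution
        SchedulerType=sched/backfill
        SchedulerParameters=bf_resolution=600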
    • Modify scontrol show job to require -dd option to print batch script. · 8ed1b303
      Don Albert authored
      I have implemented the changes as you suggested: using a "-dd" option to indicate that the display of the script is wanted, and setting both the "SHOW_DETAIL" flag and a new "SHOW_DETAIL2" flag.
      
      Since "scontrol" can be run interactively as well,  I added a new "script" option to indicate that display of both the script and the details is wanted if the job is a batch job.
      
      Here are the man page updates for "man scontrol".   For the "-d, --details" option:
      
             -d, --details
                    Causes  the  show command to provide additional details where available.  Repeating the option more than
                    once (e.g., "-dd") will cause the show job command to also list the batch script, if the job was a batch
                    job.
      
      For the interactive "details" option:
      
             details
                    Causes  the  show  command  to provide additional details where available.  Job information will include
                    CPUs and NUMA memory allocated on each node.  Note that on computers  with  hyperthreading  enabled  and
                    SLURM  configured  to allocate cores, each listed CPU represents one physical core.  Each hyperthread on
                    that core can be allocated a separate task, so a job's CPU count and task count  may  differ.   See  the
                    --cpu_bind  and  --mem_bind  option  descriptions  in  srun man pages for more information.  The details
                    option is currently only supported for the show job command. To also list the  batch  script  for  batch
                    jobs, in addition to the details, use the script option described below instead of this option.
      
      And for the new interactive "script" option:
      
             script Causes the show job command to list the batch script for batch jobs in addition to the detail
                    information described under the details option above.
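
      A brief usage sketch of the two forms described above (job ID 1234 is hypothetical, and the interactive sequence assumes the new option is entered at the scontrol prompt like the existing ones):

        # command line: repeat -d so "show job" also prints the batch script
        scontrol -dd show job 1234

        # interactive equivalent (assumed: the option is entered before the show command)
        scontrol
        scontrol: script
        scontrol: show job 1234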
      
      Attached are the patch file for the changes and a text file with the results of the tests I ran to verify them. The patches are against SLURM 2.4.0-rc1.
      
              -Don Albert-