This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 1.4.0-pre9
=============================

* Changes in SLURM 1.4.0-pre8
=============================
 -- In order to create a new partition using the scontrol command, use
    the "create" option rather than "update" (which will only operate
    upon partitions that already exist).
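    A minimal illustration (the partition name, node list, and time limit
    below are invented):

```shell
# Hypothetical invocation: create a brand-new partition. "update" would
# be rejected here because the partition does not exist yet.
scontrol create PartitionName=debug Nodes=tux[0-15] MaxTime=60 State=UP
```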
 -- Added environment variable SLURM_RESTART_COUNT to batch jobs to
    indicate the count of job restarts made.
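    A batch script can branch on this variable; a minimal sketch (the
    checkpoint message is invented, and SLURM sets the variable only on
    requeued batch jobs):

```shell
#!/bin/sh
# Sketch of batch-script logic keyed on SLURM_RESTART_COUNT (set by SLURM
# on requeued batch jobs; unset on a job's first run).
report_restart() {
    if [ "${SLURM_RESTART_COUNT:-0}" -gt 0 ]; then
        echo "restart ${SLURM_RESTART_COUNT}: resuming from checkpoint"
    else
        echo "first run: starting from scratch"
    fi
}
report_restart
```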
 -- Added sacctmgr command "show config".
 -- Added the scancel option --nodelist to cancel any jobs running on a
    given list of nodes.
 -- Add partition-specific DefaultTime (default time limit for jobs; if not
    specified, the partition's MaxTime is used). Patch from Par
    Andersson, National Supercomputer Centre, Sweden.
 -- Add support for the scontrol command to change the Weight
    associated with nodes. Patch from Krishnakumar Ravi[KK] (HP).
 -- Add DebugFlag configuration option of "CPU_Bind" for detailed CPU
    binding information to be logged.
 -- Fix some significant bugs in task binding logic (possible infinite loops
    and memory corruption).
 -- Add new node state flag of NODE_STATE_MAINT indicating the node is in
    a reservation of type MAINT.
 -- Modified task/affinity plugin to automatically bind tasks to sockets,
    cores, or threads as appropriate based upon resource allocation and
    task count. User can override with srun's --cpu_bind option.
 -- Fix bug in backfill logic for select/cons_res plugin which resulted in
    the error "cons_res:_rm_job_from_res: node_state mis-count".
 -- Add logic to bind a batch job to the resources allocated to that job.
 -- Add configuration parameter MpiParams for (future) OpenMPI port 
    management. Add resv_port_cnt and resv_ports fields to the job step 
    data structures. Add environment variable SLURM_STEP_RESV_PORTS to
    show what ports are reserved for a job step.
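    In slurm.conf this might look like the following (the port range is an
    arbitrary illustration):

```
# slurm.conf fragment: reserve a block of ports for OpenMPI use; the
# range shown is arbitrary.
MpiParams=ports=12000-12999
```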
 -- Add support for SchedulerParameters=interval=<sec> to control the time
    interval between executions of the backfill scheduler logic.
 -- Preserve record of last job ID in use even when doing a cold-start unless
    there is no job state file or there is a change in its format (which only 
    happens when there is a change in SLURM's major or minor version number: 
    v1.3 -> v1.4).
 -- Added new configuration parameter KillOnBadExit to kill a job step as soon
    as any task of a job step exits with a non-zero exit code. Patch based
    on work from Eric Lin, Bull.
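    A slurm.conf fragment enabling the new behavior:

```
# slurm.conf fragment: kill the whole job step as soon as any task exits
# with a non-zero exit code.
KillOnBadExit=1
```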
 -- Add spank plugin calls for use by salloc and sbatch command, see 
    "man spank" for details.
 -- NOTE: Cold-start (without preserving state) required for upgrade from
    version 1.4.0-pre7.
* Changes in SLURM 1.4.0-pre7
=============================
 -- Bug fix for preemption with select/cons_res when there are no idle nodes.
 -- Bug fix for use of srun options --exclusive and --cpus-per-task together
    for job step resource allocation (tracking of CPUs in use was incorrect).
 -- Added the srun option --preserve-env to pass the current values of 
    environment variables SLURM_NNODES and SLURM_NPROCS through to the 
    executable, rather than computing them from commandline parameters.
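    Sketch of intended usage (node/task counts and the program name are
    invented):

```shell
# Within a 4-node, 16-task allocation, run a single task that still sees
# SLURM_NNODES=4 and SLURM_NPROCS=16 rather than recomputed values.
srun -N 1 -n 1 --preserve-env ./report_env
```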
 -- For select/cons_res or sched/gang only: Validate a job's resource 
    allocation socket and core count on each allocated node. If the node's
    configuration has been changed, then abort the job.
 -- For select/cons_res or sched/gang only: Disable updating a node's 
    processor count if FastSchedule=0. Administrators must set a valid
    processor count although the memory and disk space configuration can
    be loaded from the compute node when it starts.
 -- Add configure option "--disable-iso8601" to disable SLURM use of ISO 8601
    time format at the time of SLURM build. Default output for all commands
    is now ISO 8601 (yyyy-mm-ddThh:mm:ss).
 -- Add support for scontrol to explicitly power a node up or down using the
    configured SuspendProg and ResumeProg programs.
 -- Fix bug in select/cons_res logic for tracking the number of allocated
    CPUs on a node when a partition's Shared value is YES or FORCE.
 -- Added configure options "--enable-cray-xt" and "--with-apbasil=PATH" for
    eventual support of Cray-XT systems.
* Changes in SLURM 1.4.0-pre6
=============================
 -- Fix job preemption when sched/gang and select/linear are configured with
    non-sharing partitions.
 -- In select/cons_res ensure that required nodes have available resources.

* Changes in SLURM 1.4.0-pre5
=============================
 -- Correction in setting of SLURM_CPU_BIND environment variable.
 -- Rebuild slurmctld's job select_jobinfo->node_bitmap on restart/reconfigure
    of the daemon rather than restoring the bitmap since the nodes in a system
    can change (be added or removed).
 -- Add configuration option "--with-cpusetdir=PATH" for non-standard 
    locations.
 -- Get new multi-core data structures working on BlueGene systems.
 -- Modify PMI_Get_clique_ranks() to return an array of integers rather 
    than a char * to satisfy PMI standard. Correct logic in 
    PMI_Get_clique_size() for when srun --overcommit option is used.
 -- Fix bug in select/cons_res that allocated a job all of the processors on
    a node when the --exclusive option is specified as a job submit option.
 -- Add NUMA cpu_bind support to the task affinity plugin. Binds tasks to
    a set of CPUs that belong to a NUMA locality domain with the appropriate
    --cpu_bind option (ldoms, rank_ldom, map_ldom, and mask_ldom), see
    "man srun" for more information.
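    Hypothetical invocations of the new bindings (task counts, the mapping
    list, and the executable name are invented):

```shell
# Bind each task to the CPUs of its NUMA locality domain.
srun -n 8 --cpu_bind=ldoms ./a.out
# Explicitly map tasks onto locality domains 0,0,1,1.
srun -n 4 --cpu_bind=map_ldom:0,0,1,1 ./a.out
```
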
* Changes in SLURM 1.4.0-pre4
=============================
 -- For task/affinity, force jobs to use a particular task binding by setting
    the TaskPluginParam configuration parameter rather than slurmd's
    SLURM_ENFORCED_CPU_BIND environment variable.
 -- Enable full preemption of jobs by partition with select/cons_res 
    (cons_res_preempt.patch from Chris Holmes, HP).
 -- Add configuration parameter DebugFlags to provide detailed logging for
    specific subsystems (steps and triggers so far).
 -- srun's --no-kill option is passed to slurmctld so that a job step is
    killed even if the node where srun executes goes down (unless the
    --no-kill option is used; the previous termination logic would fail if
    srun was not responding).
 -- Transfer a job step's core bitmap from the slurmctld to the slurmd
    within the job step credential.
 -- Add cpu_bind, cpu_bind_type, mem_bind and mem_bind_type to job allocation
    request and job_details structure in slurmctld. Add support to --cpu_bind
    and --mem_bind options from salloc and sbatch commands.
* Changes in SLURM 1.4.0-pre3
=============================
 -- Internal changes: CPUs per node changed from 32-bit to 16-bit size.
    Node count fields changed from 16-bit to 32-bit size in some structures.
 -- Remove select plugin functions select_p_get_extra_jobinfo(),
    select_p_step_begin() and select_p_step_fini().
 -- Remove the following slurmctld job structure fields: num_cpu_groups,
    cpus_per_node, cpu_count_reps, alloc_lps_cnt, alloc_lps, and used_lps.
    Use equivalent fields in new "select_job" structure, which is filled
    in by the select plugins.
 -- Modify mem_per_task in job step request from 16-bit to 32-bit size.
    Use new "select_job" structure for the job step's memory management.
 -- Add core_bitmap_job to slurmctld's job step structure to identify
    which specific cores are allocated to the step.
 -- Add new configuration option OverTimeLimit to permit jobs to exceed 
    their (soft) time limit by a configurable amount. Backfill scheduling
    will be based upon the soft time limit.
 -- Remove select_g_get_job_cores(). That data is now within the slurmctld's
    job structure.

* Changes in SLURM 1.4.0-pre2
=============================
 -- Remove srun's --ctrl-comm-ifhn-addr option (for PMI/MPICH2). It is no
    longer needed.
 -- Modify power save mode so that nodes can be powered off when idle. See
    https://computing.llnl.gov/linux/slurm/power_save.html or 
    "man slurm.conf" (SuspendProgram and related parameters) for more 
    information.
 -- Added configuration parameter PrologSlurmctld, which can be used to boot
    nodes into a particular state for each job. See "man slurm.conf" for 
    details.
 -- Add configuration parameter CompleteTime to control how long to wait for 
    a job's completion before allocating already released resources to pending
    jobs. This can be used to reduce fragmentation of resources. See
    "man slurm.conf" for details.
 -- Make default CryptoType=crypto/munge. OpenSSL is now completely optional.
 -- Make default AuthType=auth/munge rather than auth/none.
 -- Change output format of "sinfo -R" from "%35R %N" to "%50R %N".
* Changes in SLURM 1.4.0-pre1
=============================
 -- Save/restore a job's task_distribution option on slurmctld restart.
    NOTE: SLURM must be cold-started on conversion from version 1.3.x.
 -- Remove task_mem from job step credential (only job_mem is used now).
 -- Remove --task-mem and --job-mem options from salloc, sbatch and srun
    (use --mem-per-cpu or --mem instead).
 -- Remove DefMemPerTask from slurm.conf (use DefMemPerCPU or DefMemPerNode
    instead).
 -- Modify slurm_step_launch API call. Move launch host from function argument
    to element in the data structure slurm_step_launch_params_t, which is
    used as a function argument.
 -- Add state_reason_string to job state with optional details about why
    a job is pending.
 -- Make "scontrol show node" output match scontrol input for some fields
    ("Cores" changed to "CoresPerSocket", etc.).
 -- Add support for a new node state "FUTURE" in slurm.conf. These node records
    are created in SLURM tables for future use without a reboot of the SLURM
    daemons, but are not reported by any SLURM commands or APIs.

* Changes in SLURM 1.3.14
=========================
 -- Fix bug in squeue command with sort on job name ("-S j" option) for jobs
    that lack a name. Previously generated an invalid memory reference.
 -- Permit the TaskProlog to write to the job's standard output by writing
    a line containing the prefix "print " to its standard output.
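    A TaskProlog honoring this protocol can be sketched as plain shell
    (the exported variable name is invented; "export NAME=value" lines set
    variables in the task's environment):

```shell
#!/bin/sh
# Sketch of a TaskProlog script: a line written with the prefix "print "
# is forwarded to the job's standard output; an "export NAME=value" line
# sets a variable in the task's environment. DEMO_SCRATCH is hypothetical.
task_prolog() {
    echo "print task prolog running on host $(hostname)"
    echo "export DEMO_SCRATCH=/tmp/scratch"
}
task_prolog
```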
 -- Fix for making the slurmdbd agent thread start up correctly when 
    stopped and then started again.
 -- Prevent the Linux out of memory killer from killing the slurmd or
    slurmstepd daemons. Patch from Hongjia Cao, NUDT.
 -- Add squeue option to report jobs by account (-U or --account). Patch from
    Par Andersson, National Supercomputer Centre, Sweden.
 -- Add -DNUMA_VERSION1_COMPATIBILITY to Makefile CFLAGS for proper behavior
    when building with NUMA version 2 APIs.
 -- BLUEGENE - SLURM works on BGP systems.
 -- BLUEGENE - SLURM handles HTC blocks.
 -- BLUEGENE - Added option DenyPassthrough in the bluegene.conf.  Can be set
    to any combination of X,Y,Z to not allow passthroughs when running in 
    dynamic layout mode.
 -- Fix bug in logic to remove a job's dependency, could result in abort.
 -- Add new error message to sched/wiki and sched/wiki2 (Maui and Moab) for
    STARTJOB request: "TASKLIST includes non-responsive nodes".
 -- Fix bug in select/linear when used with sched/gang that can result in a 
    job's required or excluded node specification being ignored.
 -- Add logic to handle message connect timeouts (timed-out.patch from 
    Chuck Clouston, Bull).
 -- BLUEGENE - CFLAGS=-m64 is no longer required in configure.
 -- Update python-hostlist code from Kent Engström (NSC) to v1.5
    - Add hostgrep utility to search for lines matching a hostlist.
    - Make each "-" on the command line count as one hostlist argument.
      If multiple hostslists are given on stdin they are combined to a
      union hostlist before being used in the way requested by the
      options.

* Changes in SLURM 1.3.13
=========================
 -- Added ability for slurmdbd to archive and purge step and/or job records.
 -- Added DefaultQOS option to slurmdbd.conf; when clusters are added, their
    default QOS is set to this value if none is given on the sacctmgr line.
 -- Added configure option --enable-sun-const for Sun Constellation system with
    3D torus interconnect. Supports proper smap and sview displays for 3-D
    topology. Node names are automatically put into Hilbert curve order given
    a one-line nodelist definition in slurm.conf (e.g. NodeNames=sun[000x533]). 
 -- Fixed bug in parsing time for sacct and sreport to pick the correct year if
    none is specified.
 -- Provide better scheduling with overlapping partitions (when a job cannot
    be scheduled due to insufficient resources, reserve the specific nodes
    associated with that partition rather than blocking all partitions with
    any overlapping nodes).
 -- Correct logic to log in a job's stderr that it was "CANCELLED DUE TO 
    NODE FAILURE" rather than just "CANCELLED".
 -- Fix to crypto/openssl plugin that could result in job launch requests
    being spoofed through the use of an improperly formed credential. This bug 
    could permit a user to launch tasks on compute nodes not allocated for 
    their use, but will NOT permit them to run tasks as another user. For more 
    information see http://www.ocert.org/advisories/ocert-2008-016.html
* Changes in SLURM 1.3.12
=========================
 -- Added support for Workload Characteristic Key (WCKey) in accounting.  The
    WCkey is something that can be used in accounting to group associations
    together across clusters or within clusters that are not related.  Use 
    the --wckey option in srun, sbatch or salloc or set the SLURM_WCKEY env
    var to have this set. Use sreport with the wckey option to view reports.
    THIS CHANGES THE RPC LEVEL IN THE SLURMDBD.  YOU MUST UPGRADE YOUR SLURMDBD
    BEFORE YOU UPGRADE THE REST OF YOUR CLUSTERS.  THE NEW SLURMDBD WILL TALK 
    TO OLDER VERSIONS OF SLURM FINE.
 -- Added configuration parameter BatchStartTimeout to control how long to 
    allow for a batch job prolog and environment loading (for Moab) to run.
    Previously if job startup took too long, a batch job could be cancelled
    before fully starting with a SlurmctldLog message of "Master node lost 
    JobId=#, killing it".  See "man slurm.conf" for details.
 -- For a job step, add support for srun's --nodelist and --exclusive options
    to be used together.
 -- On slurmstepd failure, set node state to DRAIN rather than DOWN.
 -- Fix bug in select/cons_res that would incorrectly satisfy a task's
    --cpus-per-task specification by allocating the task CPUs on more than
    one node.
 -- Add support for hostlist expressions containing up to two numeric 
    expressions (e.g. "rack[0-15]_blade[0-41]").
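    Not SLURM's parser, but a simple loop shows the node set such a
    two-dimensional expression denotes (scaled down to a handful of names):

```shell
#!/bin/sh
# Hand-expand a scaled-down "rack[0-1]_blade[0-2]" to show the node set a
# two-dimensional hostlist expression covers.
gen_hosts() {
    for r in 0 1; do
        for b in 0 1 2; do
            echo "rack${r}_blade${b}"
        done
    done
}
gen_hosts
```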
 -- Fix bug in slurmd message forwarding which left file open in the case of
    some communication failures.
 -- Correction to sinfo node state information on BlueGene systems. DRAIN
    state was replaced with ALLOC or IDLE under some situations.
 -- For sched/wiki2 (Moab), strip quotes embedded within job names from the
    name reported.
 -- Fix bug in jobcomp/script that could cause the slurmctld daemon to exit
    upon reconfiguration ("scontrol reconfig" or SIGHUP).
 -- Fix to sinfo, don't print a node's memory size or tmp_disk space with 
    suffix of "K" or "M" (thousands or millions of megabytes).
 -- Improve efficiency of scheduling jobs into partitions which do not overlap.
 -- Fixed sreport user top report to display only the number of users
    specified by the limit instead of all users.
* Changes in SLURM 1.3.11
=========================
 -- Bluegene/P support added (minimally tested, but builds correctly).
 -- Fix infinite loop when using accounting_storage/mysql plugin either from
    the slurmctld or slurmdbd daemon.
 -- Added more thread safety for assoc_mgr in the controller.
 -- For sched/wiki2 (Moab), permit clearing of a job's dependencies with the 
    JOB_MODIFY option "DEPEND=0".
 -- Do not set a running or pending job's EndTime when changing its time
    limit.
 -- Fix bug in use of "include" parameter within the plugstack.conf file.
 -- Fix bug in the parsing of negative numeric values in configuration files.
 -- Propagate --cpus-per-task parameter from salloc or sbatch input line to
    the SLURM_CPUS_PER_TASK environment variable in the spawned shell for 
    srun to use.
 -- Add support for srun --cpus-per-task=0. This can be used to spawn tasks
    without allocating resources for the job step from the job's allocation
    when running multiple job steps with the --exclusive option.
 -- Remove registration messages from saved messages when bringing down the
    cluster. Otherwise a deadlock occurs if the wrong cluster name is given.
 -- Correction to build for srun debugger (export symbols).
 -- sacct now properly displays allocations made with salloc that have only
    one step.
 -- Altered sacctmgr and sreport to consider the complete option name before
    applying it. Previously only the first significant characters were
    considered.
 -- BLUEGENE - in overlap mode marking a block to error state will now end
    jobs on overlapping blocks and free them.
 -- Give a batch job 20 minutes to start before considering it missing and 
    killing it (long delay could result from slurmd being paged out). Changed
    the log message from "Master node lost JobId=%u, killing it" to "Batch 
    JobId=%u missing from master node, killing it".
 -- Avoid "Invalid node id" error when a job step within an existing job 
    allocation specifies a node count which is less than the node count
    allocated in order to satisfy the task count specification (e.g. 
    "srun -n16 -N1 hostname" on allocation of 16 one-CPU nodes).
 -- For sched/wiki2 (Moab) disable changing a job's name after it has begun
    execution.
* Changes in SLURM 1.3.10
=========================
 -- Fix several bugs in the hostlist functions:
    - Fix hostset_insert_range() to do proper accounting of hl->nhosts (count).
    - Avoid assertion failure when calling hostset_create(NULL).
    - Fix return type of hostlist and hostset string functions from size_t to
      ssize_t.
    - Add check for NULL return from hostlist_create().
    - Rewrite of hostrange_hn_within(); avoids erroneously reporting "tst0"
      in the hostlist.
 -- Modify squeue to accept "--nodes=<hostlist>" rather than 
    "--node=<node_name>" and report all jobs with any allocated nodes from set
    of nodes specified. From Par Andersson, National Supercomputer Centre,
    Sweden.
 -- Fix bug preventing use of TotalView debugger with TaskProlog configured
    or srun's --task-prolog option.
 -- Improve reliability of batch job requeue logic in the event that the slurmd
    daemon is temporarily non-responsive (for longer than the configured
    MessageTimeout value but less than the SlurmdTimeout value).
 -- In sched/wiki2 (Moab) report a job's MAXNODES (maximum number of permitted
    nodes).
 -- Fixed SLURM_TASKS_PER_NODE to live up to its name on an allocation.
    It now contains the number of tasks per node instead of the number of
    CPUs per node. This is only for a resource allocation. Job steps already
    have the environment variable set correctly.
 -- Configuration parameter PropagateResourceLimits has new option of "NONE".
 -- User's --propagate options take precedence over PropagateResourceLimits
    configuration parameter in both srun and sbatch commands.
 -- When Moab is in use (salloc or sbatch is executed with the --get-user-env
    option to be more specific), load the user's default resource limits rather
    than propagating the Moab daemon's limits.
 -- Fix bug in slurmctld restart logic for recovery of batch jobs that are
    initiated as a job step rather than an independent job (used for LSF).
 -- Fix bug that can cause slurmctld restart to fail, bug introduced in SLURM
    version 1.3.9. From Eygene Ryabinkin, Kurchatov Institute, Russia.
 -- Permit slurmd configuration parameters to be set to new values from 
    previously unset values.
* Changes in SLURM 1.3.9
========================
 -- Fix jobs being cancelled by ctrl-C to have correct cancelled state in 
    accounting.
 -- Slurmdbd will only cache user data, making for faster start up.
 -- Improved support for job steps on FRONT_END systems.
 -- Added support to dump and load association information in the controller
    on start up if slurmdbd is unresponsive.
 -- BLUEGENE - Added support for sched/backfill plugin.
 -- sched/backfill modified to initiate multiple jobs per cycle.
 -- Increase buffer size in srun to hold task list expressions. Critical 
    for jobs with 16k tasks or more.
 -- Added support for eligible jobs and downed nodes to be sent to accounting
    from the controller the first time accounting is turned on.
 -- Correct srun logic to support --tasks-per-node option without task count.
 -- Logic in place to handle multiple versions of RPCs within the slurmdbd. 
    THE SLURMDBD MUST BE UPGRADED TO THIS VERSION BEFORE UPGRADING THE 
    SLURMCTLD OR THEY WILL NOT TALK.  
    Older versions of the slurmctld will continue to talk to the new slurmdbd.
 -- Add support for new job dependency type: singleton. Only one job from a 
    given user with a given name will execute with this dependency type.
    From Matthieu Hautreux, CEA.
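    Hypothetical use of the new dependency type (the job name and script
    are invented):

```shell
# Only one "nightly-sync" job per user runs at a time; the second
# submission stays pending until the first completes.
sbatch --job-name=nightly-sync --dependency=singleton sync.sh
sbatch --job-name=nightly-sync --dependency=singleton sync.sh
```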
 -- Updated contribs/python/hostlist to version 1.3: See "CHANGES" file in
    that directory for details. From Kent Engström, NSC.
 -- Add SLURM_JOB_NAME environment variable for jobs submitted using sbatch.
    In order to prevent the job steps from all having the same name as the 
    batch job that spawned them, the SLURM_JOB_NAME environment variable is
    ignored when setting the name of a job step from within an existing 
    resource allocation.
 -- For use with sched/wiki2 (Moab only), set salloc's default shell based
    upon the user that the job runs as rather than the user submitting the
    job (user root).
 -- Fix to sched/backfill when job specifies no time limit and the partition
    time limit is INFINITE.
 -- Validate a job's constraints (node features) at job submit or modification 
    time. Major re-write of resource allocation logic to support more complex
    job feature requests.
 -- For sched/backfill, correct logic to support job constraint specification
    (e.g. node features).
 -- Correct power save logic to avoid trying to wake DOWN node. From Matthieu
    Hautreux, CEA.
 -- Cancel a job step when one of its nodes goes DOWN unless the job step's
    --no-kill option is set; by default the step is killed (previously the
    job step remained running even without the --no-kill option).
 -- Fix bug in logic to remove whitespace from plugstack.conf.
 -- Add new configuration parameter SallocDefaultCommand to control what
    shell salloc launches by default.
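    An illustrative slurm.conf setting (the command chosen here is
    arbitrary):

```
# slurm.conf fragment: have salloc launch a plain Bourne shell when the
# user supplies no command.
SallocDefaultCommand="/bin/sh"
```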
 -- When enforcing PrivateData configuration parameter, failures return 
    "Access/permission denied" rather than "Invalid user id".
 -- From sbatch and srun, if the --dependency option is specified then set 
    the environment variable SLURM_JOB_DEPENDENCY to the same value.
 -- In plugin jobcomp/filetxt, use ISO8601 formats for time by default (e.g.
    YYYY-MM-DDTHH:MM:SS rather than MM/DD-HH:MM:SS). This restores the default
    behavior from SLURM version 1.2. Change the value of USE_ISO8601 in
    src/plugins/jobcomp/filetxt/jobcomp_filetxt.c to revert the behavior.
 -- Add support for configuration option of ReturnToService=2, which will
    return a DOWN node to use if the node was previously set DOWN for any
    reason.
 -- Removed Gold accounting plugin. This plugin was to be used for accounting
    but has not been maintained and is no longer needed. If using this
    plugin, please contact slurm-dev@llnl.gov.
 -- When not enforcing associations but running accounting, if a user
    submits a job to an account that does not have an association on the
    cluster, the account will be changed to the default account to help
    avoid trash in the accounting system. If the user's default account
    does not have an association on the cluster, the requested account
    will be used.
 -- Add configuration parameter "--have-front-end" to define HAVE_FRONT_END 
    in config.h and run slurmd only on a front end (suitable only for SLURM
    development and testing).
* Changes in SLURM 1.3.8
========================
 -- Added PrivateData flags for Users, Usage, and Accounts to Accounting. 
    If using slurmdbd, set in the slurmdbd.conf file. Otherwise set in the 
    slurm.conf file.  See "man slurm.conf" or "man slurmdbd.conf" for details.
 -- Reduce frequency of resending job kill RPCs. Helpful in the event of 
    network problems or down nodes.
 -- Fix memory leak caused under heavy load when running with select/cons_res
    plus sched/backfill.
 -- For salloc, if no local command is specified, execute the user's default
    shell.
 -- BLUEGENE - patch to make sure that when starting a job, blocks required
    to be freed are checked to ensure no job is running on them. If one is
    found, the new job will be requeued. No job will be lost.
 -- BLUEGENE - Set MPI environment variables from salloc.
 -- BLUEGENE - Fix threading issue for overlap mode.
 -- Reject batch scripts containing DOS linebreaks.
 -- BLUEGENE - Added wait for block boot to salloc.
* Changes in SLURM 1.3.7
========================
 -- Add jobid/stepid to MESSAGE_TASK_EXIT to address race condition when 
    a job step is cancelled, another is started immediately (before the 
    first one completely terminates) and ports are reused. 
    NOTE: This change requires that SLURM be updated on all nodes of the
    cluster at the same time. There will be no impact upon currently running
    jobs (they will ignore the jobid/stepid at the end of the message).
 -- Added Python module to process hostlists as used by SLURM. See
    contribs/python/hostlist. Supplied by Kent Engstrom, National
    Supercomputer Centre, Sweden.
 -- Report task termination due to signal (restores functionality present
    in earlier SLURM versions).
 -- Remove sbatch test for script size being no larger than 64k bytes.
    The current limit is 4GB.
 -- Disable FastSchedule=0 use with SchedulerType=sched/gang. Node 
    configuration must be specified in slurm.conf for gang scheduling now.
 -- For sched/wiki and sched/wiki2 (Maui or Moab scheduler) disable the ability
    of a non-root user to change a job's comment field (used by Maui/Moab for
    storing scheduler state information).
 -- For sched/wiki (Maui) add pending job's future start time to the state
    info reported to Maui.
 -- Improve reliability of job requeue logic on node failure.
 -- Add logic to ping non-responsive nodes even if SlurmdTimeout=0. This permits
    the node to be returned to use when it starts responding rather than 
    remaining in a non-usable state.
 -- Honor HealthCheckInterval values that are smaller than SlurmdTimeout.
 -- For non-responding nodes, log them all on a single line with a hostlist 
    expression rather than one line per node. Frequency of log messages is 
    dependent upon SlurmctldDebug value from 300 seconds at SlurmctldDebug<=3
    to 1 second at SlurmctldDebug>=5.
 -- If a DOWN node is resumed, set its state to IDLE & NOT_RESPONDING and 
    ping the node immediately to clear the NOT_RESPONDING flag.
 -- Log that a job's time limit is reached, but don't send SIGXCPU.
 -- Fixed gid to be set in slurmstepd when run by root.
 -- Changed getpwent to getpwent_r in the slurmctld and slurmd.
 -- Increase timeout on most slurmdbd communications to 60 secs (time for
    substantial database updates).
 -- Treat srun's --begin= option with a value of "now" plus a time unit
    lacking a numeric component (e.g. "--begin=now+hours") as a failure.
 -- Eliminate a memory leak associated with notifying srun of allocated
    nodes having failed.
 -- Add scontrol shutdown option of "slurmctld" to just shutdown the 
    slurmctld daemon and leave the slurmd daemons running.
 -- Do not require JobCredentialPrivateKey or JobCredentialPublicCertificate
    in slurm.conf if using CryptoType=crypto/munge.
 -- Remove SPANK support from sbatch. 
* Changes in SLURM 1.3.6
========================
 -- Add new function to get information for a single job rather than always
    getting information for all jobs. Improved performance of some commands. 
    NOTE: This new RPC means that the slurmctld daemons should be updated
    before or at the same time as the compute nodes in order to process it.
 -- In salloc, sbatch, and srun replace --task-mem options with --mem-per-cpu
    (--task-mem will continue to be accepted for now, but is not documented).
    Replace DefMemPerTask and MaxMemPerTask with DefMemPerCPU, DefMemPerNode,
    MaxMemPerCPU and MaxMemPerNode in slurm.conf (old options still accepted
    for now, but mapped to "PerCPU" parameters and not documented). Allocate
    a job's memory at the same time that processors are allocated based
    upon the --mem or --mem-per-cpu option rather than when job steps are
    initiated.
 -- Altered QOS in accounting to be a list of admin-defined states; an
    account or user can now have multiple QOS's. They need to be defined
    using 'sacctmgr add qos'. They are no longer an enum. If none are
    defined, Normal will be the QOS for everything. Right now this is only
    for use with MOAB and does nothing outside of that.
 -- Added spank_get_item support for field S_STEP_CPUS_PER_TASK.
 -- Make corrections in spank_get_item for field S_JOB_NCPUS, previously 
    reported task count rather than CPU count.
 -- Convert configuration parameter PrivateData from on/off flag to have
    separate flags for job, partition, and node data. See "man slurm.conf"
    for details.
 -- Fix bug, failed to load DisableRootJobs configuration parameter.
 -- Altered sacctmgr to always return a non-zero exit code on error and send 
    error messages to stderr.
* Changes in SLURM 1.3.5
========================
 -- Fix processing of auth/munge authentication key for messages originating
    in slurmdbd and sent to slurmctld.
 -- If srun is allocating resources (not within sbatch or salloc) and MaxWait
    is configured to a non-zero value then wait indefinitely for the resource
    allocation rather than aborting the request after MaxWait time.
 -- For Moab only: add logic to reap defunct "su" processes that are spawned by
    slurmd to load user's environment variables.
 -- Added more support for "dumping" account information to a flat file and
    reading it in again, to protect data in case something bad happens to
    the database.
 -- Sacct will now report account names for job steps.
 -- For AIX: Remove MP_POERESTART_ENV environment variable, disabling 
    poerestart command. User must explicitly set MP_POERESTART_ENV before 
    executing poerestart.
 -- Put back notification that a job has been allocated resources when it was
    pending.
* Changes in SLURM 1.3.4
========================
 -- Some updates to man page formatting from Gennaro Oliva, ICAR.
 -- Smarter loading of plugins (doesn't stat every file in the plugin dir)
 -- In sched/backfill avoid trying to schedule jobs on DOWN or DRAINED nodes.
 -- Forward exit_code from step completion to slurmdbd.
 -- Add retry logic to socket connect() call from client which can fail 
    when the slurmctld is under heavy load.
 -- Fixed bug so that associations are added correctly.
 -- Added support for associations for user root.
 -- For Moab, sbatch --get-user-env option processed by slurmd daemon
    rather than the sbatch command itself to permit faster response
    for Moab.
 -- IMPORTANT FIX: This only affects use of select/cons_res when allocating
    resources by core or socket, not by CPU (default for SelectTypeParameter). 
    We are not saving a pending job's task distribution, so after restarting
    slurmctld, select/cons_res was over-allocating resources based upon an 
    invalid task distribution value. Since we can't save the value without 
    changing the state save file format, we'll just set it to the default 
    value for now and save it in Slurm v1.4. This may result in a slight 
    variation on how sockets and cores are allocated to jobs, but at least 
    resources will not be over-allocated.
 -- Correct logic in accumulating resources by node weight when more than 
    one job can run per node (select/cons_res or partition shared=yes|force).
 -- slurm.spec file updated to avoid creating empty RPMs. RPM now *must* be
    built with correct specification of which packages to build or not build.
    See the top of the slurm.spec file for information about how to control
    package building specification.
 -- Set SLURM_JOB_CPUS_PER_NODE for jobs allocated using the srun command.
    It was already set for salloc and sbatch commands.
 -- Fix to handle suspended jobs that were cancelled in accounting.
 -- BLUEGENE - fix to only include bps given in a name from the bluegene.conf 
    file.
 -- For select/cons_res: Fix record-keeping for core allocations when more 
    than one partition uses a node or there is more than one socket per node.
 -- In output for "scontrol show job" change "StartTime" header to "EligibleTime"
    for pending jobs to accurately describe what is reported.
 -- Add more slurmdbd.conf parameters: ArchiveScript, ArchiveAge, JobPurge, and
    StepPurge (not fully implemented yet).
 -- Add slurm.conf parameter EnforcePartLimits to reject jobs which exceed a
    partition's size and/or time limits rather than leaving them queued for a
    later change in the partition's limits. NOTE: Not reported by
    "scontrol show config" to avoid changing RPCs. It will be reported in 
    SLURM version 1.4.
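    An illustrative slurm.conf line (value assumed; see "man slurm.conf"):
      EnforcePartLimits=YES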
 -- Added idea of coordinator to accounting. A coordinator can add 
    associations between existing users and the account or any sub-account 
    they coordinate. They can also add/remove other coordinators for those 
    accounts.
 -- Add support for Hostname and NodeHostname in slurm.conf being fully 
    qualified domain names (by Vijay Ramasubramanian, University of Maryland). 
    For more information see "man slurm.conf".
* Changes in SLURM 1.3.3
========================
 -- Add mpi_openmpi plugin to the main SLURM RPM.
 -- Prevent invalid memory reference when using srun's --cpu_bind=cores option
    (slurm-1.3.2-1.cea1.patch from Matthieu Hautreux, CEA).
 -- Task affinity plugin modified to support a particular cpu bind type: cores,
    sockets, threads, or none. Accomplished by setting an environment variable
    SLURM_ENFORCE_CPU_TYPE (slurm-1.3.2-1.cea2.patch from Matthieu Hautreux, 
    CEA).
 -- For BlueGene only, log "Prolog failure" once per job not once per node.
 -- Reopen slurmctld log file after reconfigure or SIGHUP is received.
 -- In TaskPlugin=task/affinity, fix possible infinite loop for slurmd.
 -- Accounting rollup works for mysql plugin.  Automatic rollup when using 
    slurmdbd.
 -- Copied job stat logic out of sacct into sstat; in the future sacct 
    --stat will be deprecated.
 -- Correct sbatch processing of --nice option with negative values.
 -- Add squeue formatted print option %Q to print a job's integer priority.
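    For example (the other format fields shown are standard squeue options):
      squeue -o "%.8i %.10Q %.9P %.8j"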
 -- In sched/backfill, fix bug that was changing a pending job's shared value
    to zero (possibly changing a pending job's resource requirements from a 
    processor on some node to the full node).
* Changes in SLURM 1.3.2
========================
 -- Get --ntasks-per-node option working for sbatch command.
 -- BLUEGENE: Added logic to give back a best block on overlapped mode 
    in test_only mode
 -- BLUEGENE: Updated debug info and man pages for better help with the 
    numpsets option and to fail correctly with bad image request for building
    blocks.
 -- In sched/wiki and sched/wiki2 properly support Slurm license consumption
    (job state reported as "Hold" when required licenses are not available).
 -- In sched/wiki2 JobWillRun command, don't return an error code if the job(s)
    can not be started at that time. Just return an error message (from 
    Doug Wightman, CRI).
 -- Fix bug if sched/wiki or sched/wiki2 are configured and no job comment is 
    set.
 -- scontrol modified to report a partition's "DisableRootJobs" value.
 -- Fix bug in setting host address for PMI communications (mpich2 only).
 -- Fix for memory size accounting on some architectures.
 -- In sbatch and salloc, change --dependency's one letter option from "-d"
    to "-P" (continue to accept "-d", but change the documentation).
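    For example (job id 1234 is hypothetical; see the sbatch man page for 
    the dependency syntax):
      sbatch -P afterok:1234 job.sh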
 -- Only check that task_epilog and task_prolog are runnable by the job's
    user, not as root.
 -- In sbatch, if specifying an alternate directory (--workdir/-D), then
    input, output and error files are in that directory rather than the 
    directory from which the command is executed.
 -- NOTE: Fully operational with Moab version 5.2.3+. Change SUBMITCMD in
    moab.cfg to be the location of sbatch rather than srun. Also set 
    HostFormat=2 in SLURM's wiki.conf for improved performance.
 -- NOTE: We needed to change an RPC from version 1.3.1. You must upgrade 
    all nodes in a cluster from v1.3.1 to v1.3.2 at the same time.
 -- Postgres plugin will work for job accounting, not for association 
    management yet.
 -- For srun/sbatch --get-user-env option (Moab use only) look for "env"
    command in both /bin and /usr/sbin (for Suse Linux).
 -- Fix bug in processing job feature requests with node counts (could fail
    to schedule a job if some nodes have no associated features).
 -- Added nodecnt and gid to jobcomp/script
 -- Ensure that nodes selected in the "srun --will-run" command or the 
    equivalent in sched/wiki2 are in the job's partition.
 -- BLUEGENE - changed partition Min|MaxNodes to represent c-node counts
    instead of base partitions.
 -- In sched/gang only, prevent possible invalid memory reference when 
    slurmctld is reconfigured, e.g. "scontrol reconfig".
 -- In select/linear only, prevent invalid memory reference in log message when
    nodes are added to slurm.conf and then "scontrol reconfig" is executed. 

* Changes in SLURM 1.3.1
========================
 -- Correct logic for processing batch job's memory limit enforcement.
 -- Fix bug that was setting a job's requeue value on any update of the 
    job using the "scontrol update" command. The invalid value of an 
    updated job prevents its recovery when slurmctld restarts.
 -- Add support for cluster-wide consumable resources. See "Licenses"
    parameter in slurm.conf man page and "--licenses" option in salloc, 
    sbatch and srun man pages.
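    For example (the license name and counts are hypothetical):
      In slurm.conf:  Licenses=matlab:10
      At submit time: sbatch --licenses=matlab:2 job.sh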
 -- Major changes in select/cons_res to support FastSchedule=2 with more
    resources configured than actually exist (useful for testing purposes).
 -- Modify srun --test-only response to include expected initiation time 
    for a job as well as the nodes to be allocated and processor count
    (for use by Moab).
 -- Correct sched/backfill to properly honor job dependencies.
 -- Correct select/cons_res logic to allocate CPUs properly if there is
    more than one thread per core (previously failed to allocate all cores).
 -- Correct select/linear logic in shared job count (was off by 1).
 -- Add support for job preemption based upon partition priority (in sched/gang,
    preempt.patch from Chris Holmes, HP).
 -- Added much better logic for mysql accounting.  
 -- Finished all basic functionality for sacctmgr.
 -- Added load file logic to sacctmgr for setting up a cluster in one step.
 -- NOTE: We needed to change an RPC from version 1.3.0. You must upgrade 
    all nodes in a cluster from v1.3.0 to v1.3.1 at the same time.
 -- NOTE: Work is currently underway to improve placement of jobs for gang
    scheduling and preemption.
 -- NOTE: Work is underway to provide additional tools for reporting 
    accounting information.
* Changes in SLURM 1.3.0
========================
 -- In sched/wiki2, add processor count to JOBWILLRUN response.
 -- Add event trigger for node entering DRAINED state.
 -- Build properly without OpenSSL installed (OpenSSL is recommended, but not 
    required).
 -- Added slurmdbd, and modified the accounting_storage plugin to talk to it,
    allowing multiple slurm systems to securely store and gather information
    not only about jobs, but also the system. See the accounting web page 
    for more information.
* Changes in SLURM 1.3.0-pre11
==============================
 -- Restructure the sbcast RPC to take advantage of larger buffers available
    in Slurm v1.3 RPCs.
 -- Fix several memory leaks.
 -- In scontrol, show a job's Requeue value; permit changes to Requeue and
    Comment.
 -- In slurmctld job record, add QOS (quality of service) value for accounting
    purposes with Maui and Moab.
 -- Log to a job's stderr when it is being cancelled explicitly or upon reaching
    its time limit.
 -- Only permit a job's account to be changed while that job is PENDING.
 -- Fix race condition in job suspend/resume (slurmd.sus_res.patch from HP).
* Changes in SLURM 1.3.0-pre10
==============================
 -- Add support for node-specific "arch" (architecture) and "os" (operating 
    system) fields. These fields are set based upon values reported by the
    slurmd daemon on each compute node using SLURM_ARCH and SLURM_OS environment 
    variables (if set, otherwise via the uname function) and are intended to
    support real-time changes in the operating system. These values are reported
    by "scontrol show node" plus the sched/wiki and sched/wiki2 plugins for Maui
    and Moab respectively.
 -- In sched/wiki and sched/wiki2: add HostFormat and HidePartitionJobs to 
    "scontrol show config" SCHEDULER_CONF output.
 -- In sched/wiki2: accept hostname expression as input for GETNODES command.
 -- Add JobRequeue configuration parameter and --requeue option to the sbatch
    command.
 -- Add HealthCheckInterval and HealthCheckProgram configuration parameters.
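    An illustrative configuration (the interval is in seconds and the script
    path is hypothetical):
      HealthCheckInterval=300
      HealthCheckProgram=/usr/sbin/slurm_health_check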
 -- Add SlurmDbdAddr, SlurmDbdAuthInfo and SlurmDbdPort configuration parameters.
 -- Modify select/linear to achieve better load leveling with gang scheduler.
 -- Develop the sched/gang plugin to support select/linear and
    select/cons_res. If sched/gang is enabled and Shared=FORCE is configured
    for a partition, this plugin will gang-schedule or "timeslice" jobs that
    share common resources within the partition. Note that resources that are
    shared across partitions are not gang-scheduled.
 -- Add EpilogMsgTime configuration parameter. See "man slurm.conf" for details.
 -- Increase default MaxJobCount configuration parameter from 2000 to 5000. 
 -- Move all database common files from src/common to new lib in src/database.
 -- Move sacct to src/accounting; added sacctmgr for scontrol-like operations 
    on the accounting infrastructure.
 -- Basic functions of sacctmgr in place to make for administration of 
    accounting.
 -- Moved clusteracct_storage plugin to accounting_storage plugin;
    jobacct_storage is still its own plugin for now.
 -- Added template for slurm PHP extension.
 -- Add infrastructure to support allocation of cluster-wide licenses to jobs.
    Full support will be added some time after version 1.3.0 is released.
 -- In sched/wiki2 with select/bluegene, add support for WILLRUN command
    to accept multiple jobs with start time specifications.
* Changes in SLURM 1.3.0-pre9
=============================
 -- Add spank support to sbatch. Note that spank_local_user() will be called 
    with step_layout=NULL and gid=SLURM_BATCH_SCRIPT and spank_fini() will 
    be called immediately afterwards.
 -- Made configure use mysql_config to find location of mysql database install.
    Removed bluegene-specific information from the general database tables.
 -- Re-write sched/backfill to utilize new will-run logic in the select 
    plugins. It now supports select/cons_res and all job options (required
    nodes, excluded nodes, contiguous, etc.).
 -- Modify scheduling logic to better support overlapping partitions.
 -- Add --task-mem option and remove --job-mem option from srun, salloc, and 
    sbatch commands. Enforce step memory limit, if specified and there is
    no job memory limit specified (--mem). Also see DefMemPerTask and
    MaxMemPerTask in "man slurm.conf". Enforcement is dependent upon job
    accounting being enabled with a non-zero value for JobAcctGatherFrequency.
 -- Change default node tmp_disk size to zero (for diskless nodes).
* Changes in SLURM 1.3.0-pre8
=============================
 -- Modify how strings are packed in the RPCs. Maximum string size 
    increased from 64KB (16-bit size field) to 4GB (32-bit size field).
 -- Fix bug that prevented time value of "INFINITE" from being processed.
 -- Added new srun/sbatch option "--open-mode" to control how output/error 
    files are opened ("t" for truncate, "a" for append).
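    For example, to append rather than truncate (file name is hypothetical):
      sbatch --open-mode=a --output=job.log job.sh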
 -- Added checkpoint/xlch plugin for use with XLCH (Hongjia Cao, NUDT).
 -- Added srun option --checkpoint-path for use with XLCH (Hongjia Cao, NUDT).
 -- Added new srun/salloc/sbatch option "--acctg-freq" for user control over 
    accounting data collection polling interval.
 -- In sched/wiki2 add support for hostlist expression use in GETNODES command
    with HostFormat=2 in the wiki.conf file.
 -- Added new scontrol option "setdebug" that can change the slurmctld daemons
    debug level at any time (Hongjia Cao, NUDT).
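    For example (level names are assumed to follow the slurmctld debug
    levels, e.g. info, debug, debug2):
      scontrol setdebug debug2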
 -- Track total suspend time for jobs and steps for accounting purposes.
 -- Add version information to partition state file.
 -- Added 'will-run' functionality to all of the select plugins (bluegene,
    linear, and cons_res) to return node list and time job can start based 
    on other jobs running.
 -- Major restructuring of node selection logic. select/linear now supports
    partition max_share parameter and tries to match like size jobs on the 
    same nodes to improve gang scheduling performance. Also supports treating 
    memory as a consumable resource for job preemption and gang scheduling if 
    SelectTypeParameter=CR_Memory in slurm.conf.
 -- BLUEGENE: Reorganized bluegene plugin for maintainability's sake.
 -- Major restructuring of data structures in select/cons_res.
 -- Support job, node and partition names of arbitrary size.
 -- Fix bug that caused slurmd to hang when using select/linear with
    task/affinity.

* Changes in SLURM 1.3.0-pre7
=============================
 -- Fix a bug in the processing of srun's --exclusive option for a job step.

* Changes in SLURM 1.3.0-pre6
=============================
 -- Add support for configurable number of jobs to share resources using the 
    partition Shared parameter in slurm.conf (e.g. "Shared=FORCE:3" for two 
    jobs to share the resources). From Chris Holmes, HP.
 -- Made salloc use the API instead of local code for message handling.
* Changes in SLURM 1.3.0-pre5
=============================
 -- Add select_g_reconfigure() function to note changes in slurmctld 
    configuration that can impact node scheduling.
 -- scontrol to set/get partition's MaxTime and job's Timelimit in minutes plus
    new formats: min:sec, hr:min:sec, days-hr:min:sec, days-hr, etc.
 -- scontrol "notify" command added to send message to stdout of srun for 
    specified job id.
 -- For BlueGene, make alpha part of node location specification be case insensitive.
 -- Report scheduler-plugin specific configuration information with the 
    "scontrol show configuration" command on the SCHEDULER_CONF line. This
    information is not found in the "slurm.conf" file, but in a scheduler 
    plugin specific configuration file (e.g. "wiki.conf").
 -- sview partition information reported now includes partition priority.
 -- Expand job dependency specification to support concurrent execution, 
    testing of job exit status and multiple job IDs.

* Changes in SLURM 1.3.0-pre4
=============================
 -- Job step launch in srun is now done from the slurm APIs; all further
    modifications to job step launch should be done there.
 -- Add new partition configuration parameter Priority. Add job count to 
    Shared parameter.
 -- Add new configuration parameters DefMemPerTask, MaxMemPerTask, and 
    SchedulerTimeSlice.
 -- In sched/wiki2, return REJMESSAGE with details on why a job was 
    requeued (e.g. what node failed).

* Changes in SLURM 1.3.0-pre3
=============================
 -- Remove slaunch command
 -- Added srun option "--checkpoint=time" for job step to automatically be 
    checkpointed on a periodic basis.
 -- Change behavior of "scancel -s KILL <jobid>" to send SIGKILL to all job
    steps rather than cancelling the job. This now matches the behavior of
    all other signals. "scancel <jobid>" still cancels the job and all steps.
 -- Add support for new job step options --exclusive and --immediate. Permit
    job steps to be queued when resources are not available within an existing 
    job allocation to dedicate the resources to the job step. Useful for
    executing simultaneous job steps. Provides resource management both at 
    the level of jobs and job steps.
 -- Add support for feature count in job constraints, for example
    srun --nodes=16 --constraint=graphics*4 ...
    Based upon work by Kumar Krishna (HP, India).
 -- Add multi-core options to salloc and sbatch commands (sbatch.patch and
    cleanup.patch from Chris Holmes, HP).
 -- In select/cons_res properly release resources allocated to job being 
    suspended (rmbreak.patch, from Chris Holmes, HP).
 -- Removed database and jobacct plugins, replaced with jobacct_storage 
    and jobacct_gather to provide easier hooks for further expansion of 
    job accounting.

* Changes in SLURM 1.3.0-pre2
=============================
 -- Added new srun option --pty to start job with pseudo terminal attached 
    to task 0 (all other tasks have I/O discarded).
 -- Disable user specifying jobid when sched/wiki2 configured (needed for 
    Moab releases until early 2007).
 -- Report command, args and working directory for batch jobs with 
    "scontrol show job".
* Changes in SLURM 1.3.0-pre1
=============================
 -- !!! SRUN CHANGES !!!
    The srun options -A/--allocate, -b/--batch, and -a/--attach have been
    removed!  That functionality is now available in the separate commands
    salloc, sbatch, and sattach, respectively.
 -- Add new node state FAILING plus trigger for when node enters that state.
 -- Add new configuration parameter "PrivateData". This can be used to 
    prevent a user from seeing jobs or job steps belonging to other users.
 -- Added configuration parameters for node power save mode: ResumeProgram,
    ResumeRate, SuspendExcNodes, SuspendExcParts, SuspendProgram and 
    SuspendRate.
 -- Slurmctld maintains the IP address (rather than hostname) for srun 
    communications. This fixes some possible network routing issues.
 -- Added global database plugin. Job accounting and job completion are the 
    first to use it. Follow the documentation to add more uses of the plugin.
 -- Removed no-longer-needed jobacct/common/common_slurmctld.c since that is
    replaced by the database plugin.
 -- Added new configuration parameter: CryptoType.
    Moved existing digital signature logic into new plugin: crypto/openssl.
    Added new support for crypto/munge (available with GPL license).
* Changes in SLURM 1.2.36
=========================
 -- For spank_get_item(S_JOB_ARGV) for batch job with script input via STDIN,
    set argc value to 1 (rather than 2, argv[0] still set to path of generated
    script).
 -- sacct will now more properly display allocations made with salloc with only 
    one step.
* Changes in SLURM 1.2.35
=========================
 -- Permit SPANK plugins to dynamically register options at runtime based upon
    configuration or other runtime checks.
 -- Add "include" keyword to SPANK plugstack.conf file to optionally include
    other configuration files or directories of configuration files.
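    For example, in plugstack.conf (the directory path is hypothetical):
      include /etc/slurm/plugstack.conf.d/*.conf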
 -- Srun will now wait indefinitely for a resource allocation to be made. It 
    used to abort after two minutes.
* Changes in SLURM 1.2.34
=========================
 -- Permit the cancellation of a job that is in the process of being 
    requeued.
 -- Ignore the show_flag when getting job, step, node or partition information
    for user root.
 -- Convert calls to getpwnam, getpwuid, getgrnam, and getgrgid to their 
    thread-safe "_r" variants. While no
    failures have been observed, a race condition would in the worst case
    permit a user access to a partition not normally allowed due to the
    AllowGroup specification or the wrong user identified in an accounting
    record. The job would NOT be run as the wrong user.
 -- For PMI only (MPICH2/MVAPICH2), base the address to send messages to (the 
    srun) upon the address from which slurmd gets the task launch request 
    rather than the "hostname" where srun executes.
 -- Make test for StateSaveLocation directory more comprehensive.
 -- For jobcomp/script plugin, PROCS environment variable is now the actual
    count of allocated processors rather than the count of processes to 
    be started.
* Changes in SLURM 1.2.33
=========================
 -- Cancelled or Failed jobs will now report their job and step IDs on exit.
 -- Add SPANK items available to get: SLURM_VERSION, SLURM_VERSION_MAJOR,
    SLURM_VERSION_MINOR and SLURM_VERSION_MICRO.
 -- Fixed handling of SIGPIPE in srun to abort the job.
 -- Fix bug introduced to MVAPICH plugin preventing use of TotalView debugger.
 -- Modify slurmctld to get srun/salloc network address based upon the incoming
    message rather than hostname set by the user command (backport of logic in
    SLURM v1.3).
* Changes in SLURM 1.2.32
=========================
 -- LSF only: Enable scancel of job in RootOnly partition by the job's owner.
 -- Add support for sbatch --distribution and --network options.
 -- Correct pending job's wait reason to "Priority" rather than "Resources" if
    required resources are being held in reserve for a higher priority job.
 -- In sched/wiki2 (Moab) report a node's state as "Drained" rather than 
    "Draining" if it has no allocated work (An undocumented Moab wiki option, 
    see CRI ticket #2394).
 -- Log to job's output when it is cancelled or reaches its time limit (ported
    from existing code in slurm v1.3).
 -- Add support in salloc and sbatch commands for --network option.
 -- Add support for user environment variables that include '\n' (e.g. 
    bash functions).
 -- Partial rewrite of mpi/mvapich plugin for improved scalability.
* Changes in SLURM 1.2.31
=========================
 -- For Moab only: If GetEnvTimeout=0 in slurm.conf then do not run "su" to get
    the user's environment, only use the cache file.
 -- For sched/wiki2 (Moab), treat the lack of a wiki.conf file or the lack 
    of a configured AuthKey as a fatal error (lacks effective security).
 -- For sched/wiki and sched/wiki2 (Maui or Moab) report a node's state as 
    Busy rather than Running when allocated if SelectType=select/linear. Moab
    was trying to schedule jobs on nodes that were already allocated to jobs
    that were hidden from it via the HidePartitionJobs in Slurm's wiki.conf.
 -- In select/cons_res improve the resource selection when a job has specified
    a processor count along with a maximum node count.
 -- For an srun command with --ntasks-per-node option and *no* --ntasks count,
    spawn a task count equal to the number of nodes selected multiplied by the 
    --ntasks-per-node value.
 -- In jobcomp/script: Set TZ if set in slurmctld's environment.
 -- In srun with --verbose option properly format CPU allocation information 
    logged for clusters with 1000+ nodes and 10+ CPUs per node.
 -- Process a job's --mail_type=end option on any type of job termination, not
    just normal completion (e.g. all failure modes too).
* Changes in SLURM 1.2.30
=========================
 -- Fix for gold not to print out 720 error messages since they are
    potentially harmful.
 -- In sched/wiki2 (Moab), permit changes to a pending job's required features:
    CMD=CHANGEJOB ARG=<jobid> RFEATURES=<features>
 -- Fix to not abort when the node selection plugin fails to load; issue a
    fatal error instead.
 -- In sched/wiki and sched/wiki2 DO NOT report a job's state as "Hold" if its
    dependencies have not been satisfied. This reverses a change made in SLURM
    version 1.2.29 (which was requested by Cluster Resources, but placed jobs 
    in a HELD state indefinitely).
* Changes in SLURM 1.2.29
=========================
 -- Modified global configuration option "DisableRootJobs" from number (0 or 1)
    to boolean (YES or NO) to match partition parameter.
 -- Set "DisableRootJobs" for a partition to match the global parameters value 
    for newly created partitions.
 -- In sched/wiki and sched/wiki2 report a node's updated features if changed
    after startup using "scontrol update ..." command.
 -- In sched/wiki and sched/wiki2 report a job's state as "Hold" if its 
    dependencies have not been satisfied.
 -- In sched/wiki and sched/wiki2 do not process incoming requests until
    slurm configuration is completely loaded.
 -- In sched/wiki and sched/wiki2 do not report a job's node count after it 
    has completed (slurm decrements the allocated node count when the nodes
    transition from completing to idle state).