-- Fix handling of requeued jobs with steps that are still finishing.
-- Cleaner copy of PriorityWeightTRES; this also fixes a core dump that could
   otherwise occur when trying to free it.
-- Add environment variables SLURM_ARRAY_TASK_MAX, SLURM_ARRAY_TASK_MIN,
SLURM_ARRAY_TASK_STEP for job arrays.
-- Fix srun to use the NoInAddrAny TopologyParam option.
-- Change QOS flag name from PartitionQOS to OverPartQOS to better describe
   its function.
-- Make complete_batch_script RPC work with message aggregation.
-- Do not count slurmctld threads waiting in a "throttle" lock against the
daemon's thread limit as they are not contending for resources.
-- Modify slurmctld outgoing RPC logic to support more parallel tasks (up to
85 RPCs and 256 pthreads; the old logic supported up to 21 RPCs and 256
threads). This change can dramatically improve performance for RPCs
operating on small node counts.
-- Increase total backfill scheduler run time in stats_info_response_msg data
structure from 32 to 64 bits in order to prevent overflow.
-- Add NoInAddrAny option to TopologyParam in slurm.conf, which allows binding
   to the interface returned by gethostname instead of any address on the
   node, avoiding RSIP issues on Cray systems. This is likely useful on other
   systems as well.
-- Fix memory leak in Slurm::load_jobs perl api call.
-- Added --noconvert option to sacct, sstat, squeue and sinfo which allows
values to be displayed in their original unit types (e.g. 2048M won't be
converted to 2G).
-- Fix spelling of node_rescrs to node_resrcs in Perl API.
-- Fix node state race condition, UNKNOWN->IDLE without configuration info.
-- Cray: Disable LDAP references from slurmstepd on job launch for improved
   scalability.
-- Remove srun "read header error" due to application termination race
condition.
-- Optimize sacct queries with additional db indexes.
-- Add SLURM_TOPO_LEN env variable for scontrol show topology.
-- Add free_mem to node information.
-- Fix abort of batch launch if prolog is running, wait for prolog instead.
-- Fix case where job would get the wrong cpu count when using
--ntasks-per-core and --cpus-per-task together.
-- Add TRESBillingWeights to partitions in slurm.conf which allows taking into
consideration any TRES Type when calculating the usage of a job.
-- Add PriorityWeightTRES slurm.conf option to be able to configure priority
factors for TRES types.
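   A note on the SLURM_ARRAY_TASK_MAX/MIN/STEP environment variables added
   above: a minimal sketch, in plain C, of an array task reading them. The
   variable names come from the entry above; the build line and sbatch
   invocation in the comments are illustrative assumptions.

    /* array_env.c - print the job array environment set by Slurm (sketch).
     * Build: cc -o array_env array_env.c
     * Assumed to run inside an array task, e.g. under "sbatch --array=0-10:2". */
    #include <stdio.h>
    #include <stdlib.h>

    static const char *get(const char *name)
    {
        const char *val = getenv(name);
        return val ? val : "(unset)";
    }

    int main(void)
    {
        /* SLURM_ARRAY_TASK_ID predates this change; the other three are the
         * variables added by the entry above. */
        printf("task id:   %s\n", get("SLURM_ARRAY_TASK_ID"));
        printf("task min:  %s\n", get("SLURM_ARRAY_TASK_MIN"));
        printf("task max:  %s\n", get("SLURM_ARRAY_TASK_MAX"));
        printf("task step: %s\n", get("SLURM_ARRAY_TASK_STEP"));
        return 0;
    }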
* Changes in Slurm 15.08.0pre6
==============================
-- Add scontrol options to view and modify layouts tables.
-- Add MsgAggregationParams which controls a reverse tree to the slurmctld
which can be used to aggregate messages to the slurmctld into a single
message to reduce communication to the slurmctld. Currently only epilog
complete messages and node registration messages use this logic.
-- Add sacct and squeue options to print trackable resources.
-- Add sacctmgr option to display trackable resources.
-- If an salloc or srun command is executed on a "front-end" configuration,
that job will be assigned a slurmd shepherd daemon on the same host as used
   to execute the command when possible, rather than a slurmd daemon on an
arbitrary front-end node.
-- Add srun --accel-bind option to control how tasks are bound to GPUs and NIC
Generic RESources (GRES).
-- gres/nic plugin modified to set OMPI_MCA_btl_openib_if_include environment
   variable based upon allocated devices (usable with OpenMPI and Mellanox).
-- Make it so info options for srun/salloc/sbatch print with just 1 -v instead
of 4.
-- Add "no_backup_scheduling" SchedulerParameter to prevent jobs from being
scheduled when the backup takes over. Jobs can be submitted, modified and
cancelled while the backup is in control.
-- Enable native Slurm backup controller to reside on an external Cray node
when the "no_backup_scheduling" SchedulerParameter is used.
-- Removed TICKET_BASED fairshare. Consider using the FAIR_TREE algorithm.
-- Disable advanced reservation "REPLACE" option on IBM Bluegene systems.
-- Add support for controlling the distribution of tasks across cores (in
   addition to the existing support for nodes and sockets), e.g. "block",
   "cyclic" or "fcyclic" task distribution at three levels in the hardware
   rather than two.
-- Create db index on <cluster>_assoc_table.acct. Deleting accounts that didn't
have jobs in the job table could take a long time.
-- The performance of Profiling with HDF5 is improved. In addition, internal
structures are changed to make it easier to add new profile types,
particularly energy sensors. sh5util will continue to work with either
format.
-- Add partition information to sshare output if the --partition option
is specified on the sshare command line.
-- Add sreport -T/--tres option to identify Trackable RESources (TRES) to
report.
-- Display job in sacct when single step's cpus are different from the job
allocation.
-- Add association usage information to "scontrol show cache" command output.
-- MPI/MVAPICH plugin now requires Munge for authentication.
-- job_submit/lua: Add default_qos fields. Add job record qos. Add partition
record allow_qos and qos_char fields.
* Changes in Slurm 15.08.0pre5
==============================
-- Add jobcomp/elasticsearch plugin. Libcurl is required for build. Configure
the server as follows: "JobCompLoc=http://YOUR_ELASTICSEARCH_SERVER:9200".
-- Scancel logic largely re-written to better support job arrays.
-- Added a slurm.conf parameter PrologEpilogTimeout to control how long
   prolog and epilog scripts can run.
-- Added TRES (Trackable resources) to track Mem, GRES, license, etc
utilization.
-- Add re-entrant versions of glibc time functions (e.g. localtime) to Slurm
in order to eliminate rare deadlock of slurmstepd fork and exec calls.
-- Constrain kernel memory (if available) in cgroups.
-- Add PrologFlags option of "Contain" to create a proctrack container at
job resource allocation time.
-- Disable the OOM Killer in slurmd and slurmstepd's memory cgroup when using
MemSpecLimit.
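   On the re-entrant time function entry above: localtime() returns a pointer
   to shared static state, which is what makes it problematic around
   slurmstepd's fork and exec calls; the re-entrant form fills a caller-owned
   buffer instead. A minimal sketch using plain POSIX localtime_r(), not
   Slurm's internal wrappers:

    /* time_reentrant.c - illustrate the re-entrant pattern the entry above
     * refers to: localtime_r() fills a caller-owned struct tm instead of the
     * static buffer shared by localtime(). Plain POSIX, not Slurm code. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        struct tm tm_buf;                 /* caller-owned, no shared state */
        char text[64];

        if (localtime_r(&now, &tm_buf) == NULL)
            return 1;
        strftime(text, sizeof(text), "%Y-%m-%dT%H:%M:%S", &tm_buf);
        printf("%s\n", text);
        return 0;
    }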
* Changes in Slurm 15.08.0pre4
==============================
-- Burst_buffer/cray - Convert logic to use new commands/API names (e.g.
"dws_setup" rather than "bbs_setup").
-- Remove the MinJobAge size limitation. It can now exceed 65533 as it
is represented using an unsigned integer.
-- Verify that all plugin version numbers are identical to the component
attempting to load them. Without this verification, the plugin can reference
Slurm functions in the caller which differ (e.g. the underlying function's
arguments could have changed between Slurm versions).
NOTE: All plugins (except SPANK) must be built against the identical
version of Slurm in order to be used by any Slurm command or daemon. This
should eliminate some very difficult to diagnose problems due to use of old
plugins.
-- Increase the MAX_PACK_MEM_LEN define (to 1GB) to avoid PMI2 failure when
   fencing with a large number of ranks.
-- Requests by normal user to reset a job priority (even to lower it) will
result in an error saying to change the job's nice value instead.
-- SPANK naming changes: For environment variables set using the
spank_job_control_setenv() function, the values were available in the
slurm_spank_job_prolog() and slurm_spank_job_epilog() functions using
getenv where the name was given a prefix of "SPANK_". That prefix has
been removed for consistency with the environment variables available in
the Prolog and Epilog scripts.
-- Add "TopologyParam" configuration parameter. Optional value of "dragonfly"
is supported.
-- Optimize resource allocation for systems with dragonfly networks.
-- Add "--thread-spec" option to salloc, sbatch and srun commands. This is
the count of threads reserved for system use per node.
-- job_submit/lua: Enable reading and writing job environment variables.
For example: if (job_desc.environment.LANGUAGE == "en_US") then ...
-- Added two new APIs slurm_job_cpus_allocated_str_on_node_id()
   and slurm_job_cpus_allocated_str_on_node() to print the IDs of the CPUs
   allocated to a job.
-- Specialized memory (a node's MemSpecLimit configuration parameter) is not
available for allocation to jobs.
-- Modify scontrol update job to allow jobid specification without
the = sign. 'scontrol update job=123 ...' and 'scontrol update job 123 ...'
are both valid syntax.
-- Archive a month at a time when there are lots of records to archive.
-- Introduce new sbatch option '--kill-on-invalid-dep=yes|no' which allows
users to specify which behavior they want if a job dependency is not
satisfied.
-- Add Slurmdb::qos_get() interface to perl api.
-- If a job fails to start set the requeue reason to be:
job requeued in held state.
-- Implemented a new MPI key,value PMIX_RING() exchange algorithm as
an alternative to PMI2.
-- Remove possible deadlocks in the slurmctld when the slurmdbd is busy
archiving/purging.
-- Add DB_ARCHIVE debug flag for filtering out debug messages in the slurmdbd
when the slurmdbd is archiving/purging.
-- Fix some power_save mode issues: Parsing of SuspendTime in slurm.conf was
bad, powered down nodes would get set non-responding if there was an
in-flight message, and permit nodes to be powered down from any state.
-- Initialize variables in consumable resource plugin to prevent core dump.
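   On the SPANK naming change above: a minimal sketch of a SPANK prolog
   callback reading a value set elsewhere with spank_job_control_setenv().
   The plugin name prefix_demo and the variable MYPLUGIN_MODE are
   hypothetical, and the plugin is assumed to be built as a shared object
   against the Slurm SPANK header.

    /* spank_prefix.c - sketch of the lookup change described above.
     * MYPLUGIN_MODE is a hypothetical variable set elsewhere with
     * spank_job_control_setenv(sp, "MYPLUGIN_MODE", "fast", 1). */
    #include <stdlib.h>
    #include <slurm/spank.h>

    SPANK_PLUGIN(prefix_demo, 1);

    int slurm_spank_job_prolog(spank_t sp, int ac, char **av)
    {
        (void) sp; (void) ac; (void) av;   /* unused in this sketch */

        /* Before 15.08 this value had to be read as
         * getenv("SPANK_MYPLUGIN_MODE"); the prefix has been dropped for
         * consistency with the Prolog and Epilog scripts. */
        const char *mode = getenv("MYPLUGIN_MODE");

        if (mode)
            slurm_info("prefix_demo: MYPLUGIN_MODE=%s", mode);
        return 0;
    }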
* Changes in Slurm 15.08.0pre3
==============================
-- CRAY - addition of acct_gather_energy/cray plugin.
-- Add job credential to "Run Prolog" RPC used with a configuration of
PrologFlags=alloc. This allows the Prolog to be passed identification of
GPUs allocated to the job.
-- Add SLURM_JOB_CONSTRAINTS to environment variables available to the Prolog.
-- Added "--mail=stage_out" option to job submission commands to notify user
when burst buffer state out is complete.
-- Require a "Reason" when using scontrol to set a node state to DOWN.
-- Mail notifications on job BEGIN, END and FAIL now apply to a job array as a
whole rather than generating individual email messages for each task in the
job array.
-- task/affinity - Fix memory binding to NUMA with cpusets.
-- Display job's estimated NodeCount based off of partition's configured
resources rather than the whole system's.
-- Add AuthInfo option of "cred_expire=#" to specify the lifetime of a job
step credential. The default value was changed from 1200 to 120 seconds.
-- Set the delay time for job requeue to the job credential lifetime (120
   seconds by default). This ensures that the prolog runs on every node when a
   job is requeued. (This change will slow down launch of re-queued jobs.)
-- Remove srun --max-launch-time option. The option has not been functional
since Slurm version 2.0.
-- Add sockets and cores to TaskPluginParams' autobind option.
-- Added LaunchParameters configuration parameter. Have srun command test
locally for the executable file if LaunchParameters=test_exec or the
environment variable SLURM_TEST_EXEC is set. Without this an invalid
command will generate one error message per task launched.
-- Fix the slurm /etc/init.d script to return 0 upon stopping the
daemons and return 1 in case of failure.
-- Add the ability for a compute node to be allocated to multiple jobs, but
restricted to a single user. Added "--exclusive=user" option to salloc,
sbatch and srun commands. Added "owner" field to node record, visible using
the scontrol and sview commands. Added new partition configuration parameter
"ExclusiveUser=yes|no".
* Changes in Slurm 15.08.0pre2
==============================
-- Add the environment variables SLURM_JOB_ACCOUNT, SLURM_JOB_QOS
and SLURM_JOB_RESERVATION in the batch/srun jobs.
-- Properly enforce partition Shared=YES option. Previously oversubscribing
resources required gang scheduling to be configured.
-- Enable per-partition gang scheduling resource resolution (e.g. the partition
can have SelectTypeParameters=CR_CORE, while the global value is CR_SOCKET).
-- Make it so a newer version of a slurmstepd can talk to an older srun.
allocation. Nodes could have been added while waiting for an allocation.
-- Expanded --cpu-freq parameters to include min-max:governor specifications.
--cpu-freq now supported on salloc and sbatch.
-- Add support for optimized job allocations with respect to SGI Hypercube
topology.
NOTE: Only supported with select/linear plugin.
NOTE: The program contribs/sgi/netloc_to_topology can be used to build
Slurm's topology.conf file.
-- Remove 64k validation of incoming RPC nodelist size. Validated at 64MB
when unpacking.
-- In slurmstepd() add the user's primary group if it is not part of the
   groups sent from the client.
-- Added BurstBuffer field to advanced reservations.
-- For advanced reservation, replace flag "License_only" with flag "Any_Nodes".
   It can be used to indicate that an advanced reservation's resources
   (licenses and/or burst buffers) can be used with any compute nodes.
-- Allow users to specify the srun --resv-ports as 0 in which case no ports
will be reserved. The default behaviour is to allocate one port per task.
-- Interpret a partition configuration of "Nodes=ALL" in slurm.conf as
including all nodes defined in the cluster.
-- Added new configuration parameters PowerParameters and PowerPlugin.
-- Added power management plugin infrastructure.
-- If a job has already exceeded one of its QOS/accounting limits, do not
   return an error when the user modifies job settings unrelated to the QOS.
-- When caching user ids of AllowGroups use both getgrnam_r() and getgrent_r(),
   then remove any duplicate entries.
-- Remove rpm dependency between slurm-pam and slurm-devel.
-- Remove support for the XCPU (cluster management) package.
-- Add Slurmdb::jobs_get() interface to perl api.
-- Performance improvement when sending data from srun to stepds when
processing fencing.
-- Add the feature to specify arbitrary field separator when running
sacct -p or sacct -P. The command line option is --separator.
-- Introduce a slurm.conf parameter to use Proportional Set Size (PSS) instead
   of RSS to determine the memory footprint of a job.
   Add a slurm.conf option not to kill jobs that are over the memory limit.
-- Add job submission command options: --sicp (available for inter-cluster
dependencies) and --power (specify power management options) to salloc,
sbatch, and srun commands.
-- Add DebugFlags option of SICP (inter-cluster option logging).
-- In order to support inter-cluster job dependencies, the MaxJobID
configuration parameter default value has been reduced from 4,294,901,760
   to 2,147,418,112 and its maximum value is now 2,147,463,647.
ANY JOBS WITH A JOB ID ABOVE 2,147,463,647 WILL BE PURGED WHEN SLURM IS
UPGRADED FROM AN OLDER VERSION!
-- Add QOS name to the output of a partition in squeue/scontrol/sview/smap.
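   On the PSS entry above: Proportional Set Size charges each shared page
   divided by the number of processes mapping it, whereas RSS charges shared
   pages in full to every process, so PSS gives a fairer per-job footprint
   when processes share large memory regions. A minimal Linux-only sketch
   that sums the Pss: fields of /proc/self/smaps (not Slurm's implementation):

    /* pss_sum.c - sum the Pss: fields of /proc/self/smaps (Linux only).
     * Illustrates what a PSS-based footprint measures; not Slurm's code. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/self/smaps", "r");
        char line[256];
        unsigned long long kb, total_kb = 0;

        if (!fp) {
            perror("/proc/self/smaps");
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            /* Lines look like "Pss:                 123 kB" */
            if (strncmp(line, "Pss:", 4) == 0 &&
                sscanf(line + 4, "%llu", &kb) == 1)
                total_kb += kb;
        }
        fclose(fp);
        printf("PSS: %llu kB\n", total_kb);
        return 0;
    }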
* Changes in Slurm 15.08.0pre1
==============================
-- Add sbcast support for file transfer to resources allocated to a job step
rather than a job allocation.
-- Change structures with association in them to assoc to save space.
-- Add support for job dependencies joined with the OR operator (e.g.
   "--depend=afterok:123?afternotok:124").
-- Add "--bb" (burst buffer specification) option to salloc, sbatch, and srun.
-- Added configuration parameters BurstBufferParameters and BurstBufferType.
-- Added burst_buffer plugin infrastructure (needs many more functions).
-- Make the fanout logic abandon the tree when it comes across a node that is
   down, to avoid the worst-case scenario where the entire branch is down and
   each node has to be tried serially.
-- Add better error reporting of invalid partitions at submission time.
-- Move will-run test for multiple clusters from the sbatch code into the API
so that it can be used with DRMAA.
-- If a non-exclusive allocation requests --hint=nomultithread on a
CR_CORE/SOCKET system lay out tasks correctly.
-- Avoid including unused CPUs in a job's allocation when cores or sockets are
allocated.
-- Added new job state of STOPPED indicating processes have been stopped with a
   SIGSTOP (using scancel or sview) while the job retains its allocated CPUs.
   Job state returns to RUNNING when SIGCONT is sent (also using scancel or
   sview).
-- Added EioTimeout parameter to slurm.conf. It is the number of seconds srun
waits for slurmstepd to close the TCP/IP connection used to relay data
between the user application and srun when the user application terminates.
-- Remove slurmctld/dynalloc plugin as the work was never completed, so it is
not worth the effort of continued support at this time.
-- Remove DynAllocPort configuration parameter.
-- Add advance reservation flag of "replace" that causes allocated resources
   to be replaced with idle resources. This maintains a pool of available
   resources of a constant size (to the extent possible).
-- Added SchedulerParameters option of "bf_busy_nodes". When selecting
resources for pending jobs to reserve for future execution (i.e. the job
can not be started immediately), then preferentially select nodes that are
in use. This will tend to leave currently idle resources available for
backfilling longer running jobs, but may result in allocations having less
than optimal network topology. This option is currently only supported by
the select/cons_res plugin.
-- Permit "SuspendTime=NONE" as slurm.conf value rather than only a numeric
value to match "scontrol show config" output.
-- Add the 'scontrol show cache' command which displays the associations
in slurmctld.
-- Test more frequently for node boot completion before starting a job.
Provides better responsiveness.
-- Permit PreemptType=qos and PreemptMode=suspend,gang to be used together.
A high-priority QOS job will now oversubscribe resources and gang schedule,
but only if there are insufficient resources for the job to be started
   without preemption. NOTE: With PreemptType=qos, the partition's
   Shared=FORCE:# configuration option will permit one more job per resource
   to be run than specified, but only if started by preemption.
-- Remove the CR_ALLOCATE_FULL_SOCKET configuration option. It is now the
default.
-- Fix a race condition in PMI2 when fencing counters can be out of sync.
-- Increase the MAX_PACK_MEM_LEN define to avoid PMI2 failure when fencing
   with a large number of ranks.
-- Add QOS option to a partition. This will allow a partition to have
   all the limits a QOS has. If a limit is set in both QOSes, the partition
   QOS will override the job's QOS unless the job's QOS has the
   OverPartQOS flag set.
-- The task_dist_states variable has been split into "flags" and "base"
components. Added SLURM_DIST_PACK_NODES and SLURM_DIST_NO_PACK_NODES values
   to give user greater control over task distribution. The srun --dist option
   has been modified to accept a "Pack" and "NoPack" option. These options can
be used to override the CR_PACK_NODE configuration option.
* Changes in Slurm 14.11.12
===========================
-- Correct dependency formatting to print array task ids if set.
-- Fix for configuration of "AuthType=munge" and "AuthInfo=socket=..." with
alternate munge socket path.
-- BGQ - Remove redeclaration of job_read_lock.
-- BGQ - Tighter locks around structures when nodes/cables change state.
-- Fix job array formatting to allow [0-100:2] to be displayed for arrays with
   step functions rather than [0,2,4,6,8,...].
-- Associations - prevent hash table corruption if uid initially unset for
a user, which can cause slurmctld to crash if that user is deleted.
-- Add cast to memory limit calculation to prevent integer overflow for
very large memory values.
-- Fix test cases to have proper int return signature.
* Changes in Slurm 14.11.11
===========================
-- Fix systemd's slurmd service from killing slurmstepds on shutdown.
-- Fix the qstat wrapper when user is removed from the system but still
has running jobs.
-- Log the request to terminate a job at info level if DebugFlags includes
the Steps keyword.
-- Fix potential memory corruption in _slurm_rpc_epilog_complete as well as
_slurm_rpc_complete_job_allocation.
-- Fix incorrectly sized buffer used by jobid2str which will cause buffer
overflow in slurmctld. (Bug 2295.)
* Changes in Slurm 14.11.10
===========================
-- Fix truncation of job reason in squeue.
-- If a node is in DOWN or DRAIN state, leave it unavailable for allocation
when powered down.
-- Update the slurm.conf man page to better document the nohold_on_prolog_fail
   variable.
-- Don't truncate task ID information in "squeue --array/-r" or "sview".
-- Fix a bug which caused scontrol to core dump when releasing or
holding a job by name.
-- Fix unit conversion bug in slurmd which caused wrong memory calculation
for cgroups.
-- Fix issue with GRES in steps so that if multiple exclusive steps use up all
   of the GRES, the requesting step is held until the GRES is available instead
   of reporting that the configuration isn't available.
-- Fix slurmdbd backup to use DbdAddr when contacting the primary.
-- Fix to handle arrays with respect to the number of jobs submitted.
   Previously only 1 job was accounted for (against MaxSubmitJob) when an
   array was submitted.
-- Correct counting for job array limits; a job count limit underflow was
   possible when cancelling the master job record.
-- For pending jobs have sacct print 0 for nnodes instead of the bogus 2.
-- Fix for tracking node state when jobs that have been allocated exclusive
access to nodes (i.e. entire nodes) and later relinquish some nodes. Nodes
would previously appear partly allocated and prevent use by other jobs.
-- Fix updating job in db after extending job's timelimit past partition's
timelimit.
-- Fix srun -I<timeout> from flooding the controller with step create requests.
-- Requeue/hold batch job launch request if job already running (possible if
node went to DOWN state, but jobs remained active).
-- If a job's CPUs/task ratio is increased due to configured MaxMemPerCPU,
   then increase its allocated CPU count in order to enforce CPU limits.
-- Don't mark powered down node as not responding. This could be triggered by
race condition of the node suspend and ping logic.
-- Don't requeue RPC going out from slurmctld to DOWN nodes (can generate
repeating communication errors).
-- Propagate sbatch "--dist=plane=#" option to srun.
-- Fix sacct to not return all jobs if the -j option is given with a trailing
','.
-- Permit job_submit plugin to set a job's priority.
-- Fix issue with sacct printing 0_0 for arrays that had finished in the
   database but whose start record hadn't made it yet.
-- Fix sacct -j, (nothing but a comma) to not return all jobs.
-- Prevent slurmstepd from core dumping if /proc/<pid>/stat has
unexpected format.
* Changes in Slurm 14.11.9
==========================
-- Correct "sdiag" backfill cycle time calculation if it yields locks. A
microsecond value was being treated as a second value resulting in an
overflow in the calcuation.
-- Fix segfault when updating timelimit on a job array task.
-- Fix to job array update logic that can result in a task ID of 4294967294.
-- Fix of job array update, previous logic could fail to update some tasks
of a job array for some fields.
-- CRAY - Fix seg fault if a blade is replaced and slurmctld is restarted.
-- Fix plane distribution to allocate in blocks rather than cyclically.
-- squeue - Remove newline from job array ID value printed.
-- squeue - Enable filtering for job state SPECIAL_EXIT.
-- Prevent job array task ID being inappropriately set to NO_VAL.
-- MYSQL - Make it so you don't have to restart the slurmctld
to gain the correct limit when a parent account is root and you
remove a subaccount's limit which exists on the parent account.
-- MYSQL - Close chance of setting the wrong limit on an association
when removing a limit from an association on multiple clusters
at the same time.
-- MYSQL - Fix minor memory leak when modifying an association but no
change was made.
-- srun command line of either --mem or --mem-per-cpu will override both the
SLURM_MEM_PER_CPU and SLURM_MEM_PER_NODE environment variables.
-- Prevent slurmctld abort on update of advanced reservation that contains no
nodes.
-- ALPS - Revert commit 2c95e2d22 which also removes commit 2e2de6a4 allowing
cray with the SubAllocate option to work as it did with 2.5.
-- Properly parse CPU frequency data on POWER systems.
-- Correct the sacct man page describing the -i option.
-- Capture salloc/srun information in sdiag statistics.
-- Fix bug in node selection with topology optimization.
-- Don't set distribution when srun requests 0 memory.
-- Read in correct number of nodes from SLURM_HOSTFILE when specifying nodes
and --distribution=arbitrary.
-- Fix segfault in Bluegene setups where RebootQOSList is defined in
bluegene.conf and accounting is not setup.
-- MYSQL - Update mod_time when updating a start job record or adding one.
-- MYSQL - Fix issue where, if an association id ever changes while at least a
   portion of a job array is still pending after its initial start in the
   database, another row could be created for the remaining array instead of
   using the already existing row.
-- Fix scheduling anomaly with job arrays submitted to multiple partitions,
jobs could be started out of priority order.
-- If a host has suspended jobs do not reboot it. Reboot only hosts
   with no jobs in any state.
-- ALPS - Fix issue when using --exclusive flag on srun to do the correct
thing (-F exclusive) instead of -F share.
-- Fix a bug in the controller which displayed jobs in CF state as RUNNING.
-- Preserve advanced _core_ reservation when nodes added/removed/resized on
slurmctld restart. Rebuild core_bitmap as needed.
-- Fix for non-standard Munge port location for srun/pmi use.
-- Fix gang scheduling/preemption issue that could cancel job at startup.
-- Fix a bug in squeue which prevented "squeue -tPD" from printing array jobs.
-- Sort job arrays in job queue according to array_task_id when priorities are
equal.
-- Fix segfault in sreport when there was no response from the dbd.
-- ALPS - Fix compile to not link against -ljob and -lexpat with every lib
or binary.
-- Fix testing for CR_Memory when CR_Memory and CR_ONE_TASK_PER_CORE are used
with select/linear.
-- MySQL - Fix minor memory leak if a connection ever goes away whilst using it.
-- ALPS - Make it so srun --hint=nomultithread works correctly.
-- Prevent job array task ID from being reported as NO_VAL if last task in the
array gets requeued.
-- Fix some potential deadlock issues when state files don't exist in the
association manager.
-- Correct RebootProgram logic when executed outside of a maintenance
reservation.
-- Requeue job if possible when slurmstepd aborts.
* Changes in Slurm 14.11.8
==========================
-- Eliminate need for user to set user_id on job_update calls.
-- Correct list of unavailable nodes reported in a job's "reason" field when
that job can not start.
-- Map job --mem-per-cpu=0 to --mem=0.
-- Fix squeue -o %m and %d unit conversion to Megabytes.
-- Fix issue with incorrect time calculation in the priority plugin when
   a job runs past its time limit.
-- Prevent users from setting job's partition to an invalid partition.
-- Fix sreport core dump when requesting
'job SizesByAccount grouping=individual'.
-- select/linear: Correct count of CPUs allocated to job on system with
hyperthreads.
-- Fix race condition where last array task might not get updated in the db.
-- CRAY - Remove libpmi from rpm install
-- Fix squeue -o %X output to correctly handle NO_VAL and suffix.
-- When deleting a job from the system set the job_id to 0 to avoid memory
   corruption if a thread uses the pointer, basing validity off the id.
-- Fix issue where sbatch would set ntasks-per-node to 0 making any srun
afterward cause a divide by zero error.
-- switch/cray: Refine logic to set PMI_CRAY_NO_SMP_ENV environment variable.
-- When sacctmgr loads archives with version less than 14.11 set the array
task id to NO_VAL, so sacct can display the job ids correctly.
-- When using the memory cgroup, if a task uses more memory than requested the
   failures are logged by cgroup into the memory.failcnt count file and
   slurmstepd notifies the user about it.
-- Fix scheduling inconsistency with GRES bound to specific CPUs.
-- If user belongs to a group which has split entries in /etc/group
search for its username in all groups.
-- Do not consider nodes explicitly powered up as DOWN with reason of "Node
   unexpectedly rebooted".
-- Use correct slurmd spooldir when creating cpu-frequency locks.
-- Note that TICKET_BASED fairshare will be deprecated in the future. Consider
using the FAIR_TREE algorithm instead.
-- Set job's reason to BadConstraints when the job can't run on any node.
-- Prevent abort on update of reservation with no nodes (licenses only).
-- Prevent slurmctld from dumping core if job_resrcs is missing in the
job data structure.
-- Fix squeue to print array task ids according to man page when
SLURM_BITSTR_LEN is defined in the environment.
-- In squeue, sort jobs based on array job ID if available.
-- Fix the calculation of job energy by not including the NO_VAL values.
-- Advanced reservation fixes: enable update of bluegene reservation, avoid
abort on multi-core reservations.
-- Set the totalview_stepid to the value of the job step instead of NO_VAL.
-- Fix slurmdbd core dump if the daemon does not have connection with
the database.
-- Display error message when attempting to modify priority of a held job.
-- Backfill scheduler: The configured backfill_interval value (default 30
   seconds) is now interpreted as a maximum run time for the backfill
   scheduler. Once reached, the scheduler will build a new job queue and
   start over, even if not all jobs have been tested.
-- Backfill scheduler now considers OverTimeLimit and KillWait configuration
parameters to estimate when running jobs will exit.
-- Correct task layout with CR_Pack_Node option and more than 1 CPU per task.
-- Fix the scontrol man page describing the release argument.
-- When job QOS is modified, do so before attempting to change partition in
order to validate the partition's Allow/DenyQOS parameter.
* Changes in Slurm 14.11.7
==========================
-- Initialize some variables used with the srun --no-alloc option that may
cause random failures.
-- Add SchedulerParameters option of sched_min_interval that controls the
minimum time interval between any job scheduling action. The default value
is zero (disabled).
-- Change default SchedulerParameters=max_sched_time from 4 seconds to 2.
-- Refactor scancel so that all pending jobs are cancelled before starting
cancellation of running jobs. Otherwise they happen in parallel and the
pending jobs can be scheduled on resources as the running jobs are being
cancelled.
-- ALPS - Add new cray.conf variable NoAPIDSignalOnKill. When set to yes this
   will make it so the slurmctld will not signal the apids in a batch job.
   Instead it relies on the RPC coming from the slurmctld to kill the job to
   end things correctly.
-- ALPS - Have the slurmstepd running a batch job wait for an ALPS release
before ending the job.
-- Initialize variables in consumable resource plugin to prevent core dump.
-- Fix scancel bug which could return an error on attempt to signal a job step.
-- In slurmctld communication agent, make the thread timeout be the configured
value of MessageTimeout rather than 30 seconds.
-- sshare: the -U/--Users (users only) flag was used uninitialized.
-- Cray systems, add "plugstack.conf.template" sample SPANK configuration file.
-- BLUEGENE - Set DB2NOEXITLIST when starting the slurmctld daemon to avoid
random crashing in db2 when the slurmctld is exiting.
-- Make full node reservations correctly display the core count instead of the
   CPU count.
-- Preserve original errno on execve() failure in task plugin.
-- Add SLURM_JOB_NAME env variable to an salloc's environment.
-- Overwrite SLURM_JOB_NAME in an srun when it gets an allocation.
-- Make sure each job has a wckey if that is something that is tracked.
-- Make sure old step data is cleared when job is requeued.
-- Load libtinfo as needed when building ncurses tools.
-- Fix small memory leak in backup controller.
-- Fix segfault when backup controller takes control for second time.
-- Cray - Fix backup controller running native Slurm.
-- Provide prototypes for init_setproctitle()/fini_setproctitle on NetBSD.
-- Add configuration test to find out the full path to su command.
-- preempt/job_prio plugin: Fix for possible infinite loop when identifying
preemptable jobs.
-- preempt/job_prio plugin: Implement the concept of Warm-up Time here. Use
the QoS GraceTime as the amount of time to wait before preempting.
Basically, skip preemption if your time is not up.
-- Make srun wait KillWait time when a task is cancelled.
-- switch/cray: Revert logic added to 14.11.6 that set "PMI_CRAY_NO_SMP_ENV=1"
if CR_PACK_NODES is configured.
* Changes in Slurm 14.11.6
==========================
-- If SchedulerParameters value of bf_min_age_reserve is configured, then
a newly submitted job can start immediately even if there is a higher
priority non-runnable job which has been waiting for less time than
bf_min_age_reserve.
-- qsub wrapper modified to export "all" with -V option
-- RequeueExit and RequeueExitHold configuration parameters modified to accept
numeric ranges. For example "RequeueExit=1,2,3,4" and "RequeueExit=1-4" are
equivalent.
-- Correct the job array specification parser to accept brackets in job array
expression (e.g. "123_[4,7-9]").
-- Fix for misleading job submit failure errors sent to users. Previous error
could indicate why specific nodes could not be used (e.g. too small memory)
when other nodes could be used, but were not for another reason.
-- Fix squeue --array to display correctly the array elements when the
% separator is specified at the array submission time.
-- Fix priority not being calculated correctly due to memory issues.
-- Fix a transient pending reason 'JobId=job_id has invalid QOS'.
-- A non-administrator change to job priority will not be persistent except
   for holding the job. Users wanting to change a job priority on a persistent
   basis should reset its "nice" value.
-- Print buffer sizes as unsigned values when failed to pack messages.
-- Fix race condition where sprio would print factors without weights applied.
-- Document the sacct option JobIDRaw which for arrays prints the jobid instead
of the arrayTaskId.
-- Allow users to modify MinCPUsNode, MinMemoryNode and MinTmpDiskNode of
their own jobs.
-- Increase the jobid print field in SQUEUE_FORMAT in
opt_modulefiles_slurm.in.
-- Enable compiling without optimizations and with debugging symbols by
default. Disable this by configuring with --disable-debug.
-- job_submit/lua plugin: Add mail_type and mail_user fields.
-- Use standard statvfs(2) syscall if available, in preference to
non-standard statfs.
-- Add a new option -U/--Users to sshare to display only users
information, parent and ancestors are not printed.
-- Purge 50000 records at a time so that locks can be released periodically.
-- Fix potentially uninitialized variables.
-- ALPS - Fix issue where a frontend node could become unresponsive and never
   be added back into the system.
-- Gate epilog complete messages as done with other messages.
-- If we have more than a certain number of agents (50), wait longer when
   gating RPCs.
-- FrontEnd - ping non-responding or down nodes.
-- switch/cray: If CR_PACK_NODES is configured, then set the environment
variable "PMI_CRAY_NO_SMP_ENV=1"
-- Fix invalid memory reference in SlurmDBD when putting a node up.
-- Allow opening of plugstack.conf even when a symlink.
-- Fix scontrol reboot so that rebooted nodes will not be set down with reason
'Node xyz unexpectedly rebooted' but will be correctly put back to service.
-- CRAY - Throttle the post NHC operations as to not hog the job write lock
if many steps/jobs finish at once.
-- Disable changes to GRES count while jobs are running on the node.
-- CRAY - Fix issue with scontrol reconfig.
-- slurmd: Remove wrong reporting of "Error reading step ... memory limit".
The logic was treating success as an error.
-- Eliminate "Node ping apparently hung" error messages.
-- Fix average CPU frequency calculation.
-- When allocating resources with resolution of sockets, charge the job for all
CPUs on allocated sockets rather than just the CPUs on used cores.
-- Prevent slurmdbd error if cluster added or removed while rollup in progress.
Removing a cluster can cause slurmdbd to abort. Adding a cluster can cause
the slurmdbd rollup to hang.
-- sview - When right clicking on a tab make sure we don't display the page
list, but only the column list.
-- FRONTEND - If doing a clean start make sure the nodes are brought up in the
database.
-- MySQL - Fix issue when using TrackSlurmctldDown and nodes are down at
   the same time; don't double bill the down time.
-- MySQL - Various memory leak fixes.
-- Fix node manager logic to keep unexpectedly rebooted node in state
NODE_STATE_DOWN even if already down when rebooted.
-- Fix for array jobs submitted to multiple partitions not starting.
-- CRAY - Enable ALPS mpp compatibility code in sbatch for native Slurm.
-- ALPS - Move basil_inventory to less confusing function.
-- Add SchedulerParameters option of "sched_max_job_start=" to limit the
number of jobs that can be started in any single execution of the main
scheduling logic.
-- Fixed compiler warnings generated by gcc version >= 4.6.
-- sbatch to stop parsing script for "#SBATCH" directives after first command,
which matches the documentation.
-- Overwrite the SLURM_JOB_NAME in sbatch if it already exists in the
   environment and use the one specified on the command line with --job-name.
-- Remove xmalloc_nz from unpack functions. If the unpack ever failed the
free afterwards would not have zeroed out memory on the variables that
didn't get unpacked.
-- Improve database interaction from controller.
-- Fix for data shift when loading job archives.
-- ALPS - Added new SchedulerParameters=inventory_interval to specify how
often an inventory request is handled.
-- ALPS - Don't run a release on a reservation on the slurmctld for a batch
job. This is already handled on the stepd when the script finishes.
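   On the statvfs(2) entry above: statvfs() is the POSIX interface while
   statfs() is non-standard, hence the preference when both are available. A
   minimal sketch reporting available space with statvfs(); the default path
   /tmp is just an example.

    /* fs_free.c - report free space for a path using POSIX statvfs(2). */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/tmp";
        struct statvfs vfs;

        if (statvfs(path, &vfs) != 0) {
            perror(path);
            return 1;
        }
        /* f_bavail is in units of f_frsize (fragment size) */
        unsigned long long free_bytes =
            (unsigned long long)vfs.f_bavail * vfs.f_frsize;
        printf("%s: %llu bytes available to unprivileged users\n",
               path, free_bytes);
        return 0;
    }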
* Changes in Slurm 14.11.5
==========================
-- Correct the squeue command taking into account that a node can
have NULL name if it is not in DNS but still in slurm.conf.
-- Fix slurmdbd regression which would cause a segfault when a node is set
down with no reason.
-- BGQ - Fix issue with job arrays not being handled correctly
in the runjob_mux plugin.
-- Print FAIR_TREE, if configured, in "scontrol show config" output for
PriorityFlags.
-- Add SLURM_JOB_GPUS environment variable to those available in the Prolog.
-- Load lua-5.2 library if using lua5.2 for lua job submit plugin.
-- GRES logic: Prevent bad node_offset due to not preserving no_consume flag.
-- Fix wrong variables used in the wrapper functions needed for systems that
   don't support strong_alias.
-- Fix code for Apple computers where SOL_TCP is not defined.
-- Cray/BASIL - Check for mysql credentials in /root/.my.cnf.
-- Fix sprio showing wrong priority for job arrays until priority is
recalculated.
-- Account to batch step all CPUs that are allocated to a job not
just one since the batch step has access to all CPUs like other steps.
-- Fix job getting EligibleTime set before meeting dependency requirements.
-- Correct the initialization of QOS MinCPUs per job limit.
-- Set the debug level of information messages in cgroup plugin to debug2.
-- For job running under a debugger, if the exec of the task fails, then
cancel its I/O and abort immediately rather than waiting 60 seconds for
I/O timeout.
-- Fix associations not getting default qos set until after a restart.
-- Set the value of total_cpus not to be zero before invoking
acct_policy_job_runnable_post_select.
-- MySQL - When requesting cluster resources, only return resources for the
cluster(s) requested.
-- Add TaskPluginParam=autobind=threads option to set a default binding in the
case that "auto binding" doesn't find a match.
-- Introduce a new SchedulerParameters variable nohold_on_prolog_fail.
   If configured, don't requeue jobs on hold if a Prolog fails.
-- Make it so sched_params isn't read over and over when an epilog complete
message comes in
-- Fix squeue -L <licenses> not filtering out jobs with licenses.
-- Changed the implementation of xcpuinfo_abs_to_mac() to be identical to
   _abs_to_mac() to fix CPU allocation using the cpuset cgroup.
-- Improve the explanation of the unbuffered feature in the
srun man page.
-- Make taskplugin=cgroup work for core specialization; previously task/cgroup
   was needed before.
-- Fix reports not using the month usage table.
-- BGQ - Sanity check given for translating small blocks into slurm bg_records.
-- Fix bug preventing the requeue/hold or requeue/special_exit of job from the
completing state.
-- Cray - Fix for launching batch step within an existing job allocation.
-- Cray - Add ALPS_APP_ID_ENV environment variable.
-- Increase maximum MaxArraySize configuration parameter value from 1,000,001
to 4,000,001.
-- Added new SchedulerParameters value of bf_min_age_reserve. The backfill
scheduler will not reserve resources for pending jobs until they have
been pending for at least the specified number of seconds. This can be
valuable if jobs lack time limits or all time limits have the same value.
-- Fix support for --mem=0 (all memory of a node) with select/cons_res plugin.
-- Fix bug that can permit someone to kill job array belonging to another user.
-- Don't set the default partition on a license only reservation.
-- Show a NodeCnt=0, instead of NO_VAL, in "scontrol show res" for a license
only reservation.
-- BGQ - When using static small blocks make sure when clearing the job the
   block is set up to its original state.
-- Start job allocation using lowest numbered sockets for block task
distribution for consistency with cyclic distribution.
* Changes in Slurm 14.11.4
==========================
-- Make sure assoc_mgr locks are initialized correctly.
-- Correct check of enforcement when filling in an association.
-- Make sacctmgr print out classification correctly for clusters.
-- Add array_task_str to the perlapi job info.
-- Fix for slurmctld abort with GRES types configured and no CPU binding.
-- Fix for GRES scheduling where count > 1 per topology type (or GRES types).
-- Make CR_ONE_TASK_PER_CORE work correctly with task/affinity.
-- job_submit/pbs - Fix possible deadlock.
-- job_submit/lua - Add "alloc_node" to job information available.
-- Fix memory leak in mysql accounting when usage rollup happens.
-- If users specify ALL together with other variables using the
--export sbatch/srun command line option, propagate the users'
environ to the execution side.
-- Fix job array scheduling anomaly that can stop scheduling of valid tasks.
-- Fix perl api tests for libslurmdb to work correctly.
-- Remove some misleading logs related to non-consumable GRES.
-- Allow --ignore-pbs to take effect when read as an #SBATCH argument.
-- Fix Slurmdb::clusters_get() in perl api from not returning information.
-- Fix TaskPluginParam=Cpusets from logging error message about not being able
to remove cpuset dir which was already removed by the release_agent.
-- Fix the file name substitution for job stderr when %A, %a, %j and %u
   are specified.
-- Remove minor warning when compiling slurmstepd.
-- Fix database resources so new clusters can be added to them after they have
   initially been added.
-- Use the slurm_getpwuid_r wrapper of getpwuid_r to handle possible
interrupts.
-- Correct the scontrol man page and command listing which node states can
be set by the command.
-- Stop sacct from printing non-existent stat information for
Front End systems.
-- Correct srun and acct_gather.conf man pages, mention Filesystem instead
of Lustre.
-- When a job using multiple partitions starts, send to slurmdbd only
   the partition in which the job runs.
-- ALPS - Fix depth for MemoryAllocation in BASIL with CLE 5.2.3.
-- Fix assoc_mgr hash to deal with users that don't have a uid yet when making
reservations.
-- When a job uses multiple partitions set the environment variable
   SLURM_JOB_PARTITION to be the one in which the job started.
-- Print spurious message about the absence of cgroup.conf at log level debug2
instead of info.
-- Enable CUDA v7.0+ use with a Slurm configuration of TaskPlugin=task/cgroup
ConstrainDevices=yes (in cgroup.conf). With that configuration
CUDA_VISIBLE_DEVICES will start at 0 rather than the device number.
-- Fix job array logic that can cause slurmctld to abort.
-- Report job "shared" field properly in scontrol, squeue, and sview.
-- If a job is requeued because of RequeueExit or RequeueExitHold send event
   REQUEUED to slurmdbd.
-- Fix build if hwloc is in non-standard location.
-- Fix slurmctld job recovery logic which could cause the last task in a job
array to be lost.
-- Fix slurmctld initialization problem which could cause requeue of the last
task in a job array to fail if executed prior to the slurmctld loading
the maximum size of a job array into a variable in the job_mgr.c module.
-- Fix fatal in controller when deleting a user association of a user which
had been previously removed from the system.
-- MySQL - If a node state and reason are the same on a node state change
don't insert a new row in the event table.
-- Fix issue with "sreport cluster AccountUtilizationByUser" when using
PrivateData=users.
-- Fix perlapi tests for libslurm perl module.
-- MySQL - Fix potential issue when PrivateData=Usage and a normal user
runs certain sreport reports.
* Changes in Slurm 14.11.3
==========================
-- Prevent vestigial job record when canceling a pending job array record.
-- Fix job array hash table bug, could result in slurmctld infinite loop or
invalid memory reference.
-- In srun honor ntasks_per_node before looking at cpu count when the user
doesn't request a number of tasks.
-- Fix ghost job when submitting job after all jobids are exhausted.
-- MySQL - Enhanced coordinator security checks.
-- Fix for task/affinity if an admin configures a node for having threads
but then sets CPUs to only represent the number of cores on the node.
-- Make it so previous versions of salloc/srun work with newer versions
of Slurm daemons.
-- Avoid delay on commit for PMI rank 0 to improve performance with some
MPI implementations.
-- auth/munge - Correct logic to read old format AccountingStoragePass.
-- Reset node "RESERVED" state as appropriate when deleting a maintenance
reservation.
-- Prevent a job manually suspended from being resumed by gang scheduler once
free resources are available.
-- Prevent invalid job array task ID value if a task is started using gang
scheduling.
-- Fix documentation bugs in slurm.conf.5. DenyAccount should be DenyAccounts.
-- For backward compatibility with older versions of OMPI not compiled
with --with-pmi restore the SLURM_STEP_RESV_PORTS in the job environment.
-- Update the html documentation describing the integration with openmpi.
-- Fix sacct when searching by nodelist.
-- Fix cosmetic info statements when dealing with a job array task instead of
a normal job.
-- Correct the sbatch pbs parser to process -j.
-- BGQ - Put print statement under a DebugFlag. This was just an oversight.
-- BLUEGENE - Remove check that would erroneously remove the CONFIGURING
flag from a job while the job is waiting for a block to boot.
-- Fix segfault in slurmstepd when job exceeded memory limit.
-- Fix race condition that could start a job that is dependent upon a job array
before all tasks of that job array complete.
* Changes in Slurm 14.11.2
==========================
-- Fix issue with association hash not getting the correct index which
could result in seg fault.
-- Avoid huge malloc if GRES configured with "Type" and huge "Count".
-- Prevent jobs from starting in overlapping reservations that won't finish
   before a "maint" reservation begins.
-- When node gets drained while in state mixed display its status as draining
in sinfo output.
-- Allow priority/multifactor to work with sched/wiki(2) if all priorities
have no weight. This allows for association and QOS decay limits to work.
-- Fix "squeue --start" to override SQUEUE_FORMAT env variable.
-- Fix scancel to be able to cancel multiple jobs that are space delimited.
-- Log Cray MPI job calling exit() without mpi_fini(), but do not treat it as
a fatal error. This partially reverts logic added in version 14.03.9.
-- sview - Fix displaying of suspended steps elapsed times.
-- Increase number of messages that get cached before throwing them away
when the DBD is down.
-- Restore GRES functionality with select/linear plugin. It was broken in
version 14.03.10.
-- Fix bug with GRES having multiple types that can cause slurmctld abort.
-- Fix squeue issue with not recognizing "localhost" in --nodelist option.
-- Make sure the bitstrings for a partition's Allow/DenyQOS are up to date
   when running from cache.
-- Add smap support for job arrays and larger job ID values.
-- Fix possible race condition when attempting to use QOS on a system running
accounting_storage/filetxt.
-- Fix issue with accounting_storage/filetxt and job arrays not being printed
correctly.
-- In proctrack/linuxproc and proctrack/pgid, check the result of strtol()
for error condition rather than errno, which might have a vestigial error
code.
-- Improve information recording for jobs deferred due to advanced
reservation.
-- Export eio_new_initial_obj to the plugins and initialize kvs_seq on
   mpi/pmi2 setup to support launching.
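   On the strtol() entry above: errno is only meaningful after strtol() if it
   was cleared beforehand and the return value suggests overflow, so relying
   on a possibly vestigial errno can misreport errors. A minimal sketch of
   the usual end-pointer-plus-errno pattern, illustrating the intent of the
   fix rather than the plugins' exact code:

    /* parse_long.c - the usual safe strtol() pattern referred to above. */
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Returns 0 on success and stores the value in *out, -1 on parse error. */
    static int parse_long(const char *str, long *out)
    {
        char *end = NULL;

        errno = 0;                      /* clear any vestigial error code */
        long val = strtol(str, &end, 10);

        if (end == str || *end != '\0') /* no digits, or trailing junk */
            return -1;
        if (errno == ERANGE && (val == LONG_MAX || val == LONG_MIN))
            return -1;                  /* overflow/underflow */
        *out = val;
        return 0;
    }

    int main(int argc, char **argv)
    {
        long value;
        if (argc < 2 || parse_long(argv[1], &value) != 0) {
            fprintf(stderr, "not a valid number\n");
            return 1;
        }
        printf("parsed %ld\n", value);
        return 0;
    }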
* Changes in Slurm 14.11.1
==========================
-- Get libs correct when doing the xtree/xhash make check.
-- Update xhash/tree make check to work correctly with current code.
-- Remove the reference 'experimental' for the jobacct_gather/cgroup
plugin.
-- Add QOS manipulation examples to the qos.html documentation page.
-- If 'squeue -w node_name' specifies an unknown host name print
an error message and return 1.
-- Fix race condition in job_submit plugin logic that could cause slurmctld to
deadlock.
-- Job wait reason of "ReqNodeNotAvail" expanded to identify unavailable nodes
(e.g. "ReqNodeNotAvail(Unavailable:tux[3-6])").
* Changes in Slurm 14.11.0
==========================
-- ALPS - Fix issue with core_spec warning.
-- Allow multiple partitions to be specified in sinfo -p.
-- Install the service files in /usr/lib/systemd/system.
-- MYSQL - Add id_array_job and id_resv keys to $CLUSTER_job_table. THIS
COULD TAKE A WHILE TO CREATE THE KEYS SO BE PATIENT.
-- CRAY - Resize bitmaps on a restart and find we have more blades
than before.
-- Add new eio API function for removing unused connections.
-- ALPS - Fix issue where batch allocations weren't correctly confirmed or
released.
-- Define DEFAULT_MAX_TASKS_PER_NODE based on MAX_TASKS_PER_NODE from
slurm.h as per documentation.
-- Update the FAQ about relocating slurmctld.
-- In the memory cgroup enable memory.use_hierarchy in the cgroup root.
-- Add SLURM_CLUSTER_NAME to job environment.
* Changes in Slurm 14.11.0rc3
=============================
-- Allow envs to override autotools binaries in autogen.sh
-- Added system services files.
-- If a job pends with DependencyNeverSatisfied, keep it pending even after
   the job which it was depending upon is cleaned.
-- Let operators (in addition to user root and SlurmUser) see job script for
other user's jobs.
-- Perl API modified to return node state of MIXED rather than ALLOCATED if
only some CPUs allocated.
-- Double Munge connect retry timeout from 1 to 2 seconds.
-- sview - Remove unneeded code that was resolved globally in commit
98e24b0dedc.
-- Collect and report the accounting of the batch step and its children.
-- Add configure checks for faccessat and eaccess, and make use of one of
them if available.
-- Make configure --enable-developer also set --enable-debug
-- Introduce a SchedulerParameters variable kill_invalid_depend, if set
then jobs pending with invalid dependency are going to be terminated.
-- Move spank_user_task() call in slurmstepd after the task_g_pre_launch()
so that the task affinity information is available to spank.
-- Make /etc/init.d/slurm script return value 3 when the daemon is
not running. This is required by Linux Standard Base Core
Specification 3.1
* Changes in Slurm 14.11.0rc2
=============================
-- Logs for jobs which are explicitly requeued will say so rather than saying
that a node in their allocation failed.
-- Updated the documentation about the remote licenses served by
the Slurm database.
-- Ensure that slurm_spank_exit() is only called once from srun.
-- Change the signature of net_set_low_water() to use 4 bytes instead of 8.
-- Export working_cluster_rec in libslurmdb.so as well as move some function
definitions needed for drmaa.
-- If using cons_res or serial cause a fatal in the plugin instead of causing
the SelectTypeParameters to magically set to CR_CPU.
-- Enhance task/affinity auto binding to consider tasks * cpus-per-task.
-- Fix regression in priority/multifactor which would cause memory corruption.
   Issue is only in rc1.
-- Add PrivateData value of "cloud". If set, powered down nodes in the cloud
will be visible.
-- Sched/backfill - Eliminate clearing start_time of running jobs.
-- Fix various backwards compatibility issues.
-- If failed to launch a batch job, requeue it in hold.
* Changes in Slurm 14.11.0rc1
=============================
-- When using cgroup, name the batch step step_batch instead of
   batch_4294967294.
-- Changed LEVEL_BASED priority to be "Fair_Tree"
-- BGQ - Add cnode based reservations.
-- Alongside totalview_jobid implement totalview_stepid available
to sattach.
-- Add ability to include other files in slurm.conf based upon the ClusterName.
-- Add reservation information in the sacct and sreport output.
-- Add job priority calculation check for overflow and fix memory leak.
-- Add SchedulerParameters option of pack_serial_at_end to put serial jobs at
the end of the available nodes rather than using a best fit algorithm.
-- Allow regular users to view default sinfo output when
privatedata=reservations is set.
-- PrivateData=reservation modified to permit users to view the reservations
   which they have access to (rather than preventing them from seeing ANY
   reservation).
-- job_submit/lua: Fix job_desc set field logic