or srun's --task-prolog option.
-- Improve reliability of batch job requeue logic in the event that the slurmd
daemon is temporarily non-responsive (for longer than the configured
MessageTimeout value but less than the SlurmdTimeout value).
-- In sched/wiki2 (Moab) report a job's MAXNODES (maximum number of permitted
nodes).
-- Fixed SLURM_TASKS_PER_NODE to better live up to its name on an allocation.
   It will now contain the number of tasks per node instead of the number of
   CPUs per node. This applies only to a resource allocation; job steps
   already have the environment variable set correctly.
-- Configuration parameter PropagateResourceLimits has new option of "NONE".
-- User's --propagate options take precedence over the PropagateResourceLimits
   configuration parameter in both the srun and sbatch commands.
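   Illustrative example (the limit name is a placeholder): with
      PropagateResourceLimits=NONE
   set in slurm.conf, a user request of
      srun --propagate=NOFILE ...
   would still have the NOFILE limit propagated.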
-- When Moab is in use (salloc or sbatch is executed with the --get-user-env
option to be more specific), load the user's default resource limits rather
than propagating the Moab daemon's limits.
-- Fix bug in slurmctld restart logic for recovery of batch jobs that are
initiated as a job step rather than an independent job (used for LSF).
-- Fix bug that can cause slurmctld restart to fail, bug introduced in SLURM
version 1.3.9. From Eygene Ryabinkin, Kurchatov Institute, Russia.
-- Permit slurmd configuration parameters to be set to new values from
previously unset values.
* Changes in SLURM 1.3.9
========================
-- Fix jobs being cancelled by ctrl-C to have correct cancelled state in
accounting.
-- Slurmdbd will only cache user data, making for faster start up
-- Improved support for job steps in FRONT_END systems
-- Added support to dump and load association information in the controller
on start up if slurmdbd is unresponsive
-- BLUEGENE - Added support for sched/backfill plugin
-- sched/backfill modified to initiate multiple jobs per cycle.
-- Increase buffer size in srun to hold task list expressions. Critical
for jobs with 16k tasks or more.
-- Added support for eligible jobs and downed nodes to be sent to accounting
from the controller the first time accounting is turned on.
-- Correct srun logic to support --tasks-per-node option without task count.
-- Logic in place to handle multiple versions of RPCs within the slurmdbd.
THE SLURMDBD MUST BE UPGRADED TO THIS VERSION BEFORE UPGRADING THE
SLURMCTLD OR THEY WILL NOT TALK.
Older versions of the slurmctld will continue to talk to the new slurmdbd.
-- Add support for new job dependency type: singleton. Only one job from a
given user with a given name will execute with this dependency type.
From Matthieu Hautreux, CEA.
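   Illustrative usage (job name and script name are placeholders):
      sbatch --job-name=nightly --dependency=singleton nightly.sh
   Only one "nightly" job from that user will run at a time.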
-- Updated contribs/python/hostlist to version 1.3: See "CHANGES" file in
that directory for details. From Kent Engström, NSC.
-- Add SLURM_JOB_NAME environment variable for jobs submitted using sbatch.
In order to prevent the job steps from all having the same name as the
batch job that spawned them, the SLURM_JOB_NAME environment variable is
ignored when setting the name of a job step from within an existing
resource allocation.
-- For use with sched/wiki2 (Moab only), set salloc's default shell based
upon the user who the job runs as rather than the user submitting the job
(user root).
-- Fix to sched/backfill when job specifies no time limit and the partition
time limit is INFINITE.
-- Validate a job's constraints (node features) at job submit or modification
time. Major re-write of resource allocation logic to support more complex
job feature requests.
-- For sched/backfill, correct logic to support job constraint specification
(e.g. node features).
-- Correct power save logic to avoid trying to wake DOWN node. From Matthieu
Hautreux, CEA.
-- Cancel a job step when one of its nodes goes DOWN based upon the job
   step's --no-kill option; by default the step is killed (previously the
   job step remained running even without the --no-kill option).
-- Fix bug in logic to remove whitespace from plugstack.conf.
-- Add new configuration parameter SallocDefaultCommand to control what
shell that salloc launches by default.
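   A possible slurm.conf setting (the shell path is illustrative):
      SallocDefaultCommand="/bin/bash"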
-- When enforcing PrivateData configuration parameter, failures return
"Access/permission denied" rather than "Invalid user id".
-- From sbatch and srun, if the --dependency option is specified then set
the environment variable SLURM_JOB_DEPENDENCY to the same value.
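   For example (dependency type, job id and script name are illustrative),
      sbatch --dependency=afterok:1234 my.sh
   results in SLURM_JOB_DEPENDENCY=afterok:1234 in the job's environment.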
-- In plugin jobcomp/filetxt, use ISO8601 formats for time by default (e.g.
YYYY-MM-DDTHH:MM:SS rather than MM/DD-HH:MM:SS). This restores the default
behavior from Slurm version 1.2. Change the value of USE_ISO8601 in
   src/plugins/jobcomp/filetxt/jobcomp_filetxt.c to revert the behavior.
-- Add support for configuration option of ReturnToService=2, which will
   return a DOWN node to service if the node was previously set DOWN for any
   reason.
-- Removed Gold accounting plugin. This plugin was to be used for accounting
   but has not been maintained and is no longer needed. If you are using it,
   please contact slurm-dev@llnl.gov.
-- When not enforcing associations but running accounting, if a user submits
   a job to an account that does not have an association on the cluster, the
   account will be changed to the user's default account to help avoid trash
   in the accounting system. If the user's default account does not have an
   association on the cluster, the requested account will be used.
-- Add configuration parameter "--have-front-end" to define HAVE_FRONT_END
in config.h and run slurmd only on a front end (suitable only for SLURM
development and testing).
* Changes in SLURM 1.3.8
========================
-- Added PrivateData flags for Users, Usage, and Accounts to Accounting.
If using slurmdbd, set in the slurmdbd.conf file. Otherwise set in the
slurm.conf file. See "man slurm.conf" or "man slurmdbd.conf" for details.
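   A possible slurmdbd.conf setting (the flag list is illustrative):
      PrivateData=accounts,users,usage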
-- Reduce frequency of resending job kill RPCs. Helpful in the event of
network problems or down nodes.
-- Fix memory leak caused under heavy load when running with select/cons_res
plus sched/backfill.
-- For salloc, if no local command is specified, execute the user's default
shell.
-- BLUEGENE - patch to make sure that, when starting a job, blocks required
   to be freed are checked to make sure no job is running on them. If one is
   found, the new job will be requeued. No job will be lost.
-- BLUEGENE - Set MPI environment variables from salloc.
-- BLUEGENE - Fix threading issue for overlap mode
-- Reject batch scripts containing DOS linebreaks.
-- BLUEGENE - Added wait for block boot to salloc
* Changes in SLURM 1.3.7
========================
-- Add jobid/stepid to MESSAGE_TASK_EXIT to address race condition when
a job step is cancelled, another is started immediately (before the
first one completely terminates) and ports are reused.
NOTE: This change requires that SLURM be updated on all nodes of the
cluster at the same time. There will be no impact upon currently running
jobs (they will ignore the jobid/stepid at the end of the message).
-- Added Python module to process hostlists as used by SLURM. See
contribs/python/hostlist. Supplied by Kent Engstrom, National
Supercomputer Centre, Sweden.
-- Report task termination due to signal (restored functionality present
in slurm v1.2).
-- Remove sbatch test for script size being no larger than 64k bytes.
The current limit is 4GB.
-- Disable FastSchedule=0 use with SchedulerType=sched/gang. Node
configuration must be specified in slurm.conf for gang scheduling now.
-- For sched/wiki and sched/wiki2 (Maui or Moab scheduler) disable the ability
of a non-root user to change a job's comment field (used by Maui/Moab for
storing scheduler state information).
-- For sched/wiki (Maui) add pending job's future start time to the state
info reported to Maui.
-- Improve reliability of job requeue logic on node failure.
-- Add logic to ping non-responsive nodes even if SlurmdTimeout=0. This permits
the node to be returned to use when it starts responding rather than
remaining in a non-usable state.
-- Honor HealthCheckInterval values that are smaller than SlurmdTimeout.
-- For non-responding nodes, log them all on a single line with a hostlist
expression rather than one line per node. Frequency of log messages is
dependent upon SlurmctldDebug value from 300 seconds at SlurmctldDebug<=3
to 1 second at SlurmctldDebug>=5.
-- If a DOWN node is resumed, set its state to IDLE & NOT_RESPONDING and
ping the node immediately to clear the NOT_RESPONDING flag.
-- Log that a job's time limit is reached, but don't send SIGXCPU.
-- Fixed gid to be set in slurmstepd when run by root
-- Changed getpwent to getpwent_r in the slurmctld and slurmd
-- Increase timeout on most slurmdbd communications to 60 secs (time for
substantial database updates).
-- Treat srun's --begin= option with a value of "now" plus a time unit but no
   numeric component (e.g. "--begin=now+hours") as a failure.
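   For example (the time increment is illustrative):
      srun --begin=now+hours ...      (rejected: no numeric component)
      srun --begin=now+2hours ...     (accepted)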
-- Eliminate a memory leak associated with notifying srun of allocated
nodes having failed.
-- Add scontrol shutdown option of "slurmctld" to just shutdown the
slurmctld daemon and leave the slurmd daemons running.
-- Do not require JobCredentialPrivateKey or JobCredentialPublicCertificate
in slurm.conf if using CryptoType=crypto/munge.
-- Remove SPANK support from sbatch.
* Changes in SLURM 1.3.6
========================
-- Add new function to get information for a single job rather than always
getting information for all jobs. Improved performance of some commands.
NOTE: This new RPC means that the slurmctld daemons should be updated
before or at the same time as the compute nodes in order to process it.
-- In salloc, sbatch, and srun replace --task-mem options with --mem-per-cpu
(--task-mem will continue to be accepted for now, but is not documented).
Replace DefMemPerTask and MaxMemPerTask with DefMemPerCPU, DefMemPerNode,
MaxMemPerCPU and MaxMemPerNode in slurm.conf (old options still accepted
for now, but mapped to "PerCPU" parameters and not documented). Allocate
   a job's memory at the same time that processors are allocated based
upon the --mem or --mem-per-cpu option rather than when job steps are
initiated.
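   Illustrative usage (values shown are placeholders):
      sbatch --mem-per-cpu=1024 my.sh
   together with slurm.conf settings such as:
      DefMemPerCPU=1024
      MaxMemPerCPU=2048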
-- Altered QOS in accounting to be a list of admin-defined states; an account
   or user can now have multiple QOS's. They need to be defined using
   'sacctmgr add qos'. They are no longer an enum. If none are defined,
   Normal will be the QOS for everything. Right now this is only for use
   with MOAB and does nothing outside of that.
-- Added spank_get_item support for field S_STEP_CPUS_PER_TASK.
-- Make corrections in spank_get_item for field S_JOB_NCPUS, previously
reported task count rather than CPU count.
-- Convert configuration parameter PrivateData from on/off flag to have
separate flags for job, partition, and node data. See "man slurm.conf"
for details.
-- Fix bug, failed to load DisableRootJobs configuration parameter.
-- Altered sacctmgr to always return a non-zero exit code on error and send
error messages to stderr.
-- Fix processing of auth/munge authentication key for messages originating
in slurmdbd and sent to slurmctld.
-- If srun is allocating resources (not within sbatch or salloc) and MaxWait
is configured to a non-zero value then wait indefinitely for the resource
allocation rather than aborting the request after MaxWait time.
-- For Moab only: add logic to reap defunct "su" processes that are spawned by
slurmd to load user's environment variables.
-- Added more support for "dumping" account information to a flat file and
   reading it in again to protect data in case something bad happens to the
   database.
-- Sacct will now report account names for job steps.
-- For AIX: Remove MP_POERESTART_ENV environment variable, disabling
poerestart command. User must explicitly set MP_POERESTART_ENV before
executing poerestart.
-- Put back notification that a job has been allocated resources when it was
pending.
-- Some updates to man page formatting from Gennaro Oliva, ICAR.
-- Smarter loading of plugins (doesn't stat every file in the plugin dir)
-- In sched/backfill avoid trying to schedule jobs on DOWN or DRAINED nodes.
-- Forward exit_code from step completion to slurmdbd.
-- Add retry logic to socket connect() call from client which can fail
when the slurmctld is under heavy load.
-- Fixed bug so that associations are added correctly.
-- Added support for associations for user root.
-- For Moab, sbatch --get-user-env option processed by slurmd daemon
rather than the sbatch command itself to permit faster response
for Moab.
-- IMPORTANT FIX: This only affects use of select/cons_res when allocating
resources by core or socket, not by CPU (default for SelectTypeParameter).
We are not saving a pending job's task distribution, so after restarting
slurmctld, select/cons_res was over-allocating resources based upon an
invalid task distribution value. Since we can't save the value without
changing the state save file format, we'll just set it to the default
value for now and save it in Slurm v1.4. This may result in a slight
variation on how sockets and cores are allocated to jobs, but at least
resources will not be over-allocated.
-- Correct logic in accumulating resources by node weight when more than
one job can run per node (select/cons_res or partition shared=yes|force).
-- slurm.spec file updated to avoid creating empty RPMs. RPM now *must* be
built with correct specification of which packages to build or not build.
See the top of the slurm.spec file for information about how to control
package building specification.
-- Set SLURM_JOB_CPUS_PER_NODE for jobs allocated using the srun command.
It was already set for salloc and sbatch commands.
-- Fix to handle suspended jobs that were cancelled in accounting
-- BLUEGENE - fix to only include bps given in a name from the bluegene.conf
file.
-- For select/cons_res: Fix record-keeping for core allocations when more
than one partition uses a node or there is more than one socket per node.
-- In output for "scontrol show job" change "StartTime" header to "EligibleTime"
for pending jobs to accurately describe what is reported.
-- Add more slurmdbd.conf parameters: ArchiveScript, ArchiveAge, JobPurge, and
StepPurge (not fully implemented yet).
-- Add slurm.conf parameter EnforcePartLimits to reject jobs which exceed a
partition's size and/or time limits rather than leaving them queued for a
later change in the partition's limits. NOTE: Not reported by
"scontrol show config" to avoid changing RPCs. It will be reported in
SLURM version 1.4.
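   A possible slurm.conf setting (assuming a YES/NO value):
      EnforcePartLimits=YES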
-- Added the idea of a coordinator to accounting. A coordinator can add
   associations between existing users and the account, or any sub-account,
   they coordinate. They can also add/remove other coordinators on those
   accounts.
-- Add support for Hostname and NodeHostname in slurm.conf being fully
qualified domain names (by Vijay Ramasubramanian, University of Maryland).
For more information see "man slurm.conf".
* Changes in SLURM 1.3.3
========================
-- Add mpi_openmpi plugin to the main SLURM RPM.
-- Prevent invalid memory reference when using srun's --cpu_bind=cores option
(slurm-1.3.2-1.cea1.patch from Matthieu Hautreux, CEA).
-- Task affinity plugin modified to support a particular cpu bind type: cores,
sockets, threads, or none. Accomplished by setting an environment variable
SLURM_ENFORCE_CPU_TYPE (slurm-1.3.2-1.cea2.patch from Matthieu Hautreux,
CEA).
-- For BlueGene only, log "Prolog failure" once per job not once per node.
-- Reopen slurmctld log file after reconfigure or SIGHUP is received.
-- In TaskPlugin=task/affinity, fix possible infinite loop for slurmd.
-- Accounting rollup works for mysql plugin. Automatic rollup when using
slurmdbd.
-- Copied job stat logic out of sacct into sstat; in the future sacct -stat
   will be deprecated.
-- Correct sbatch processing of --nice option with negative values.
-- Add squeue formatted print option %Q to print a job's integer priority.
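   Illustrative usage (the format string is a placeholder):
      squeue -o "%.8i %.9P %.8j %.10Q"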
-- In sched/backfill, fix bug that was changing a pending job's shared value
to zero (possibly changing a pending job's resource requirements from a
processor on some node to the full node).
-- Get --ntasks-per-node option working for sbatch command.
-- BLUEGENE: Added logic to give back a best block on overlapped mode
in test_only mode
-- BLUEGENE: Updated debug info and man pages for better help with the
numpsets option and to fail correctly with bad image request for building
blocks.
-- In sched/wiki and sched/wiki2 properly support Slurm license consumption
(job state reported as "Hold" when required licenses are not available).
-- In sched/wiki2 JobWillRun command, don't return an error code if the job(s)
can not be started at that time. Just return an error message (from
Doug Wightman, CRI).
-- Fix bug if sched/wiki or sched/wiki2 are configured and no job comment is
set.
-- scontrol modified to report a partition's "DisableRootJobs" value.
-- Fix bug in setting host address for PMI communications (mpich2 only).
-- Fix for memory size accounting on some architectures.
-- In sbatch and salloc, change --dependency's one letter option from "-d"
to "-P" (continue to accept "-d", but change the documentation).
-- Only check that task_epilog and task_prolog are runable by the job's
user, not as root.
-- In sbatch, if specifying an alternate directory (--workdir/-D), then
input, output and error files are in that directory rather than the
   directory from which the command is executed.
-- NOTE: Fully operational with Moab version 5.2.3+. Change SUBMITCMD in
moab.cfg to be the location of sbatch rather than srun. Also set
HostFormat=2 in SLURM's wiki.conf for improved performance.
-- NOTE: We needed to change an RPC from version 1.3.1. You must upgrade
all nodes in a cluster from v1.3.1 to v1.3.2 at the same time.
-- Postgres plugin will work for job accounting, not for association
   management yet.
-- For srun/sbatch --get-user-env option (Moab use only) look for "env"
command in both /bin and /usr/sbin (for Suse Linux).
-- Fix bug in processing job feature requests with node counts (could fail
   to schedule the job if some nodes have no associated features).
-- Added nodecnt and gid to jobcomp/script
-- Ensure that nodes selected in "srun --will-run" command or the equivalent in
sched/wiki2 are in the job's partition.
-- BLUEGENE - changed partition Min|MaxNodes to represent c-node counts
instead of base partitions
-- In sched/gang only, prevent possible invalid memory reference when
slurmctld is reconfigured, e.g. "scontrol reconfig".
-- In select/linear only, prevent invalid memory reference in log message when
nodes are added to slurm.conf and then "scontrol reconfig" is executed.
* Changes in SLURM 1.3.1
========================
-- Correct logic for processing batch job's memory limit enforcement.
-- Fix bug that was setting a job's requeue value on any update of the
job using the "scontrol update" command. The invalid value of an
   updated job prevents its recovery when slurmctld restarts.
-- Add support for cluster-wide consumable resources. See "Licenses"
parameter in slurm.conf man page and "--licenses" option in salloc,
sbatch and srun man pages.
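   Illustrative example (license names, counts and syntax are placeholders):
      Licenses=foo*8,bar            (in slurm.conf)
      sbatch --licenses=foo*2 my.sh (per-job request)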
-- Major changes in select/cons_res to support FastSchedule=2 with more
resources configured than actually exist (useful for testing purposes).
-- Modify srun --test-only response to include expected initiation time
for a job as well as the nodes to be allocated and processor count
(for use by Moab).
-- Correct sched/backfill to properly honor job dependencies.
-- Correct select/cons_res logic to allocate CPUs properly if there is
more than one thread per core (previously failed to allocate all cores).
-- Correct select/linear logic in shared job count (was off by 1).
-- Add support for job preemption based upon partition priority (in sched/gang,
preempt.patch from Chris Holmes, HP).
-- Added much better logic for mysql accounting.
-- Finished all basic functionality for sacctmgr.
-- Added load file logic to sacctmgr for setting up a cluster in one step.
-- NOTE: We needed to change an RPC from version 1.3.0. You must upgrade
all nodes in a cluster from v1.3.0 to v1.3.1 at the same time.
-- NOTE: Work is currently underway to improve placement of jobs for gang
scheduling and preemption.
-- NOTE: Work is underway to provide additional tools for reporting
accounting information.
* Changes in SLURM 1.3.0
========================
-- In sched/wiki2, add processor count to JOBWILLRUN response.
-- Add event trigger for node entering DRAINED state.
-- Build properly without OpenSSL installed (OpenSSL is recommended, but not
required).
-- Added slurmdbd, and modified accounting_storage plugin to talk to it.
   Allowing multiple slurm systems to securely store and gather information
not only about jobs, but the system also. See accounting web page for more
information.
* Changes in SLURM 1.3.0-pre11
==============================
-- Restructure the sbcast RPC to take advantage of larger buffers available
in Slurm v1.3 RPCs.
-- In scontrol, show job's Requeue value, permit change of Requeue and Comment
values.
-- In slurmctld job record, add QOS (quality of service) value for accounting
purposes with Maui and Moab.
-- Log to a job's stderr when it is being cancelled explicitly or upon reaching
   its time limit.
-- Only permit a job's account to be changed while that job is PENDING.
-- Fix race condition in job suspend/resume (slurmd.sus_res.patch from HP).
* Changes in SLURM 1.3.0-pre10
==============================
-- Add support for node-specific "arch" (architecture) and "os" (operating
system) fields. These fields are set based upon values reported by the
slurmd daemon on each compute node using SLURM_ARCH and SLURM_OS environment
   variables if set (otherwise the uname function) and are intended to support
   real-time changes in the operating system. These values are reported
by "scontrol show node" plus the sched/wiki and sched/wiki2 plugins for Maui
and Moab respectively.
-- In sched/wiki and sched/wiki2: add HostFormat and HidePartitionJobs to
"scontrol show config" SCHEDULER_CONF output.
-- In sched/wiki2: accept hostname expression as input for GETNODES command.
-- Add JobRequeue configuration parameter and --requeue option to the sbatch
command.
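   Illustrative example (assuming 0 disables and 1 enables the default):
   setting "JobRequeue=0" in slurm.conf disables requeue by default, while a
   job may still request it with
      sbatch --requeue my.sh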
-- Add HealthCheckInterval and HealthCheckProgram configuration parameters.
-- Add SlurmDbdAddr, SlurmDbdAuthInfo and SlurmDbdPort configuration parameters.
-- Modify select/linear to achieve better load leveling with gang scheduler.
-- Develop the sched/gang plugin to support select/linear and
select/cons_res. If sched/gang is enabled and Shared=FORCE is configured
for a partition, this plugin will gang-schedule or "timeslice" jobs that
share common resources within the partition. Note that resources that are
shared across partitions are not gang-scheduled.
-- Add EpilogMsgTime configuration parameter. See "man slurm.conf" for details.
-- Increase default MaxJobCount configuration parameter from 2000 to 5000.
-- Move all database common files from src/common to new lib in src/database.
-- Moved sacct to src/accounting and added sacctmgr for scontrol-like
   operations on the accounting infrastructure.
-- Basic functions of sacctmgr are in place for administration of
   accounting.
-- Moved clusteracct_storage plugin to accounting_storage plugin,
   jobacct_storage is still its own plugin for now.
-- Added template for slurm php extension.
-- Add infrastructure to support allocation of cluster-wide licenses to jobs.
Full support will be added some time after version 1.3.0 is released.
-- In sched/wiki2 with select/bluegene, add support for WILLRUN command
to accept multiple jobs with start time specifications.
* Changes in SLURM 1.3.0-pre9
=============================
-- Add spank support to sbatch. Note that spank_local_user() will be called
with step_layout=NULL and gid=SLURM_BATCH_SCRIPT and spank_fini() will
be called immediately afterwards.
-- Made configure use mysql_config to find the location of the mysql database
   install. Removed bluegene-specific information from the general database
   tables.
-- Re-write sched/backfill to utilize new will-run logic in the select
plugins. It now supports select/cons_res and all job options (required
nodes, excluded nodes, contiguous, etc.).
-- Modify scheduling logic to better support overlapping partitions.
-- Add --task-mem option and remove --job-mem option from srun, salloc, and
sbatch commands. Enforce step memory limit, if specified and there is
no job memory limit specified (--mem). Also see DefMemPerTask and
MaxMemPerTask in "man slurm.conf". Enforcement is dependent upon job
   accounting being enabled with a non-zero value for JobAcctGatherFrequency.
-- Change default node tmp_disk size to zero (for diskless nodes).
* Changes in SLURM 1.3.0-pre8
=============================
-- Modify how strings are packed in the RPCs. Maximum string size is
   increased from 64KB (16-bit size field) to 4GB (32-bit size field).
-- Fix bug that prevented time value of "INFINITE" from being processed.
-- Added new srun/sbatch option "--open-mode" to control how output/error
files are opened ("t" for truncate, "a" for append).
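   Illustrative usage (the file name is a placeholder):
      sbatch --open-mode=append --output=job.out my.sh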
-- Added checkpoint/xlch plugin for use with XLCH (Hongjia Cao, NUDT).
-- Added srun option --checkpoint-path for use with XLCH (Hongjia Cao, NUDT).
-- Added new srun/salloc/sbatch option "--acctg-freq" for user control over
accounting data collection polling interval.
-- In sched/wiki2 add support for hostlist expression use in GETNODES command
with HostFormat=2 in the wiki.conf file.
-- Added new scontrol option "setdebug" that can change the slurmctld daemons
debug level at any time (Hongjia Cao, NUDT).
-- Track total suspend time for jobs and steps for accounting purposes.
-- Add version information to partition state file.
-- Added 'will-run' functionality to all of the select plugins (bluegene,
linear, and cons_res) to return node list and time job can start based
on other jobs running.
-- Major restructuring of node selection logic. select/linear now supports
partition max_share parameter and tries to match like size jobs on the
same nodes to improve gang scheduling performance. Also supports treating
memory as consumable resource for job preemption and gang scheduling if
SelectTypeParameter=CR_Memory in slurm.conf.
-- BLUEGENE: Reorganized bluegene plugin for maintainability's sake.
-- Major restructuring of data structures in select/cons_res.
-- Support job, node and partition names of arbitrary size.
-- Fix bug that caused slurmd to hang when using select/linear with
task/affinity.
* Changes in SLURM 1.3.0-pre7
=============================
-- Fix a bug in the processing of srun's --exclusive option for a job step.
* Changes in SLURM 1.3.0-pre6
=============================
-- Add support for configurable number of jobs to share resources using the
partition Shared parameter in slurm.conf (e.g. "Shared=FORCE:3" for two
jobs to share the resources). From Chris Holmes, HP.
-- Made salloc use api instead of local code for message handling.
* Changes in SLURM 1.3.0-pre5
=============================
-- Add select_g_reconfigure() function to note changes in slurmctld
   configuration that can impact node scheduling.
-- scontrol to set/get partition's MaxTime and job's Timelimit in minutes plus
new formats: min:sec, hr:min:sec, days-hr:min:sec, days-hr, etc.
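   Illustrative usage (partition name and limit are placeholders):
      scontrol update PartitionName=debug MaxTime=1-12:30:00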
-- scontrol "notify" command added to send message to stdout of srun for
specified job id.
-- For BlueGene, make alpha part of node location specification be case insensitive.
-- Report scheduler-plugin specific configuration information with the
   "scontrol show configuration" command on the SCHEDULER_CONF line. This
   information is not found in the "slurm.conf" file, but in a scheduler
   plugin specific configuration file (e.g. "wiki.conf").
-- sview partition information reported now includes partition priority.
-- Expand job dependency specification to support concurrent execution,
testing of job exit status and multiple job IDs.
* Changes in SLURM 1.3.0-pre4
=============================
-- Job step launch in srun is now done from the SLURM API; all further
   modifications to job launch should be done there.
-- Add new partition configuration parameter Priority. Add job count to
Shared parameter.
-- Add new configuration parameters DefMemPerTask, MaxMemPerTask, and
SchedulerTimeSlice.
-- In sched/wiki2, return REJMESSAGE with details on why a job was
requeued (e.g. what node failed).
* Changes in SLURM 1.3.0-pre3
=============================
-- Added srun option "--checkpoint=time" for job step to automatically be
   checkpointed on a periodic basis.
-- Change behavior of "scancel -s KILL <jobid>" to send SIGKILL to all job
steps rather than cancelling the job. This now matches the behavior of
all other signals. "scancel <jobid>" still cancels the job and all steps.
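   Illustrative usage (the job id is a placeholder):
      scancel -s KILL 1234   (send SIGKILL to all steps of job 1234)
      scancel 1234           (cancel the job and all of its steps)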
-- Add support for new job step options --exclusive and --immediate. Permit
job steps to be queued when resources are not available within an existing
job allocation to dedicate the resources to the job step. Useful for
executing simultaneous job steps. Provides resource management both at
the level of jobs and job steps.
srun --nodes=16 --constraint=graphics*4 ...
Based upon work by Kumar Krishna (HP, India).
-- Add multi-core options to salloc and sbatch commands (sbatch.patch and
cleanup.patch from Chris Holmes, HP).
-- In select/cons_res properly release resources allocated to job being
suspended (rmbreak.patch, from Chris Holmes, HP).
-- Removed the database and jobacct plugins, replacing them with
   jobacct_storage and jobacct_gather for easier hooks for further expansion
   of job accounting.
* Changes in SLURM 1.3.0-pre2
=============================
-- Added new srun option --pty to start job with pseudo terminal attached
   to task 0 (all other tasks have I/O discarded).
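   Illustrative usage (the shell is a placeholder):
      srun --nodes=1 --pty bash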
-- Disable user specifying jobid when sched/wiki2 configured (needed for
Moab releases until early 2007).
-- Report command, args and working directory for batch jobs with
"scontrol show job".
* Changes in SLURM 1.3.0-pre1
=============================
-- !!! SRUN CHANGES !!!
The srun options -A/--allocate, -b/--batch, and -a/--attach have been
removed! That functionality is now available in the separate commands
salloc, sbatch, and sattach, respectively.
-- Add new node state FAILING plus trigger for when node enters that state.
-- Add new configuration parameter "PrivateData". This can be used to
prevent a user from seeing jobs or job steps belonging to other users.
-- Added configuration parameters for node power save mode: ResumeProgram,
   ResumeRate, SuspendExcNodes, SuspendExcParts, SuspendProgram and
SuspendRate.
-- Slurmctld maintains the IP address (rather than hostname) for srun
communications. This fixes some possible network routing issues.
-- Added global database plugin. Job accounting and Job completion are the
first to use it. Follow documentation to add more to the plugin.
-- Removed no-longer-needed jobacct/common/common_slurmctld.c since that is
replaced by the database plugin.
-- Added new configuration parameter: CryptoType.
Moved existing digital signature logic into new plugin: crypto/openssl.
Added new support for crypto/munge (available with GPL license).
* Changes in SLURM 1.2.36
=========================
-- For spank_get_item(S_JOB_ARGV) for batch job with script input via STDIN,
set argc value to 1 (rather than 2, argv[0] still set to path of generated
script).
-- sacct will now more properly display allocations made with salloc with
   only one step.
* Changes in SLURM 1.2.35
=========================
-- Permit SPANK plugins to dynamically register options at runtime based upon
configuration or other runtime checks.
-- Add "include" keyword to SPANK plugstack.conf file to optionally include
other configuration files or directories of configuration files.
-- Srun now waits indefinitely for a resource allocation to be made. It used
   to abort after two minutes.
* Changes in SLURM 1.2.34
=========================
-- Permit the cancellation of a job that is in the process of being
requeued.
-- Ignore the show_flag when getting job, step, node or partition information
for user root.
-- Convert some functions to thread-safe versions: getpwnam, getpwuid,
getgrnam, and getgrgid to similar functions with "_r" suffix. While no
failures have been observed, a race condition would in the worst case
permit a user access to a partition not normally allowed due to the
AllowGroup specification or the wrong user identified in an accounting
record. The job would NOT be run as the wrong user.
-- For PMI only (MPICH2/MVAPICH2), base the address to send messages to (the
   srun) upon the address from which slurmd gets the task launch request
   rather than the "hostname" where srun executes.
-- Make test for StateSaveLocation directory more comprehensive.
-- For jobcomp/script plugin, PROCS environment variable is now the actual
count of allocated processors rather than the count of processes to
be started.
* Changes in SLURM 1.2.33
=========================
-- Cancelled or Failed jobs will now report their job and step id on exit
-- Add SPANK items available to get: SLURM_VERSION, SLURM_VERSION_MAJOR,
   SLURM_VERSION_MINOR and SLURM_VERSION_MICRO.
-- Fixed handling of SIGPIPE in srun. Abort job.
-- Fix bug introduced to MVAPICH plugin preventing use of TotalView debugger.
-- Modify slurmctld to get srun/salloc network address based upon the incoming
message rather than hostname set by the user command (backport of logic in
SLURM v1.3).
* Changes in SLURM 1.2.32
=========================
-- LSF only: Enable scancel of job in RootOnly partition by the job's owner.
-- Add support for sbatch --distribution and --network options.
-- Correct pending job's wait reason to "Priority" rather than "Resources" if
required resources are being held in reserve for a higher priority job.
-- In sched/wiki2 (Moab) report a node's state as "Drained" rather than
"Draining" if it has no allocated work (An undocumented Moab wiki option,
see CRI ticket #2394).
-- Log to job's output when it is cancelled or reaches its time limit (ported
from existing code in slurm v1.3).
-- Add support in salloc and sbatch commands for --network option.
-- Add support for user environment variables that include '\n' (e.g.
bash functions).
-- Partial rewrite of mpi/mvapich plugin for improved scalability.
* Changes in SLURM 1.2.31
=========================
-- For Moab only: If GetEnvTimeout=0 in slurm.conf then do not run "su" to get
the user's environment, only use the cache file.
-- For sched/wiki2 (Moab), treat the lack of a wiki.conf file or the lack
of a configured AuthKey as a fatal error (lacks effective security).
-- For sched/wiki and sched/wiki2 (Maui or Moab) report a node's state as
Busy rather than Running when allocated if SelectType=select/linear. Moab
   was trying to schedule jobs on nodes that were already allocated to jobs
that were hidden from it via the HidePartitionJobs in Slurm's wiki.conf.
-- In select/cons_res improve the resource selection when a job has specified
a processor count along with a maximum node count.
-- For an srun command with --ntasks-per-node option and *no* --ntasks count,
spawn a task count equal to the number of nodes selected multiplied by the
--ntasks-per-node value.
-- In jobcomp/script: Set TZ if set in slurmctld's environment.
-- In srun with --verbose option properly format CPU allocation information
logged for clusters with 1000+ nodes and 10+ CPUs per node.
-- Process a job's --mail_type=end option on any type of job termination, not
just normal completion (e.g. all failure modes too).
* Changes in SLURM 1.2.30
=========================
-- Fix for gold not to print out 720 error messages since they are
   potentially harmful.
-- In sched/wiki2 (Moab), permit changes to a pending job's required features:
CMD=CHANGEJOB ARG=<jobid> RFEATURES=<features>
-- Fix to not abort when node selection plugin fails to load; issue a fatal
   error instead.
-- In sched/wiki and sched/wiki2 DO NOT report a job's state as "Hold" if its
   dependencies have not been satisfied. This reverses a change made in SLURM
version 1.2.29 (which was requested by Cluster Resources, but places jobs
in a HELD state indefinitely).
* Changes in SLURM 1.2.29
=========================
-- Modified global configuration option "DisableRootJobs" from number (0 or 1)
to boolean (YES or NO) to match partition parameter.
-- Set "DisableRootJobs" for a partition to match the global parameter's value
   for newly created partitions.
-- In sched/wiki and sched/wiki2 report a node's updated features if changed
after startup using "scontrol update ..." command.
-- In sched/wiki and sched/wiki2 report a job's state as "Hold" if its
dependencies have not been satisfied.
-- In sched/wiki and sched/wiki2 do not process incoming requests until
slurm configuration is completely loaded.
-- In sched/wiki and sched/wiki2 do not report a job's node count after it
has completed (slurm decrements the allocated node count when the nodes
transition from completing to idle state).
-- If job prolog or epilog fail, log the program's exit code.
-- In jobacct/gold map job names containing any non-alphanumeric characters
to '_' to avoid MySQL parsing problems.
-- In jobacct/linux correct parsing if command name contains spaces.
-- In sched/wiki and sched/wiki2, make the reported job TASK count reflect
   the actual task allocation (not requested tasks) even after the job
   terminates. Useful for accounting purposes only.
* Changes in SLURM 1.2.28
=========================
-- Added configuration option "DisableRootJobs" for parameter
"PartitionName". See "man slurm.conf" for details.
-- Fix for faking a large system to correctly handle node_id in the task
   affinity plugin for ia64 systems.
* Changes in SLURM 1.2.27
=========================
-- Record job eligible time in accounting database (for jobacct/gold only).
-- Prevent user root from executing a job step within a job allocation
belonging to another user.
-- Fixed limiting issue for strings larger than 4096 in xstrfmtcat
-- Fix bug in how Slurm reports job state to Maui/Moab when a job is requeued
due to a node failure, but we can't terminate the job's spawned processes.
Job was being reported as PENDING when it was really still COMPLETING.
-- Added patch from Jerry Smith for qstat -a output
-- Fixed torque wrappers to look at the correct perl path for Slurm.pm.
-- Enhance job requeue on node failure to be more robust.
-- Added configuration parameter "DisableRootJobs". See "man slurm.conf"
for details.
-- Fixed issue with account = NULL in Gold job accounting plugin
* Changes in SLURM 1.2.26
=========================
-- Correct number of sockets/cores/threads reported by slurmd (from
Par Andersson, National Supercomputer Centre, Sweden).
-- Update libpmi linking so that libslurm is not required for PMI use
(from Steven McDougal, SiCortex).
-- In srun and sbatch, do not check the PATH env var if an absolute pathname
of the program is specified (previously reported an error if no PATH).
-- Correct output of "sinfo -o %C" (CPU counts by node state).
* Changes in SLURM 1.2.25
=========================
-- Bug fix for setting exit code in accounting for batch script.
-- Add salloc option, --no-shell (for LSF).
-- Added new options for sacct output
-- mvapich: Ensure MPIRUN_ID is unique for all job steps within a job.
(Fixes crashes when running multiple job steps within a job on one node)
-- Prevent "scontrol show job" from failing with buffer overflow when a job
has a very long Comment field.
-- Make certain that a job step is purged when a job has been completed.
Previous versions could have the job step persist if an allocated node
went DOWN and the slurmctld restarted.
-- Fix bug in sbcast that can cause communication problems for large files.
-- Add sbcast option -t/--timeout and SBCAST_TIMEOUT environment variable
to control message timeout.
-- Add threaded agent to manage a queue of Gold update requests for
performance reasons.
-- Add salloc options --chdir and --get-user-env (for Moab).
-- Modify scontrol update to support job comment changes.
-- Do not clear a DRAINED node's reason field when slurmctld restarts.
-- Do not cancel a pending job if Moab or Maui try to start it on unusable nodes.
Leave the job queued.
-- Add --requeue option to srun and sbatch (these undocumented options have no
effect in slurm v1.2, but are legitimate options in slurm v1.3).
* Changes in SLURM 1.2.24
=========================
-- In sched/wiki and sched/wiki2, support non-zero UPDATE_TIME specification
for GETNODES and GETJOBS commands.
-- Bug fix for sending accounting information multiple times for the same
   info. Patch from Hongjia Cao (NUDT).
-- BLUEGENE - try FILE pointer rotation logic to avoid core dump on
bridge log rotate
-- Spread out in time the EPILOG_COMPLETE messages from slurmd to slurmctld
   to avoid message congestion and retransmission.
* Changes in SLURM 1.2.23
=========================
-- Fix for libpmi to not export unneeded variables like xstr*
-- BLUEGENE - added per partition dynamic block creation
-- Fix infinite loop bug in sview when there were multiple partitions
-- Send message to srun command when a job is requeued due to node failure.
Note this will be overwritten in the output file unless JobFileAppend
is set in slurm.conf. In slurm version 1.3, srun's --open-mode=append
option will offer this control for each job.
-- Change a node's default TmpDisk from 1MB to 0MB and change job's default
disk space requirement from 1MB to 0MB.
-- In sched/wiki (Maui scheduler) specify a QOS (quality of service) by
specifying an account of the form "qos-name".
-- In select/linear, fix bug in scheduling required nodes that already have
a job running on them (req.load.patch from Chris Holmes, HP).
-- For use with Moab only: change timeout for srun/sbatch --get-user-env
option to 2 secs, don't get DISPLAY environment variables, but explicitly
set ENVIRONMENT=BATCH and HOSTNAME to the execution host of the batch script.
-- Add configuration parameter GetEnvTimeout for use with Moab. See
"man slurm.conf" for details.
-- Modify salloc and sbatch to accept both "--tasks" and "--ntasks" as
equivalent options for compatibility with srun.
-- If a partition's node list contains space separators, replace them with
commas for easier parsing.
-- BLUEGENE - fixed bug in geometry specs when creating a block.
-- Add support for Moab and Maui to start jobs with select/cons_res plugin
and jobs requiring more than one CPU per task.
* Changes in SLURM 1.2.22
=========================
-- In sched/wiki2, add support for MODIFYJOB option "MINSTARTTIME=<time>"
to modify a job's earliest start time.
-- In sbcast, fix bug with large files causing sbcast to die.
-- In sched/wiki2, add support for COMMENT= option in STARTJOB and CANCELJOB
commands.
-- Avoid printing negative job run time in squeue due to clock skew.
-- In sched/wiki and sched/wiki2, add support for wiki.conf option
HidePartitionJobs (see man pages for details).
-- Update to srun/sbatch --get-user-env option logic (needed by Moab).
-- In slurmctld (for Moab) added job->details->reserved_resources field
to report resources that were kept in reserve for job while it was
pending.
-- In sched/wiki (for Maui scheduler) report a pending job's node feature
requirements (from Miguel Roa, BSC).
-- Permit a user to change a pending job's TasksPerNode specification
using scontrol (from Miguel Roa, BSC).
-- Add support for node UP/DOWN event logging in the jobacct/gold plugin.
   WARNING: using the jobacct/gold plugin slows system startup; set the
   MessageTimeout variable in slurm.conf to around 20 or more.
-- Added check at start of slurmctld for /tmp/slurm_gold_first; if present
   and the gold plugin is in use, slurm will make a record of all nodes in a
   downed or drained state.
* Changes in SLURM 1.2.21
=========================
-- Fixed torque wrappers to look in the correct spot for the perl api
-- Do not treat user resetting his time limit to the current value as
an error.
-- Set correct executable names for Totalview when --multi-prog option
is used and more than one node is allocated to the job step.
-- When a batch job gets requeued, record in the accounting logs that the
   job was cancelled; the requeued job's submit time will be set to the time
   of its requeue so it looks like a different job.
-- Prevent communication problems if the slurmd/slurmstepd have a
different JobAcct plugin configured than slurmctld.
-- Adding Gold plugin for job accounting
-- In sched/wiki2, add support for MODIFYJOB option "JOBNAME=<name>"
to modify a job's name.
-- Add configuration check for sys/syslog.h and include it as needed.
-- Add --propagate option to sbatch for control over limit propagation.
-- Added Gold interface to the jobacct plugin. To configure in the config
file specify...
JobAcctType=jobacct/gold
JobAcctLogFile=CLUSTER_NAME:GOLD_AUTH_KEY_FILE:GOLDD_HOST:GOLDD_PORT7112
-- In slurmctld job record, set begin_time to time when all of a job's
dependencies are met.
* Changes in SLURM 1.2.20
=========================
-- In switch/federation, fix small memory leak affecting slurmd.
-- Add PMI_FANOUT_OFF_HOST environment variable to control how message
forwarding is done for PMI (MPICH2). See "man srun" for details.
-- From sbatch set SLURM_NTASKS_PER_NODE when --ntasks-per-node option is
specified.
-- BLUEGENE: Documented the prefix should always be lower case and the 3
digit suffix should be uppercase if any letters are used as digits.
-- In sched/wiki and sched/wiki2, add support for --cpus-per-task option.
From Miguel Ros, BSC.
-- In sched/wiki2, prevent invalid memory pointer (and likely seg fault)
for job associated with a partition that has since been deleted.
-- In sched/wiki2 plus select/cons_res, prevent invalid memory pointer
(and likely seg fault) when a job is requeued.
-- In sched/wiki, add support for job suspend, resume, and modify.
-- In sched/wiki, add support for processor allocation (not just node
   allocation) with layout control.
-- Prevent re-sending job termination RPC to a node that has already completed
the job. Only send it to specific nodes which have not reported completion.
-- Support larger environment variables 64K instead of BUFSIZ (8k on some
systems).
-- If a job is being requeued, job step create requests will print a
warning and repeatedly retry rather than aborting.
-- Add optional mode value to srun and sbatch --get-user-env option.
-- Print error message and retry job submit commands when MaxJobCount
is reached. From Don Albert, Bull.
-- Treat invalid begin time specification as a fatal error in sbatch and
srun. From Don Albert, Bull.
-- Validate begin time specification to avoid hours >24, minutes >59, etc.
* Changes in SLURM 1.2.19
=========================
*** NOTE IMPORTANT CHANGE IN RPM BUILD BELOW ****
-- slurm.spec file (used to build RPMs) was updated in order to support Mock, a
chroot build environment. See https://hosted.fedoraproject.org/projects/mock/
   for more information. The following RPMs are no longer built by default:
   aix-federation, auth_none, authd, bluegene, sgijob, and switch-elan. Change
   the RPMs built using the following options in ~/.rpmmacros: "%_with_authd 1",
"%_without_munge 1", etc. See the slurm.spec file for more details.
-- Print warning if non-privileged user requests negative "--nice" value on
job submission (srun, salloc, and sbatch commands).
-- In sched/wiki and sched/wiki2, add support for srun's --ntasks-per-node
option.
-- In select/bluegene with Groups defined for Images, fix possible memory
corruption. Other configurations are not affected.
-- BLUEGENE - Fix bug that prevented user specification of linux-image,
mloader-image, and ramdisk-image on job submission.
-- BLUEGENE - filter Groups specified for image not just by submitting
user's current group, but all groups the user has access to.
-- BLUEGENE - Add salloc options to specify images to be loaded (--blrts-image,
--linux-image, --mloader-image, and --ramdisk-image).
-- BLUEGENE - In bluegene.conf, permit Groups to be comma separated in addition
to colon separators previously supported.
-- sbatch will accept a batch script containing "#SLURM" options and advise
   that they be changed to "#SBATCH".
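   For example, a script header of the form (option values are placeholders):
      #!/bin/sh
      #SBATCH --ntasks=4
      #SBATCH --time=30
   is the preferred form; "#SLURM" lines will still be accepted along with
   advice to change them.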
-- If srun --output or --error specification contains a task number rather
than a file name, send stdout/err from specified task to srun's stdout/err
rather than to a file by the same name as the task's number.
-- For srun --multi-prog option, verify configuration file before attempting
to launch tasks, report clear explanation of any configuration file errors.
-- For sched/wiki2, add optional timeout option to srun's --get-user-env
parameter, change default timeout for "su - <user> env" from 3 to 8 seconds.
On timeout, attempt to load env from file at StateSaveLocation/env_cache/<user>.
The format of this file is the same as output of "env" command. If there
is no env cache file, then abort the request.
-- squeue modified for completing job to remove nodes that have already
completed the job before applying node filter logic.
-- squeue formatted output option added for job comment, "%q" (the obvious
choices for letters are already in use).
-- Added configure option --enable-load-env-no-login for use with Moab. If
set then the user job runs with the environment built without a login
("su <user> env" rather than "su - <user> env").
-- Fix output of "srun -o %C" (allocated CPU count) for running jobs. This was
broken in 1.2.18 for handling requeue of Moab jobs.
-- Added logic to mpiexec wrapper to read in the MPIEXEC_TIMEOUT var
-- Updated qstat wrapper to display information for partitions (-Q) option
-- NOTE: SLURM should now work directly with Globus using the PBS GRAM.
* Changes in SLURM 1.2.18
=========================
-- BLUEGENE - bug fix for smap stating passthroughs are used when they aren't
-- Fixed bug in sview to be able to edit partitions correctly
-- Fixed bug so that slurm.conf files where SlurmdPort isn't defined work
   correctly.
-- In sched/wiki2 and sched/wiki add support for batch job being requeued
in Slurm either when nodes fail or upon request.
-- In sched/wiki2 and sched/wiki with FastSchedule=2 configured and nodes
configured with more CPUs than actually exist, return a value of TASKS
equal to the number of configured CPUs that are allocated to a job rather
than the number of physical CPUs allocated.
-- For sched/wiki2, timeout "srun --get-user-env ..." command after 3 seconds
if unable to perform pseudo-login and get user environment variables.
-- Add contribs/time_login.c program to test how long pseudo-login takes
for specific users or all users. This can identify users for which Moab
job submissions are unable to set the proper environment variables.
-- Fix problem in parallel make of Slurm.
-- Fixed bug in consumable resources when CR_Core_Memory is enabled
-- Add delay in slurmctld for "scontrol shutdown" RPC to get propagated
to slurmd daemons.
* Changes in SLURM 1.2.17
=========================
-- In select/cons_res properly release resources allocated to job being
suspended (rmbreak.patch, from Chris Holmes, HP).
-- Fix AIX linking problem for PMI (mpich2) support.
-- Improve PMI logic for greater scalability (up to 16k tasks run).
-- Add srun support for SLURM_THREADS and PMI_FANOUT environment variables.
-- Fix support in squeue for output format with left justification of
reason (%r) and reason/node_list (%R) output.
-- Automatically requeue a batch job when a node allocated to it fails
or the prolog fails (unless --no-requeue or --no-kill option used).
-- In sched/wiki, enable use of wiki.conf parameter ExcludePartitions to
directly schedule selected partitions without Maui control.
-- In sched/backfill, if a job requires specific nodes, schedule other jobs
ahead of it rather than completely stopping backfill scheduling for that
partition.
-- BLUEGENE - corrected logic making block allocation work in a circular
fashion instead of linear.
* Changes in SLURM 1.2.16
=========================
-- Add --overcommit option to the salloc command.
-- Run task epilog from the job's working directory rather than the directory
   where the slurmd daemon was started.
-- Log errors running task prolog or task epilog to srun's output.
-- In sched/wiki2, fix bug processing condensed hostlist expressions.
-- Release contribs/mpich1.slurm.patch without GPL license.
-- Fix bug in mvapich plugin for read/write calls that return EAGAIN.
-- Don't start MVAPICH timeout logic until we know that srun is starting
an MVAPICH program.
-- Fix to srun only allocating number of nodes needed for requested task
count when combining allocation and step creation in srun.
-- Execute task-prolog within proctrack container to ensure that all
   child processes get terminated.
-- Fixed job accounting to work with sgi_job proctrack plugin.
* Changes in SLURM 1.2.15
=========================
-- In sched/wiki2, fix bug processing hostlist expressions where hosts
lack a numeric suffix.
-- Fix bug in srun. When user did not specify time limit, it defaulted to
INFINITE rather than partition's limit.
-- In select/cons_res with SelectTypeParameters=CR_Socket_Memory, fix bug in
memory allocation tracking, mem.patch from Chris Holmes, HP.
-- Add --overcommit option to the sbatch command.
* Changes in SLURM 1.2.14
=========================
-- Fix a couple of bugs in MPICH/MX support (from Asier Roa, BSC).
-- Fix perl api for AIX
-- Add wiki.conf parameter ExcludePartitions for selected partitions to
   be directly scheduled by Slurm without Moab control.
-- Optimize load leveling for shared nodes (alloc.patch, contributed
by Chris Holmes, HP).
-- Added PMI_TIME environment variable for user to control how PMI
communications are spread out in time. See "man srun" for details.
-- Added PMI timing information to srun debug mode to aid in tuning.
Use "srun -vv ..." to see the information.
-- Added checkpoint/ompi (OpenMPI) plugin (still under development).
-- Fix bug in load leveling logic added to v1.2.13 which can cause an
infinite loop and hang slurmctld when sharing nodes between jobs.
-- Added support for sbatch to read in #PBS options from a script
* Changes in SLURM 1.2.13
=========================
-- Add slurm.conf parameter JobFileAppend.
-- Fix for segv in "scontrol listpids" on nodes not in SLURM config.
-- Add support for SCANCEL_CTLD env var.
-- In mpi/mvapich plugin, add startup timeout logic. Time based upon
SLURM_MVAPICH_TIMEOUT (value in seconds).
-- Fixed pick_step_node logic to only pick the number of nodes requested