This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.

* Changes in SLURM 1.4.0-pre1
=============================
 -- Save/restore a job's task_distribution option on slurmctld restart.
    NOTE: SLURM must be cold-started on conversion from version 1.3.x.
 -- Remove task_mem from the job step credential (only job_mem is used now).
 -- Remove the --task-mem and --job-mem options from salloc, sbatch and srun
    (use --mem-per-cpu or --mem instead).
 -- Remove DefMemPerTask from slurm.conf (use DefMemPerCPU or DefMemPerNode
    instead).
 -- Modify the slurm_step_launch API call. Move the launch host from a
    function argument to an element in the slurm_step_launch_params_t data
    structure, which is used as a function argument.
 -- Add state_reason_string to the job state with optional details about why
    a job is pending.
 -- Make "scontrol show node" output match scontrol input for some fields
    ("Cores" changed to "CoresPerSocket", etc.).
 -- Add support for a new node state "FUTURE" in slurm.conf. These node
    records are created in SLURM tables for future use without a reboot of
    the SLURM daemons, but are not reported by any SLURM commands or APIs.

* Changes in SLURM 1.3.7
========================
 -- Add jobid/stepid to MESSAGE_TASK_EXIT to address a race condition when a
    job step is cancelled, another is started immediately (before the first
    one completely terminates) and ports are reused.
    NOTE: This change requires that SLURM be updated on all nodes of the
    cluster at the same time. There will be no impact upon currently running
    jobs (they will ignore the jobid/stepid at the end of the message).
 -- Added a Python module to process hostlists as used by SLURM. See
    contribs/python/hostlist. Supplied by Kent Engstrom, National
    Supercomputer Centre, Sweden.
 -- Report task termination due to signal (restored functionality present in
    SLURM v1.2).
 -- Remove the sbatch test for script size being no larger than 64KB. The
    current limit is 4GB.
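The hostlist expressions mentioned above (e.g. "node[1-3,5]") compress long
node lists into a compact range syntax. The following is a minimal sketch of
the expansion idea only; it is not the API of the contribs/python/hostlist
module, and the function name and single-bracket-group limitation are this
sketch's own assumptions:

```python
import re

def expand_hostlist(expr):
    """Expand a simple SLURM-style hostlist expression such as
    "node[1-3,5]" into an explicit list of host names.
    Handles one bracketed range group; the real module is more general."""
    m = re.match(r'^([^\[]+)\[([^\]]+)\]$', expr)
    if not m:
        return [expr]  # plain host name, nothing to expand
    prefix, ranges = m.groups()
    hosts = []
    for part in ranges.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            width = len(lo)  # preserve zero padding, e.g. "node[01-03]"
            hosts.extend('%s%0*d' % (prefix, width, i)
                         for i in range(int(lo), int(hi) + 1))
        else:
            hosts.append(prefix + part)
    return hosts
```

For example, expand_hostlist("node[1-3,5]") yields the four names node1,
node2, node3 and node5.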
 -- Disable FastSchedule=0 use with SchedulerType=sched/gang. Node
    configuration must now be specified in slurm.conf for gang scheduling.
 -- For sched/wiki and sched/wiki2 (Maui or Moab scheduler), disable the
    ability of a non-root user to change a job's comment field (used by
    Maui/Moab for storing scheduler state information).
 -- For sched/wiki (Maui), add a pending job's future start time to the
    state info reported to Maui.
 -- Improve reliability of job requeue logic on node failure.
 -- Add logic to ping non-responsive nodes even if SlurmdTimeout=0. This
    permits a node to be returned to use when it starts responding rather
    than remaining in a non-usable state.
 -- Honor HealthCheckInterval values that are smaller than SlurmdTimeout.
 -- For non-responding nodes, log them all on a single line with a hostlist
    expression rather than one line per node. The frequency of log messages
    depends upon the SlurmctldDebug value, from 300 seconds at
    SlurmctldDebug<=3 to 1 second at SlurmctldDebug>=5.
 -- If a DOWN node is resumed, set its state to IDLE & NOT_RESPONDING and
    ping the node immediately to clear the NOT_RESPONDING flag.
 -- Log that a job's time limit is reached, but don't send SIGXCPU.

* Changes in SLURM 1.3.6
========================
 -- Add a new function to get information for a single job rather than
    always getting information for all jobs. Improved performance of some
    commands. NOTE: This new RPC means that the slurmctld daemons should be
    updated before or at the same time as the compute nodes in order to
    process it.
 -- In salloc, sbatch, and srun replace the --task-mem option with
    --mem-per-cpu (--task-mem will continue to be accepted for now, but is
    not documented). Replace DefMemPerTask and MaxMemPerTask with
    DefMemPerCPU, DefMemPerNode, MaxMemPerCPU and MaxMemPerNode in
    slurm.conf (the old options are still accepted for now, but are mapped
    to the "PerCPU" parameters and not documented).
    Allocate a job's memory at the same time that processors are allocated,
    based upon the --mem or --mem-per-cpu option, rather than when job steps
    are initiated.
 -- Altered QOS in accounting to be a list of admin-defined states; an
    account or user can now have multiple QOS's. They need to be defined
    using 'sacctmgr add qos'. They are no longer an enum. If none are
    defined, "Normal" will be the QOS for everything. Right now this is only
    for use with Moab and does nothing outside of that.
 -- Added spank_get_item support for field S_STEP_CPUS_PER_TASK.
 -- Make corrections in spank_get_item for field S_JOB_NCPUS; it previously
    reported the task count rather than the CPU count.
 -- Convert the configuration parameter PrivateData from an on/off flag to
    separate flags for job, partition, and node data. See "man slurm.conf"
    for details.
 -- Fix bug that failed to load the DisableRootJobs configuration parameter.
 -- Altered sacctmgr to always return a non-zero exit code on error and send
    error messages to stderr.

* Changes in SLURM 1.3.5
========================
 -- Fix processing of the auth/munge authentication key for messages
    originating in slurmdbd and sent to slurmctld.
 -- If srun is allocating resources (not within sbatch or salloc) and
    MaxWait is configured to a non-zero value, then wait indefinitely for
    the resource allocation rather than aborting the request after MaxWait
    time.
 -- For Moab only: add logic to reap defunct "su" processes that are spawned
    by slurmd to load a user's environment variables.
 -- Added more support for "dumping" account information to a flat file that
    can be read in again, to protect data in case something bad happens to
    the database.
 -- Sacct will now report account names for job steps.
 -- For AIX: Remove the MP_POERESTART_ENV environment variable, disabling
    the poerestart command. Users must explicitly set MP_POERESTART_ENV
    before executing poerestart.
 -- Put back notification that a job has been allocated resources when it
    was pending.
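The memory-limit migration described above can be illustrated with a
slurm.conf fragment; the numeric values below are illustrative only, not
recommendations:

```
# Old style (still accepted, silently mapped to the "PerCPU" parameter):
#DefMemPerTask=512
# New style: default and maximum real memory per allocated CPU, in MB
DefMemPerCPU=512
MaxMemPerCPU=2048
```

With such limits in place, memory is reserved when the processors are
allocated (e.g. via "srun --mem-per-cpu=1024 ..."), not when individual job
steps start.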
* Changes in SLURM 1.3.4
========================
 -- Some updates to man page formatting from Gennaro Oliva, ICAR.
 -- Smarter loading of plugins (doesn't stat every file in the plugin
    directory).
 -- In sched/backfill, avoid trying to schedule jobs on DOWN or DRAINED
    nodes.
 -- Forward exit_code from step completion to slurmdbd.
 -- Add retry logic to the socket connect() call from clients, which can
    fail when the slurmctld is under heavy load.
 -- Fixed bug so that adding associations works correctly.
 -- Added support for associations for user root.
 -- For Moab, the sbatch --get-user-env option is processed by the slurmd
    daemon rather than the sbatch command itself to permit faster response
    for Moab.
 -- IMPORTANT FIX: This only affects use of select/cons_res when allocating
    resources by core or socket, not by CPU (the default for
    SelectTypeParameter). We were not saving a pending job's task
    distribution, so after restarting slurmctld, select/cons_res was
    over-allocating resources based upon an invalid task distribution value.
    Since we can't save the value without changing the state save file
    format, we'll just set it to the default value for now and save it in
    SLURM v1.4. This may result in a slight variation in how sockets and
    cores are allocated to jobs, but at least resources will not be
    over-allocated.
 -- Correct logic in accumulating resources by node weight when more than
    one job can run per node (select/cons_res or partition Shared=yes|force).
 -- slurm.spec file updated to avoid creating empty RPMs. The RPM now *must*
    be built with a correct specification of which packages to build or not
    build. See the top of the slurm.spec file for information about how to
    control the package building specification.
 -- Set SLURM_JOB_CPUS_PER_NODE for jobs allocated using the srun command.
    It was already set for the salloc and sbatch commands.
 -- Fix to handle suspended jobs that were cancelled in accounting.
 -- BLUEGENE - fix to only include bps given in a name from the
    bluegene.conf file.
 -- For select/cons_res: Fix record-keeping for core allocations when more
    than one partition uses a node or there is more than one socket per node.
 -- In output for "scontrol show job", change the "StartTime" header to
    "EligibleTime" for pending jobs to accurately describe what is reported.
 -- Add more slurmdbd.conf parameters: ArchiveScript, ArchiveAge, JobPurge,
    and StepPurge (not fully implemented yet).
 -- Add slurm.conf parameter EnforcePartLimits to reject jobs which exceed a
    partition's size and/or time limits rather than leaving them queued for
    a later change in the partition's limits. NOTE: Not reported by
    "scontrol show config" to avoid changing RPCs. It will be reported in
    SLURM version 1.4.
 -- Added the idea of a coordinator to accounting. A coordinator can add
    associations between existing users and the account, or any sub-account,
    they are coordinator of. They can also add/remove other coordinators to
    those accounts.
 -- Add support for Hostname and NodeHostname in slurm.conf being fully
    qualified domain names (by Vijay Ramasubramanian, University of
    Maryland). For more information see "man slurm.conf".

* Changes in SLURM 1.3.3
========================
 -- Add the mpi_openmpi plugin to the main SLURM RPM.
 -- Prevent an invalid memory reference when using srun's --cpu_bind=cores
    option (slurm-1.3.2-1.cea1.patch from Matthieu Hautreux, CEA).
 -- Task affinity plugin modified to support a particular cpu bind type:
    cores, sockets, threads, or none. Accomplished by setting the
    environment variable SLURM_ENFORCE_CPU_TYPE (slurm-1.3.2-1.cea2.patch
    from Matthieu Hautreux, CEA).
 -- For BlueGene only, log "Prolog failure" once per job, not once per node.
 -- Reopen the slurmctld log file after a reconfigure or SIGHUP is received.
 -- In TaskPlugin=task/affinity, fix a possible infinite loop in slurmd.
 -- Accounting rollup works for the mysql plugin. Rollup is automatic when
    using slurmdbd.
 -- Copied job stat logic out of sacct into sstat; in the future
    "sacct -stat" will be deprecated.
 -- Correct sbatch processing of the --nice option with negative values.
 -- Add squeue formatted print option %Q to print a job's integer priority.
 -- In sched/backfill, fix a bug that was changing a pending job's shared
    value to zero (possibly changing a pending job's resource requirements
    from a processor on some node to the full node).

* Changes in SLURM 1.3.2
========================
 -- Get the --ntasks-per-node option working for the sbatch command.
 -- BLUEGENE: Added logic to give back a best block on overlapped mode in
    test_only mode.
 -- BLUEGENE: Updated debug info and man pages for better help with the
    numpsets option and to fail correctly on a bad image request when
    building blocks.
 -- In sched/wiki and sched/wiki2, properly support SLURM license
    consumption (job state reported as "Hold" when required licenses are not
    available).
 -- In the sched/wiki2 JobWillRun command, don't return an error code if the
    job(s) can not be started at that time. Just return an error message
    (from Doug Wightman, CRI).
 -- Fix bug if sched/wiki or sched/wiki2 is configured and no job comment is
    set.
 -- scontrol modified to report a partition's "DisableRootJobs" value.
 -- Fix bug in setting the host address for PMI communications (mpich2
    only).
 -- Fix for memory size accounting on some architectures.
 -- In sbatch and salloc, change --dependency's one letter option from "-d"
    to "-P" (continue to accept "-d", but change the documentation).
 -- Only check that task_epilog and task_prolog are runnable by the job's
    user, not as root.
 -- In sbatch, if specifying an alternate directory (--workdir/-D), then
    input, output and error files are in that directory rather than the
    directory from which the command is executed.
 -- NOTE: Fully operational with Moab version 5.2.3+. Change SUBMITCMD in
    moab.cfg to be the location of sbatch rather than srun. Also set
    HostFormat=2 in SLURM's wiki.conf for improved performance.
 -- NOTE: We needed to change an RPC from version 1.3.1.
    You must upgrade all nodes in a cluster from v1.3.1 to v1.3.2 at the
    same time.
 -- The Postgres plugin will work for job accounting, but not for
    association management yet.
 -- For the srun/sbatch --get-user-env option (Moab use only), look for the
    "env" command in both /bin and /usr/sbin (for SuSE Linux).
 -- Fix bug in processing job feature requests with node counts (could fail
    to schedule a job if some nodes have no associated features).
 -- Added nodecnt and gid to jobcomp/script.
 -- Ensure that nodes selected in the "srun --will-run" command, or the
    equivalent in sched/wiki2, are in the job's partition.
 -- BLUEGENE - changed partition Min|MaxNodes to represent c-node counts
    instead of base partitions.
 -- In sched/gang only, prevent a possible invalid memory reference when
    slurmctld is reconfigured, e.g. "scontrol reconfig".
 -- In select/linear only, prevent an invalid memory reference in a log
    message when nodes are added to slurm.conf and then "scontrol reconfig"
    is executed.

* Changes in SLURM 1.3.1
========================
 -- Correct logic for processing a batch job's memory limit enforcement.
 -- Fix bug that was setting a job's requeue value on any update of the job
    using the "scontrol update" command. The invalid value of an updated job
    prevented its recovery when slurmctld restarted.
 -- Add support for cluster-wide consumable resources. See the "Licenses"
    parameter in the slurm.conf man page and the "--licenses" option in the
    salloc, sbatch and srun man pages.
 -- Major changes in select/cons_res to support FastSchedule=2 with more
    resources configured than actually exist (useful for testing purposes).
 -- Modify the srun --test-only response to include the expected initiation
    time for a job as well as the nodes to be allocated and processor count
    (for use by Moab).
 -- Correct sched/backfill to properly honor job dependencies.
 -- Correct select/cons_res logic to allocate CPUs properly if there is more
    than one thread per core (previously failed to allocate all cores).
 -- Correct select/linear logic in the shared job count (was off by 1).
 -- Add support for job preemption based upon partition priority (in
    sched/gang, preempt.patch from Chris Holmes, HP).
 -- Added much better logic for mysql accounting.
 -- Finished all basic functionality for sacctmgr.
 -- Added load file logic to sacctmgr for setting up a cluster in one step.
 -- NOTE: We needed to change an RPC from version 1.3.0. You must upgrade
    all nodes in a cluster from v1.3.0 to v1.3.1 at the same time.
 -- NOTE: Work is currently underway to improve placement of jobs for gang
    scheduling and preemption.
 -- NOTE: Work is underway to provide additional tools for reporting
    accounting information.

* Changes in SLURM 1.3.0
========================
 -- In sched/wiki2, add processor count to the JOBWILLRUN response.
 -- Add an event trigger for a node entering the DRAINED state.
 -- Build properly without OpenSSL installed (OpenSSL is recommended, but
    not required).
 -- Added slurmdbd and modified the accounting_storage plugin to talk to it,
    allowing multiple SLURM systems to securely store and gather information
    not only about jobs, but about the system as well. See the accounting
    web page for more information.

* Changes in SLURM 1.3.0-pre11
==============================
 -- Restructure the sbcast RPC to take advantage of larger buffers available
    in SLURM v1.3 RPCs.
 -- Fix several memory leaks.
 -- In scontrol, show a job's Requeue value; permit changes to the Requeue
    and Comment values.
 -- In the slurmctld job record, add a QOS (quality of service) value for
    accounting purposes with Maui and Moab.
 -- Log to a job's stderr when it is being cancelled explicitly or upon
    reaching its time limit.
 -- Only permit a job's account to be changed while that job is PENDING.
 -- Fix race condition in job suspend/resume (slurmd.sus_res.patch from HP).

* Changes in SLURM 1.3.0-pre10
==============================
 -- Add support for node-specific "arch" (architecture) and "os" (operating
    system) fields.
    These fields are set based upon values reported by the slurmd daemon on
    each compute node using the SLURM_ARCH and SLURM_OS environment
    variables (if set; the uname function otherwise) and are intended to
    support real-time changes in the operating system. These values are
    reported by "scontrol show node" plus the sched/wiki and sched/wiki2
    plugins for Maui and Moab respectively.
 -- In sched/wiki and sched/wiki2: add HostFormat and HidePartitionJobs to
    the "scontrol show config" SCHEDULER_CONF output.
 -- In sched/wiki2: accept a hostname expression as input for the GETNODES
    command.
 -- Add the JobRequeue configuration parameter and the --requeue option to
    the sbatch command.
 -- Add the HealthCheckInterval and HealthCheckProgram configuration
    parameters.
 -- Add the SlurmDbdAddr, SlurmDbdAuthInfo and SlurmDbdPort configuration
    parameters.
 -- Modify select/linear to achieve better load leveling with the gang
    scheduler.
 -- Develop the sched/gang plugin to support select/linear and
    select/cons_res. If sched/gang is enabled and Shared=FORCE is configured
    for a partition, this plugin will gang-schedule or "timeslice" jobs that
    share common resources within the partition. Note that resources that
    are shared across partitions are not gang-scheduled.
 -- Add the EpilogMsgTime configuration parameter. See "man slurm.conf" for
    details.
 -- Increase the default MaxJobCount configuration parameter from 2000 to
    5000.
 -- Move all database common files from src/common to a new library in
    src/database.
 -- Move sacct to src/accounting and add sacctmgr for scontrol-like
    operations on the accounting infrastructure.
 -- Basic functions of sacctmgr are in place to allow administration of
    accounting.
 -- Moved the clusteracct_storage plugin into the accounting_storage plugin;
    jobacct_storage is still its own plugin for now.
 -- Added a template for a SLURM PHP extension.
 -- Add infrastructure to support allocation of cluster-wide licenses to
    jobs. Full support will be added some time after version 1.3.0 is
    released.
 -- In sched/wiki2 with select/bluegene, add support for the WILLRUN command
    to accept multiple jobs with start time specifications.

* Changes in SLURM 1.3.0-pre9
=============================
 -- Add spank support to sbatch. Note that spank_local_user() will be called
    with step_layout=NULL and gid=SLURM_BATCH_SCRIPT, and spank_fini() will
    be called immediately afterwards.
 -- Made configure use mysql_config to find the location of the mysql
    database install. Removed bluegene-specific information from the general
    database tables.
 -- Re-write sched/backfill to utilize the new will-run logic in the select
    plugins. It now supports select/cons_res and all job options (required
    nodes, excluded nodes, contiguous, etc.).
 -- Modify scheduling logic to better support overlapping partitions.
 -- Add the --task-mem option and remove the --job-mem option from the srun,
    salloc, and sbatch commands. Enforce the step memory limit if specified
    and there is no job memory limit specified (--mem). Also see
    DefMemPerTask and MaxMemPerTask in "man slurm.conf". Enforcement is
    dependent upon job accounting being enabled with a non-zero value for
    JobAcctGatherFrequency.
 -- Change the default node tmp_disk size to zero (for diskless nodes).

* Changes in SLURM 1.3.0-pre8
=============================
 -- Modify how strings are packed in the RPCs. The maximum string size
    increased from 64KB (16-bit size field) to 4GB (32-bit size field).
 -- Fix bug that prevented the time value "INFINITE" from being processed.
 -- Added a new srun/sbatch option "--open-mode" to control how output/error
    files are opened ("t" for truncate, "a" for append).
 -- Added the checkpoint/xlch plugin for use with XLCH (Hongjia Cao, NUDT).
 -- Added the srun option --checkpoint-path for use with XLCH (Hongjia Cao,
    NUDT).
 -- Added a new srun/salloc/sbatch option "--acctg-freq" for user control
    over the accounting data collection polling interval.
 -- In sched/wiki2, add support for hostlist expression use in the GETNODES
    command with HostFormat=2 in the wiki.conf file.
 -- Added a new scontrol option "setdebug" that can change the slurmctld
    daemon's debug level at any time (Hongjia Cao, NUDT).
 -- Track total suspend time for jobs and steps for accounting purposes.
 -- Add version information to the partition state file.
 -- Added 'will-run' functionality to all of the select plugins (bluegene,
    linear, and cons_res) to return the node list and the time a job can
    start based on other running jobs.
 -- Major restructuring of node selection logic. select/linear now supports
    the partition max_share parameter and tries to match like-sized jobs on
    the same nodes to improve gang scheduling performance. It also supports
    treating memory as a consumable resource for job preemption and gang
    scheduling if SelectTypeParameter=CR_Memory in slurm.conf.
 -- BLUEGENE: Reorganized the bluegene plugin for maintainability's sake.
 -- Major restructuring of data structures in select/cons_res.
 -- Support job, node and partition names of arbitrary size.
 -- Fix bug that caused slurmd to hang when using select/linear with
    task/affinity.

* Changes in SLURM 1.3.0-pre7
=============================
 -- Fix a bug in the processing of srun's --exclusive option for a job step.

* Changes in SLURM 1.3.0-pre6
=============================
 -- Add support for a configurable number of jobs to share resources using
    the partition Shared parameter in slurm.conf (e.g. "Shared=FORCE:3" for
    two jobs to share the resources). From Chris Holmes, HP.
 -- Made salloc use the API instead of local code for message handling.

* Changes in SLURM 1.3.0-pre5
=============================
 -- Add the select_g_reconfigure() function to note changes in slurmctld
    configuration that can impact node scheduling.
 -- scontrol sets/gets a partition's MaxTime and a job's TimeLimit in
    minutes, plus new formats: min:sec, hr:min:sec, days-hr:min:sec,
    days-hr, etc.
 -- scontrol "notify" command added to send a message to the stdout of srun
    for a specified job id.
 -- For BlueGene, make the alpha part of the node location specification
    case-insensitive.
 -- Report scheduler-plugin specific configuration information with the
    "scontrol show configuration" command on the SCHEDULER_CONF line. This
    information is not found in the "slurm.conf" file, but in a scheduler
    plugin specific configuration file (e.g. "wiki.conf").
 -- sview partition information now includes the partition priority.
 -- Expand the job dependency specification to support concurrent execution,
    testing of job exit status, and multiple job IDs.

* Changes in SLURM 1.3.0-pre4
=============================
 -- Job step launch in srun is now done from the SLURM APIs; all further
    modifications to job launch should be done there.
 -- Add new partition configuration parameter Priority. Add a job count to
    the Shared parameter.
 -- Add new configuration parameters DefMemPerTask, MaxMemPerTask, and
    SchedulerTimeSlice.
 -- In sched/wiki2, return REJMESSAGE with details on why a job was requeued
    (e.g. what node failed).

* Changes in SLURM 1.3.0-pre3
=============================
 -- Remove the slaunch command.
 -- Added the srun option "--checkpoint=time" for a job step to
    automatically be checkpointed on a periodic basis.
 -- Change behavior of "scancel -s KILL <jobid>" to send SIGKILL to all job
    steps rather than cancelling the job. This now matches the behavior of
    all other signals. "scancel <jobid>" still cancels the job and all
    steps.
 -- Add support for new job step options --exclusive and --immediate. Permit
    job steps to be queued when resources are not available within an
    existing job allocation to dedicate the resources to the job step.
    Useful for executing simultaneous job steps. Provides resource
    management both at the level of jobs and job steps.
 -- Add support for a feature count in job constraints, for example
    "srun --nodes=16 --constraint=graphics*4 ...". Based upon work by Kumar
    Krishna (HP, India).
 -- Add multi-core options to the salloc and sbatch commands (sbatch.patch
    and cleanup.patch from Chris Holmes, HP).
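The feature-count constraint syntax above ("graphics*4") attaches a node
count to a feature name. The following is only a sketch of parsing that
"name*count" form; the function name is this sketch's own, and it is not
SLURM's internal parser:

```python
def parse_feature(token):
    """Parse one feature token of the form "name" or "name*count".
    Returns (name, count); count is None when no count is given,
    in which case the feature applies to all allocated nodes."""
    if '*' in token:
        name, count = token.rsplit('*', 1)
        return name, int(count)
    return token, None
```

So "graphics*4" parses to ("graphics", 4): at least four of the job's nodes
must supply the "graphics" feature.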
 -- In select/cons_res, properly release resources allocated to a job being
    suspended (rmbreak.patch from Chris Holmes, HP).
 -- Removed the database and jobacct plugins; replaced them with
    jobacct_storage and jobacct_gather for easier hooks for further
    expansion of the jobacct plugin.

* Changes in SLURM 1.3.0-pre2
=============================
 -- Added new srun option --pty to start the job with a pseudo terminal
    attached to task 0 (all other tasks have I/O discarded).
 -- Disable user specifying a jobid when sched/wiki2 is configured (needed
    for Moab releases until early 2007).
 -- Report command, args and working directory for batch jobs with "scontrol
    show job".

* Changes in SLURM 1.3.0-pre1
=============================
 -- !!! SRUN CHANGES !!! The srun options -A/--allocate, -b/--batch, and
    -a/--attach have been removed! That functionality is now available in
    the separate commands salloc, sbatch, and sattach, respectively.
 -- Add new node state FAILING plus a trigger for when a node enters that
    state.
 -- Add new configuration parameter "PrivateData". This can be used to
    prevent a user from seeing jobs or job steps belonging to other users.
 -- Added configuration parameters for node power save mode: ResumeProgram,
    ResumeRate, SuspendExcNodes, SuspendExcParts, SuspendProgram and
    SuspendRate.
 -- Slurmctld maintains the IP address (rather than the hostname) for srun
    communications. This fixes some possible network routing issues.
 -- Added a global database plugin. Job accounting and job completion are
    the first to use it. Follow the documentation to add more to the plugin.
 -- Removed the no-longer-needed jobacct/common/common_slurmctld.c since it
    is replaced by the database plugin.
 -- Added new configuration parameter CryptoType. Moved existing digital
    signature logic into a new plugin: crypto/openssl. Added new support for
    crypto/munge (available under the GPL license).

* Changes in SLURM 1.2.34
=========================
 -- Permit the cancellation of a job that is in the process of being
    requeued.
 -- Ignore the show_flag when getting job, step, node or partition
    information for user root.
 -- Convert some functions to thread-safe versions: getpwnam, getpwuid,
    getgrnam, and getgrgid to similar functions with the "_r" suffix. While
    no failures have been observed, a race condition would in the worst case
    permit a user access to a partition not normally allowed due to the
    AllowGroups specification, or the wrong user to be identified in an
    accounting record. The job would NOT be run as the wrong user.

* Changes in SLURM 1.2.33
=========================
 -- Cancelled or failed jobs will now report their job and step id on exit.
 -- Add SPANK items available to get: SLURM_VERSION, SLURM_VERSION_MAJOR,
    SLURM_VERSION_MINOR and SLURM_VERSION_MICRO.
 -- Fixed handling of SIGPIPE in srun: abort the job.
 -- Fix bug introduced to the MVAPICH plugin preventing use of the TotalView
    debugger.
 -- Modify slurmctld to get the srun/salloc network address based upon the
    incoming message rather than the hostname set by the user command
    (backport of logic in SLURM v1.3).

* Changes in SLURM 1.2.32
=========================
 -- LSF only: Enable scancel of a job in a RootOnly partition by the job's
    owner.
 -- Add support for the sbatch --distribution and --network options.
 -- Correct a pending job's wait reason to "Priority" rather than
    "Resources" if required resources are being held in reserve for a higher
    priority job.
 -- In sched/wiki2 (Moab), report a node's state as "Drained" rather than
    "Draining" if it has no allocated work (an undocumented Moab wiki
    option, see CRI ticket #2394).
 -- Log to a job's output when it is cancelled or reaches its time limit
    (ported from existing code in SLURM v1.3).
 -- Add support in the salloc and sbatch commands for the --network option.
 -- Add support for user environment variables that include '\n' (e.g. bash
    functions).
 -- Partial rewrite of the mpi/mvapich plugin for improved scalability.
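The environment-variable change above matters because a newline-delimited
environment dump is ambiguous once a value (such as an exported bash
function) itself contains newlines; a null-separated dump (e.g. from GNU
"env -0") is not. A small sketch of such parsing, with the function name
being this sketch's own assumption:

```python
def parse_env_null(blob):
    """Parse a null-separated environment dump (bytes) into a dict.
    Unlike newline-separated output, the null separator is unambiguous
    when values contain embedded newlines, e.g. bash functions."""
    env = {}
    for entry in blob.split(b'\0'):
        if not entry:
            continue  # skip trailing empty chunk
        key, _, value = entry.partition(b'=')
        env[key.decode()] = value.decode()
    return env
```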
* Changes in SLURM 1.2.31
=========================
 -- For Moab only: If GetEnvTimeout=0 in slurm.conf, then do not run "su" to
    get the user's environment; only use the cache file.
 -- For sched/wiki2 (Moab), treat the lack of a wiki.conf file or the lack
    of a configured AuthKey as a fatal error (it lacks effective security).
 -- For sched/wiki and sched/wiki2 (Maui or Moab), report a node's state as
    Busy rather than Running when allocated if SelectType=select/linear.
    Moab was trying to schedule jobs on nodes that were already allocated to
    jobs that were hidden from it via the HidePartitionJobs parameter in
    SLURM's wiki.conf.
 -- In select/cons_res, improve the resource selection when a job has
    specified a processor count along with a maximum node count.
 -- For an srun command with the --ntasks-per-node option and *no* --ntasks
    count, spawn a task count equal to the number of nodes selected
    multiplied by the --ntasks-per-node value.
 -- In jobcomp/script: Set TZ if it is set in slurmctld's environment.
 -- In srun with the --verbose option, properly format CPU allocation
    information logged for clusters with 1000+ nodes and 10+ CPUs per node.
 -- Process a job's --mail-type=end option on any type of job termination,
    not just normal completion (e.g. all failure modes too).

* Changes in SLURM 1.2.30
=========================
 -- Fix for gold to not print out 720 error messages since they are
    potentially harmless.
 -- In sched/wiki2 (Moab), permit changes to a pending job's required
    features: CMD=CHANGEJOB ARG=<jobid> RFEATURES=<features>
 -- Fix to not abort when node selection doesn't load; issue a fatal error
    instead.
 -- In sched/wiki and sched/wiki2, DO NOT report a job's state as "Hold" if
    its dependencies have not been satisfied. This reverses a change made in
    SLURM version 1.2.29 (which was requested by Cluster Resources, but
    places jobs in a HELD state indefinitely).
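The wiki.conf behavior described above (a configured AuthKey now being
mandatory, and jobs in some partitions hidden from Maui/Moab) can be
sketched with a wiki.conf fragment; all values below are illustrative
assumptions, not a tested configuration:

```
# wiki.conf - illustrative values only
AuthKey=123456789
# Jobs in these partitions are hidden from the external scheduler;
# nodes allocated to such jobs are now reported as Busy rather than
# Running so that Moab does not try to schedule onto them.
HidePartitionJobs=login,debug
```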
* Changes in SLURM 1.2.29
=========================
 -- Modified the global configuration option "DisableRootJobs" from a number
    (0 or 1) to a boolean (YES or NO) to match the partition parameter.
 -- Set "DisableRootJobs" for a partition to match the global parameter's
    value for newly created partitions.
 -- In sched/wiki and sched/wiki2, report a node's updated features if
    changed after startup using the "scontrol update ..." command.
 -- In sched/wiki and sched/wiki2, report a job's state as "Hold" if its
    dependencies have not been satisfied.
 -- In sched/wiki and sched/wiki2, do not process incoming requests until
    the slurm configuration is completely loaded.
 -- In sched/wiki and sched/wiki2, do not report a job's node count after it
    has completed (slurm decrements the allocated node count when the nodes
    transition from completing to idle state).
 -- If a job prolog or epilog fails, log the program's exit code.
 -- In jobacct/gold, map job names containing any non-alphanumeric
    characters to '_' to avoid MySQL parsing problems.
 -- In jobacct/linux, correct parsing if a command name contains spaces.
 -- In sched/wiki and sched/wiki2, make the job info TASK count reflect the
    actual task allocation (not requested tasks) even after the job
    terminates. Useful for accounting purposes only.

* Changes in SLURM 1.2.28
=========================
 -- Added configuration option "DisableRootJobs" for the parameter
    "PartitionName". See "man slurm.conf" for details.
 -- Fix for faking a large system to correctly handle node_id in the task
    affinity plugin for ia64 systems.

* Changes in SLURM 1.2.27
=========================
 -- Record job eligible time in the accounting database (for jobacct/gold
    only).
 -- Prevent user root from executing a job step within a job allocation
    belonging to another user.
 -- Fixed limiting issue for strings larger than 4096 bytes in xstrfmtcat.
 -- Fix bug in how SLURM reports job state to Maui/Moab when a job is
    requeued due to a node failure, but we can't terminate the job's spawned
    processes.
    The job was being reported as PENDING when it was really still
    COMPLETING.
 -- Added a patch from Jerry Smith for "qstat -a" output.
 -- Fixed the torque wrappers to look at the correct perl path for Slurm.pm.
 -- Enhance job requeue on node failure to be more robust.
 -- Added configuration parameter "DisableRootJobs". See "man slurm.conf"
    for details.
 -- Fixed issue with account = NULL in the Gold job accounting plugin.

* Changes in SLURM 1.2.26
=========================
 -- Correct the number of sockets/cores/threads reported by slurmd (from Par
    Andersson, National Supercomputer Centre, Sweden).
 -- Update libpmi linking so that libslurm is not required for PMI use (from
    Steven McDougal, SiCortex).
 -- In srun and sbatch, do not check the PATH environment variable if an
    absolute pathname of the program is specified (previously reported an
    error if no PATH).
 -- Correct output of "sinfo -o %C" (CPU counts by node state).

* Changes in SLURM 1.2.25
=========================
 -- Bug fix for setting the exit code in accounting for the batch script.
 -- Add salloc option --no-shell (for LSF).
 -- Added new options for sacct output.
 -- mvapich: Ensure MPIRUN_ID is unique for all job steps within a job.
    (Fixes crashes when running multiple job steps within a job on one
    node.)
 -- Prevent "scontrol show job" from failing with a buffer overflow when a
    job has a very long Comment field.
 -- Make certain that a job step is purged when a job has been completed.
    Previous versions could have the job step persist if an allocated node
    went DOWN and the slurmctld restarted.
 -- Fix bug in sbcast that can cause communication problems for large files.
 -- Add sbcast option -t/--timeout and the SBCAST_TIMEOUT environment
    variable to control the message timeout.
 -- Add a threaded agent to manage a queue of Gold update requests for
    performance reasons.
 -- Add salloc options --chdir and --get-user-env (for Moab).
 -- Modify "scontrol update" to support job comment changes.
 -- Do not clear a DRAINED node's reason field when slurmctld restarts.
 -- Do not cancel a pending job if Moab or Maui try to start it on unusable
    nodes. Leave the job queued.
 -- Add the --requeue option to srun and sbatch (these undocumented options
    have no effect in SLURM v1.2, but are legitimate options in SLURM v1.3).

* Changes in SLURM 1.2.24
=========================
 -- In sched/wiki and sched/wiki2, support a non-zero UPDATE_TIME
    specification for the GETNODES and GETJOBS commands.
 -- Bug fix for sending accounting information multiple times for the same
    info; patch from Hongjia Cao (NUDT).
 -- BLUEGENE - try FILE pointer rotation logic to avoid a core dump on
    bridge log rotation.
 -- Spread out in time the EPILOG_COMPLETE messages from slurmd to slurmctld
    to avoid message congestion and retransmission.

* Changes in SLURM 1.2.23
=========================
 -- Fix for libpmi to not export unneeded variables like xstr*.
 -- BLUEGENE - added per-partition dynamic block creation.
 -- Fix infinite loop bug in sview when there were multiple partitions.
 -- Send a message to the srun command when a job is requeued due to node
    failure. Note this will be overwritten in the output file unless
    JobFileAppend is set in slurm.conf. In SLURM version 1.3, srun's
    --open-mode=append option will offer this control for each job.
 -- Change a node's default TmpDisk from 1MB to 0MB and change a job's
    default disk space requirement from 1MB to 0MB.
 -- In sched/wiki (Maui scheduler), specify a QOS (quality of service) by
    specifying an account of the form "qos-name".
 -- In select/linear, fix bug in scheduling required nodes that already have
    a job running on them (req.load.patch from Chris Holmes, HP).
 -- For use with Moab only: change the timeout for the srun/sbatch
    --get-user-env option to 2 seconds, don't get DISPLAY environment
    variables, but explicitly set ENVIRONMENT=BATCH and HOSTNAME to the
    execution host of the batch script.
 -- Add configuration parameter GetEnvTimeout for use with Moab. See "man
    slurm.conf" for details.
 -- Modify salloc and sbatch to accept both "--tasks" and "--ntasks" as
    equivalent options for compatibility with srun.
 -- If a partition's node list contains space separators, replace them with
    commas for easier parsing.
 -- BLUEGENE - fixed bug in geometry specs when creating a block.
 -- Add support for Moab and Maui to start jobs with the select/cons_res
    plugin and jobs requiring more than one CPU per task.

* Changes in SLURM 1.2.22
=========================
 -- In sched/wiki2, add support for MODIFYJOB option "MINSTARTTIME=