* Changes in SLURM 0.6.0-pre8
=============================
 -- Remove debugging xasserts in switch/federation that were accidentally
    committed.
 -- Make slurmd step manager retry slurm_container_destroy() indefinitely
    instead of giving up after 30 seconds.  If something prevents a job
    step's processes from being killed, the job will be stuck in the
    COMPLETING state until the container destroy succeeds.

* Changes in SLURM 0.6.0-pre7
=============================
 -- Disable localtime_r() calls from forked processes (a semaphore held
    by another pthread can deadlock localtime_r() calls made from the
    forked process; this will be properly fixed in the next major
    release of SLURM).
 -- Added SLURM_LOCALID environment variable for spawned tasks
    (Dan Palermo, HP).
 -- Modify switch logic to restore state based exclusively upon
    recovered job steps (not state save file).
 -- Gracefully refuse job if there are too many job steps in slurmd.
 -- Fix race condition in job completion that can leave nodes in 
    COMPLETING state after job is COMPLETED.
 -- Added frees for BGL Bridge API strdup() allocations (memory leaks)
    that had previously gone unnoticed.
 -- smap scrolls correctly for BGL systems.
 -- slurm_pid2jobid() API call will now return the jobid for a step
    manager slurmd process.
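    A minimal usage sketch, assuming the slurm_pid2jobid(pid_t, uint32_t *)
    signature and the <slurm/slurm.h> install path:

      #include <stdint.h>
      #include <stdio.h>
      #include <sys/types.h>
      #include <slurm/slurm.h>
      #include <slurm/slurm_errno.h>

      /* Print the SLURM job id that owns the given process id. */
      static int print_jobid_of(pid_t pid)
      {
              uint32_t job_id = 0;

              if (slurm_pid2jobid(pid, &job_id) != SLURM_SUCCESS) {
                      slurm_perror("slurm_pid2jobid");
                      return -1;
              }
              printf("pid %d belongs to job %u\n", (int) pid, job_id);
              return 0;
      }
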
* Changes in SLURM 0.6.0-pre6
=============================
 -- Added logic to return scheduled nodes to Maui scheduler (David
    Jackson, Cluster Resources)
 -- Fix bug in handling job request with maximum node count.
 -- Fix node selection scheduling bug with heterogeneous nodes and
    srun --cpus-per-task option
 -- Generate error file to note prolog failures.

* Changes in SLURM 0.6.0-pre5
=============================
 -- Modify sfree (BGL command) so that --all option no longer requires
    an argument.
 -- Modify smap so it shows all nodes and partitions by default (even
    nodes that the user can't access, otherwise there are holes in
    the display).
 -- Added module to parse time string (src/common/parse_time.c) for 
    future use.
 -- Fix BlueGene hostlist processing for non-rectangular prisms and
    add string length checking.
 -- Modify orphan batch job time calculation for BGL to account for 
    slowness when booting many bglblocks at the same time.
* Changes in SLURM 0.6.0-pre4
=============================
 -- Added etc/slurm.epilog.clean to kill processes initiated outside of 
    slurm when a user's last job on a node terminates.
 -- Added config.xml and configurator.html files for use by OSCAR.
 -- Increased maximum job step count from 64 to 130 for BGL systems only.
* Changes in SLURM 0.6.0-pre3
=============================
 -- Add code so that a job requesting shared nodes gets its explicitly
    requested nodes, or otherwise the most lightly loaded nodes.
 -- Add job step name field.
 -- Add job step network specification field.
 -- Add proctrack/rms plugin
 -- Change the proctrack API to send a slurmd_job_t pointer to both
    slurm_container_create() and slurm_container_add().  One of those
    functions MUST set job->cont_id (see the sketch at the end of this
    section).
 -- Remove vestigial node_use (virtual or coprocessor) field from job
    request RPC.
 -- Fix mpich-gm bugs, thanks to Takao Hatazaki (HP).
 -- Fix code for clean build with gcc 2.96, Takao Hatazaki (HP).
 -- Add node update state of "RESUME" to return DRAINED, DRAINING, or 
    DOWN node to service (IDLE or ALLOCATED state).
 -- smap keeps trying to connect to slurmctld in iterative mode rather 
    than just aborting on failure.
 -- Add squeue option --node to filter by node name.
 -- Modify squeue --user option to accept not only user names, but also
    user IDs.
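    Sketch of the plugin-side shape described above for
    slurm_container_create()/slurm_container_add(): both entry points
    receive the slurmd_job_t, and one of them must record a container
    handle in job->cont_id.  The pid argument, the handle type, and the
    _create_container()/_add_pid() helpers are illustrative assumptions,
    not actual plugin code:

      /* Requires slurmd's internal headers for slurmd_job_t. */
      extern int slurm_container_create(slurmd_job_t *job)
      {
              /* Create a tracking container and remember its handle
               * on the job so later calls can find it. */
              job->cont_id = _create_container();    /* hypothetical */
              return SLURM_SUCCESS;
      }

      extern int slurm_container_add(slurmd_job_t *job, pid_t pid)
      {
              /* Attach a newly spawned task to the job's container. */
              return _add_pid(job->cont_id, pid);    /* hypothetical */
      }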

* Changes in SLURM 0.6.0-pre2
=============================
 -- Removed "make rpm" target.
* Changes in SLURM 0.6.0-pre1
=============================
 -- Added bgl/partition_allocator/smap changes from 0.5.7.
 -- Added configurable resource limit propagation  (Daniel Christians, HP).
 -- Added ability to specify the mpi plugin type at srun startup.
 -- Changed SlurmUser ID from 16-bit to 32-bit.
 -- Added MpiDefault slurm.conf parameter.
 -- Remove KillTree configuration parameter (replace with
    "ProctrackType=proctrack/linuxproc")
 -- Remove MpichGmDirectSupport configuration parameter (replace with
    "MpiDefault=mpich-gm")
 -- Make default plugin be "none" for mpi.
 -- Added mpi/none plugin and made it the default.
 -- Replace extern program_invocation_short_name with program_invocation_name
    due to short name being truncated to 16 bytes on some systems.
 -- Added support for Elan clusters with different CPU counts on nodes
    (Chris Holmes, HP).
 -- Added Consumable Resources web page (Susanne Balle, HP).
 -- "Session manager" slurmd process has been eliminated.
 -- switch/federation fixes migrated from 0.5.*
 -- srun pthreads really set detached, fixes scaling problem
 -- srun spawns message handler process so it can now be stopped (via 
    Ctrl-Z or TotalView) without inducing failures.

* Changes in SLURM 0.5.7
========================
 -- added infrastructure for (eventual) support of AIX checkpointing
    of slurm batch and interactive poe jobs
 -- added BGL wiring logic to wire for physical location first and then
    logical location.
 -- only one thread is used to query the database before the polling
    thread exists.

* Changes in SLURM 0.5.6
========================
 -- fix for BGL hostnames and full system partition finding

* Changes in SLURM 0.5.5
========================
 -- Increase SLURM_MESSAGE_TIMEOUT_MSEC_STATIC to 15000
 -- Fix for premature timeout in _slurm_send_timeout
 -- Fix for federation overlapping calls to non-thread-safe _get_adapters

* Changes in SLURM 0.5.4
========================
 -- Added support for no reboot for VN to CO on BGL
 -- Fix for if a job starts after it finishes on BGL

* Changes in SLURM 0.5.3
========================
 -- federation patch so the slurm controller has sane window status at
    start-up regardless of the window status reported in the slurmd
    registration.
 -- federation driver exits with fatal() if it can not find all of the
    adapters listed in federation.conf

* Changes in SLURM 0.5.2
========================
 -- Extra federation driver sanity checks

* Changes in SLURM 0.5.1
========================
 -- Fix federation driver bad free(), other minor fed fixes
 -- Allow slurm to parse very long lines in the slurm.conf
* Changes in SLURM 0.5.0
========================
 -- Fix race condition in job accounting plugin, could hang slurmd
 -- Report SlurmUser id over 16 bits as an error (fix on v0.6)

* Changes in SLURM 0.5.0-pre19
==============================
 -- Fix memory management bug in federation driver

* Changes in SLURM 0.5.0-pre18
==============================
 -- elan switch plugin memory leak plugged
 -- added g_slurmctld_jobacct_fini() to release all memory (useful 
    to confirm no memory leaks)
 -- Fix slurmd bug introduced in pre17
* Changes in SLURM 0.5.0-pre17
==============================
 -- slurmd calls the proctrack destroy function at job step completion
 -- federation driver tries harder to clean up switch windows
 -- BGL wiring changes
* Changes in SLURM 0.5.0-pre16
==============================
 -- Check slurm.conf values for under/overflows (some are 16 bit values).
 -- Federation driver clears windows at job step completion
 -- Modify code for clean build with gcc v4.0
 -- New SLURM_NETWORK environment variable used by slurm_ll_api
* Changes in SLURM 0.5.0-pre15
==============================
 -- Added "network" field to "scontrol show job" output. 
 -- Federation fix for unfreed windows when multiple adapters on
    one node use the same LID
* Changes in SLURM 0.5.0-pre14
==============================
 -- RDMA works on fed plugin.

* Changes in SLURM 0.5.0-pre13
==============================
 -- Major mods to support checkpoint on AIX.
 -- Job accounting documentation expanded, added tuning options, minor bug fixes
 -- BGL wiring will now work on <= 4 node X-dim partitions and also 8 node 
    X-dim partitions.
 -- ENV variables set for spawning jobs. 
 -- jobacct patch from HP to not erroneously lock a mutex in the 
    jobacct_log plugin.
 -- switch/federation supports multiple adapters per task.  sn_all behaviour
    is now correct, and it also supports sn_single.
* Changes in SLURM 0.5.0-pre12
==============================
 -- Minor build changes to support RPM creation on AIX

* Changes in SLURM 0.5.0-pre11
==============================
 -- Slurmd tests for initialized session manager (user's) slurmd pid before 
    killing it to avoid killing system daemon (race condition).
 -- srun --output or --error file names of "none" mapped to /dev/null for 
    batch jobs rather than a file actually named "none".
 -- BGL: don't try to read bglblock state until they are all created to 
    avoid having BGL Bridge API seg fault.

* Changes in SLURM 0.5.0-pre10
==============================
 -- Fix bug that was resetting BGL job geometry on unrelated field update.
 -- squeue and sinfo print timestamp in iterate mode by default.
 -- added scrolling windows in smap
 -- introduced new variable to start polling thread in the bluegene plugin.
 -- Latest accounting patches from Riebs/HP, retry communications.
 -- Added srun option --kill-on-bad-exit from Holmes/HP.
 -- Support large (64-bit address) log files where possible.
 -- Fix problem of signals being delivered twice to tasks.  Note that as
    part of the fix the slurmd session manager no longer calls setsid to
    create a new session.
* Changes in SLURM 0.5.0-pre9
=============================
 -- If a job and node are in COMPLETING state and slurmd stops responding for
    SlurmdTimeout, then set the node DOWN and the job COMPLETED.
 -- Add logic to switch/elan to track contexts allocated to active job steps 
    rather than just using a cyclic counter and hoping to avoid collisions. 
 -- Plug memory leak in freeing job info retrieved using API.
 -- Bluegene Plugin handles long deallocating states from driver 202.
 -- Fix bug in bitfmt2int() which could run off the end of allocated memory.
* Changes in SLURM 0.5.0-pre8
=============================
 -- BlueGene srun --geometry was not getting propagated properly.
 -- Fix race condition with multiple simultaneous epilogs.
 -- Modify slurmd to resend job completion RPC to slurmctld in the 
    case where slurmctld is not responding.
 -- Updated sacct: handle cancelled jobs correctly, add user/group
    output, add ntasks as a synonym for nprocs, display error field
    by default, display ncpus instead of nprocs
 -- Parallelization of queueing jobs, up to 32 at once.  The variable
    MAX_AGENT_COUNT in bgl_job_run.c specifies the limit.
 -- Fixed threading issue with uid_to_string() use in bgl_job_run.c.
* Changes in SLURM 0.5.0-pre7
=============================
 -- Preserve next_job_id across restarts.
 -- Add support for really long job names (256 bytes).
 -- Add configuration parameter SchedulerRootFilter to control what 
    entity manages prioritization of jobs in RootOnly partition 
    (internal scheduler plugin or external entity).
 -- Added support for job accounting.
 -- Added support for consumable resource based node scheduling.
 -- Permit batch job to be launched to a pre-existing allocation.

* Changes in SLURM 0.5.0-pre6
=============================
 -- Load bluegene.conf and federation.conf based upon SLURM_CONF env 
    var (if set).
 -- Fix slurmd shutdown signal synchronization bug (not consistently 
    terminating).
 -- Add doc/html/ibm.html document. Update bluegene.html.
 -- Add sfree to bluegene plugin. 
 -- Remove geometry[SYSTEM_DIMENSIONS] from opaque node_select data
    type if SYSTEM_DIMENSIONS==0 (not ANSI C compliant).
 -- Modify smap to test for valid libdb2.so before issuing any BGL 
    Bridge API calls.
 -- Modify spec file for optional inclusion of select_bluegene and 
    sched_wiki plugin libraries.
 -- Initialize job->network in data structure; an uninitialized value
    could cause job submit/update to fail depending upon what is left
    on the stack.
* Changes in SLURM 0.5.0-pre5
=============================
 -- Expand buffer to hold node_select info in job termination log.
 -- Modify slurmctld node hashing function to reduce collisions.
 -- Treat bglblock vanishing as fatal error for job, prolog and epilog 
    exit immediately.
 -- bug fix for following multiple X-dim partitions
* Changes in SLURM 0.5.0-pre4
=============================
 -- Fix bug in slurmd that could double KillWait time on job timeout.
 -- Fix bug in srun's error code reporting to slurmctld, could DOWN 
    a node if job run as root has non-zero error code.
 -- Remove a node's partition info when removed from existing partition.
 -- Use proctrack plugin to kill all processes in a job step before
    calling interconnect_postfini() to ensure no processes escape from
    the job and prevent switch windows from being released.
 -- Added mail.html web page telling how to get on slurm mailing lists.
 -- Added another directory to search for DB2 files on BGL system.
 -- Added overview man page slurm.1.
 -- Added new configure option "--with-db2-dir=PATH" for BGL.

* Changes in SLURM 0.5.0-pre3
=============================
 -- Merge of SLURM v0.4-branch into v0.5/HEAD.

* Changes in SLURM 0.5.0-pre2
=============================
 -- Fix bug in srun to clean-up upon failure of an allocated node
    (srun -A would generate a segmentation fault, Chris Holmes, HP).
 -- If slurmd's node name is mapped to NULL (due to bad configuration)
    terminate slurmd with a fatal error and don't crash slurmctld.
 -- Add SLURMD_DEBUG env var for use with AIX/POE in spawn_task RPC.
 -- Always pack job's "features" for access by prolog/epilog
* Changes in SLURM 0.5.0-pre1
=============================
 -- Add network option to srun and job creation API for specification 
    of communication protocol over IBM Federation switch.
 -- Add new slurm.conf parameter ProctrackType (process tracking) and 
    associated plugin in the slurmd module.
 -- Send node's switch state with job epilog completion RPC and 
    node registration (only when slurmd starts, not on periodic 
    registrations).
 -- Add federation switch plugin.
 -- Add new configuration keyword, SchedulerRootFilter, to control 
    external scheduler control of RootOnly partition (Chris Holmes, HP).
 -- Modify logic to set process group ID for spawned processes (last 
    patch from slurm v0.3.11).
 -- "srun -A" modified to return exit code of last command executed
    (Chris Holmes, HP).
 -- Add support for different slurm.conf files controlled via SLURM_CONF
    env var (Brian O'Sullivan, pathscale)
 -- Fix bug if srun given --uid without --gid option (Chris Holmes, HP).

* Changes in SLURM 0.4.24
=========================
 -- DRAIN nodes when switches on their base partitions are in ERROR,
    MISSING, or DOWN states.
 
* Changes in SLURM 0.4.23
========================= 
 -- Modified bluegene plugin to only sync bglblocks to jobs on initial 
    startup, not on reconfig. Fixes race condition.
 -- Modified bluegene plugin to work with the 141 driver, enabling it to
    only have to reboot when switching from coproc -> virtual and back.
 -- added support for a full system partition to make sure every other
    partition is free and vice versa.
 -- smap resizing issue fixed.
 -- change prolog not to add time when a partition is in deallocating 
    state.
 -- NOTE: This version of SLURM requires BGL driver 141/2005.

* Changes in SLURM 0.4.22
=========================
 -- Modified bluegene plugin to not do anything if the bluegene.conf file 
    is altered.
 -- added checking that a list exists before trying to create an
    iterator on it.

* Changes in SLURM 0.4.21
=========================
 -- Fix race condition with time in the BGL status thread
 -- Fix no leading zeros in smap output.

* Changes in SLURM 0.4.20
=========================
 -- Smap output is more user friendly with -c option

* Changes in SLURM 0.4.19
=========================
 -- Added new RPCs for getting bglblock state info remotely and caching
    data within the plugin (permits removal of DB2 access from the BGL FEN,
    dramatically increases smap responsiveness, and also changed
    prolog/epilog operation)
 -- Move smap executable to main slurm RPM (from separate RPM).
 -- smap uses RPC instead of DB2 to get info about bgl partitions.
 -- Status function added to bluegene_agent thread.  Keeps the current
    state of BGL partitions, updating every second, and will make multiple
    attempts at booting if booting a partition fails.

* Changes in SLURM 0.4.18
=========================
 -- Added error checking of rm_remove_partition calls.
 -- job_term() was terminating a job in real time rather than 
    queueing the request. This would result in slurmctld hanging 
    for many seconds when a job termination was required.

* Changes in SLURM 0.4.17
========================
 -- Bug fixes from testing 0.4.16.

* Changes in SLURM 0.4.16
========================
 -- Added error checking to a bunch of Bridge API calls and more
    graceful handling of failure modes.
 -- Made smap more robust for more jobs.

* Changes in SLURM 0.4.15
========================
 -- Added error checking to a bunch of Bridge API calls and more
    graceful handling of failure modes.

* Changes in SLURM 0.4.14
========================
 -- job state is kept on warm start of slurm

* Changes in SLURM 0.4.13
========================
 -- epilog fix for bgl plugin

* Changes in SLURM 0.4.12
========================
 -- bug fixes for new API calls.
 -- added BridgeAPILogFile as an option for bluegene.conf file
 
* Changes in SLURM 0.4.11
========================
 -- changed as many rm_get_partition() calls to rm_get_partitions_info()
    as we could to save time.
 
* Changes in SLURM 0.4.10
========================
 -- redesign for BGL external wiring.
 -- smap display bug fix for smaller systems.

* Changes in SLURM 0.4.9
========================
 -- setpnum works now; it must be included in bluegene.conf

* Changes in SLURM 0.4.8
========================
 -- Changed the prolog and the epilog to use the env var MPIRUN_PARTITION
    instead of BGL_PARTITION_ID

* Changes in SLURM 0.4.7
========================
 -- Remove some BGL specific headers that IBM now distributes.  NOTE:
    BGL driver 080 or greater is required.
 -- Change autogen.sh to deal with problems running autoconf on one
    system and configure on another with different software versions.
* Changes in SLURM 0.4.6
========================
 -- smap now works on non-BGL systems.
 -- took tv.h out of partition_allocator so it would work with driver 080
    from IBM.
 -- updated slurmd signal handling to prevent possible user killing of daemon.

* Changes in SLURM 0.4.5
========================
 -- Change sinfo default time limit field to have 10 bytes (up from 9).
 -- Fix bug in bluegene partition selection (sorting bug).
 -- Don't display any completed jobs in smap.
 -- Add NodeCnt to filetxt job completion plugin.
 -- Minor restructuring of how MMCS is polled for DOWN nodes and switches.
 -- Fix squeue output format for "%s" (node select data).
 -- Queue job requesting more resources than exist in a partition if 
    that partition's state is DOWN (rather than just abort it).
 -- Add prolog/epilog for bluegene to code base (moved from mpirun in CVS)
 -- Add prolog, epilog and bluegene.conf.example to bluegene RPM
 -- In smap, Admin can get the Rack/midplane id from an XYZ input and vice versa.
 -- Add smap line-oriented output capability.
* Changes in SLURM 0.4.4
========================
 -- Fix race condition in slurmd setting pgid of spawned tasks for
    process tracking.
 -- Fix scontrol reconfig so it does nothing to running jobs and does
    not crash the system
 -- Fix sort of bgl_list in select_bluegene.c so it happens only once
    instead of every time a new job is inserted.
* Changes in SLURM 0.4.3
========================
 -- Turn off some RPM build checks (bug in RPM, see slurm.spec.in)
 -- starting slurmctld will destroy all RMP*** partitions every time.
 
* Changes in SLURM 0.4.2
========================
 -- Fix memory leak in BlueGene plugin.
 -- Srun's --test-only option takes precedence over --batch option.
 -- Add sleep(1) after setting bglblock owner due to apparent race condition 
    in the BGL API.
 -- Slurm was timing out and killing batch jobs if the node registered when 
    a job prolog was still running.

* Changes in SLURM 0.4.1
========================
 -- BlueGene plugin kills jobs running in defunct bglblock on restart.
 -- Smap displays pending jobs now, in addition to running and completing jobs.
 -- Remove node "use=" from bluegene.conf file, create both coprocessor and 
    virtual bglblocks for now (later create just one and use API to change 
    it when such an API is available).
 -- Add "ChangeNumpsets" parameter to bluegene.conf to use script to 
    update the numpsets parameter for newly created bglblocks (to be 
    removed once the API functions).
 -- Add all patches from slurm v0.3.11 (through 2/7/2005)
   - Added srun option --disable-status,-X to disable srun status feature
     and instead forward SIGINT immediately to job upon receipt of Ctrl-C.
   - Fix for bogus slurmd error message "Unable to put task N into pgrp..."
   - Fix case where slurmd may erroneously detect shared memory entry
     as "stale" and delete entry for unkillable or slow-to-exit job.
   - (qsnet) Fix for running slurmd on node without an elan3 adapter.
   - Fix for reported problem: slurm/538: user tasks block writing to stdio
* Changes in SLURM 0.4.0
========================
 -- Minor tweak to init.d/slurm for BlueGene systems.
 -- Added smap RPM package (to install binary built on BlueGene 
    service node on front-end nodes).
 -- Added wait between bglblock destroy and creation of new blocks
    so that MMCS can complete the operation.
 -- Fix bug in synchronizing bglblock owners on slurmctld restart.
* Changes in SLURM 0.4.0-pre11
==============================
 -- Add new srun option "--test-only" for testing slurm_job_will_run API.
 -- Fix bugs in slurm_job_will_run() processing.
 -- Change slurm_job_will_run() to not return a message, just an error
    code (see the sketch below).
 -- Sync partition owners with running jobs on slurmctld restart.
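    Sketch of a "will this job run" test as described above, assuming the
    job_desc_msg_t based interface of slurm_job_will_run() and
    slurm_init_job_desc_msg() (field names taken from the public slurm.h):

      #include <stdio.h>
      #include <sys/types.h>
      #include <unistd.h>
      #include <slurm/slurm.h>
      #include <slurm/slurm_errno.h>

      /* Ask slurmctld whether a 4-node job could run, without
       * actually submitting it.  Only an error code comes back. */
      static int test_job_fits(void)
      {
              job_desc_msg_t req;

              slurm_init_job_desc_msg(&req);
              req.min_nodes = 4;              /* example request */
              req.user_id   = getuid();

              if (slurm_job_will_run(&req) != SLURM_SUCCESS) {
                      slurm_perror("slurm_job_will_run");
                      return -1;
              }
              printf("job would run\n");
              return 0;
      }
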
* Changes in SLURM 0.4.0-pre10
==============================
 -- Specify number of I/O nodes associated with BlueGene partition.
 -- Do not launch a job's tasks if the job is cancelled while its
    prolog is running (which can be slow on BlueGene).
 -- Add new error code, ESLURM_BATCH_ONLY, for attempts to launch
    job steps on front-end system (e.g. Blue Gene).
 -- Updates to html documents.
 -- Assorted fixes in smap, partition creation mode.
 -- Add proper support for "srun -n" option on BGL recognizing 
    processor count in both virtual and coprocessor modes.
 -- Make default node_use on Blue Gene be coprocessor, as documented.
 -- Add SIGKILL to BlueGene jobs as part of cleanup.
* Changes in SLURM 0.4.0-pre9
=============================
 -- Change in /etc/init.d/slurm for RedHat and SuSE compatibility

* Changes in SLURM 0.4.0-pre8
=============================
 -- Add logic to create and destroy Bluegene Blocks automatically as needed.
 -- Update smap man page to include Bluegene configuration commands.
* Changes in SLURM 0.4.0-pre7
=============================
 -- Port all patches from slurm v0.3 up through v0.3.10:
   - Remove calls in auth/munge plugin deprecated by munge-0.4.
   - Allow single task id to be selected with --input, --output, and --error.
   - Create shared memory segment for Elan statistics when using the
     switch/elan plugin.
   - More fixes necessary for TotalView.
* Changes in SLURM 0.4.0-pre6
=============================
 -- Add new job reason value "JobHeld" for jobs with priority==0
 -- Move startup script from "/etc/rc.d/init.d/slurm" to "/etc/init.d/slurm"
 -- Modify prolog/epilog logic in slurmd to accommodate very long run times;
    on BGL these scripts wait for events that can take a very long time 
    (tens of seconds).
 -- This code base was used for BGLb acceptance test with pre-defined 
    BGL blocks.
* Changes in SLURM 0.4.0-pre5
=============================
 -- select/bluegene plugin confirms db.properties file in $sysconfdir
    and copies it to StateSaveLocation (slurmctld's working directory)
 -- select/bluegene plugin confirms environment variables required for
    DB2 interaction are set (execute "db2profile" script before slurmctld)
 -- slurmd to always give jobs KillWait time between SIGTERM and SIGKILL
    at termination
 -- set job's start_time and end_time = now rather than leaving zero if 
    they fail to execute
 -- modify srun to forward SIGTERM
 -- enable select/bluegene testing for DOWN nodes and switches
 -- select/bluegene plugin to delete orphan jobs, free BGLblocks and 
    set owner as jobs terminate/start
* Changes in SLURM 0.4.0-pre4
=============================
 -- Fixes for reported problems:
   - slurm/512: Let job steps run on DRAINING nodes
   - slurm/513: Gracefully deal with UIDs missing from passwd file
 -- Add support for MPICH-GM (from takao.hatazaki@hp.com)
 -- Add support for NodeHostname in node configuration
 -- Make "scontrol show daemons" function properly on front-end system 
    (e.g. Blue Gene)
 -- Fix srun bug when --input, --output and --error are all "none"
 -- Don't schedule jobs for user root if partition is DOWN
 -- Modify select/bluegene to honor job's required node list
 -- Modify user name logic to explicitly set UID=0 to "root", 
    Suse Linux was not handling multiple users with UID=0 well.
* Changes in SLURM 0.4.0-pre3
=============================
 -- Send SIGTERM to batch script before SIGKILL for mpirun cleanup on 
    Blue Gene/L
 -- Create new allocation as needed for debugger in case old allocation 
    has been purged
 -- Add Blue Gene User Guide to html documents
 -- Fix srun bug that could cause seg fault with --no-shell option if not 
    running under a debugger
 -- Propagate job's task count (if set) for batch job via SLURM_NPROCS.
 -- Add new job parameters for Blue Gene: geometry, rotate, mode (virtual
    or co-processor), communications type (mesh or torus), and partition ID.
 -- Exercise a bunch of new switch plugin functions for Federation 
    switch support.
 -- Fix bug in scheduling jobs when a processor count is specified
    and FastSchedule=0 and the cluster is heterogeneous.
* Changes in SLURM 0.4.0-pre2
=============================
 -- NOTE: "startclean" when transitioning from version 0.4.0-pre1, JOBS ARE LOST
 -- Fixes for reported problems:
   - slurm/477: Signal of batch job script (scancel -b) fixed
   - slurm/481: Permit clearing of AllowGroups field for a partition
   - slurm/482: Adjust Elan base context number to match RMS range
   - slurm/489: Job completion logger was writing NULL to text file
 -- Preserve job's requested processor count info after job is initiated 
    (for viewing by squeue and scontrol)
 -- srun cancels created job if job step creation fails
 -- Added lots of Blue Gene/L support logic: slurmd executes on a single
    node to front-end the 512-CPU base-partitions (Blue Gene/L's nodes)
 -- Add node selection plugin infrastructure, relocate existing logic 
    to select/linear, add configuration parameter SelectType
 -- Modify node hashing algorithm for better performance on Blue Gene/L
 -- Add ability to specify node ranges for 3-D rectangular prism
* Changes in SLURM 0.4.0-pre1
=============================
 -- NOTE: "startclean" when transitioning from version 0.3, JOBS ARE LOST
 -- Added support for job account information (arbitrary string)
 -- Added support for job dependencies (start job X after job Y completes)
 -- Added support for configuration parameter CheckpointType
 -- Added new job state "CANCELLED"
 -- Don't strip binaries, breaks parallel debuggers
 -- Fix bug in Munge authentication retry logic
 -- Change srun handling of interrupts to work properly with TotalView
 -- Added "reason" field to job info showing why a job is waiting to run
* Changes in SLURM 0.3.7
========================
 -- Fixes required for TotalView operability under RHEL3.0
    (Reported by Dong Ahn <dahn@llnl.gov>)
   - Do not create detached threads when running under parallel debugger.
   - Handle EINTR from sigwait().

* Changes in SLURM 0.3.6
========================
 -- Fixes for reported problems:
   - slurm/459: Properly support partition's "Shared=force" configuration.
 -- Resync node state to DRAINED or DRAINING on restart in case job 
    and node state recovered are out of sync.
 -- Added jobcomp/script plugin (execute script on job completion, 
    from Nathan Huff, North Dakota State University).
 -- Added new error code ESLURM_FRAGMENTED for immediate resource 
    allocation requests which are refused due to completing job (formerly 
    returned ESLURM_NOT_TOP_PRIORITY)
 -- Modified job completion logging plugin calling sequence.
 -- Added much of the infrastructure required for system checkpoint
    (APIs, RPCs, and NULL plugin)

* Changes in SLURM 0.3.5
========================
 -- Fix "SLURM_RLIMIT_* not found in environment" error message when
    distributing large rlimit to jobs.
 -- Add support for slurm_spawn() and associated APIs (needed for IBM 
    SP systems).
 -- Fix bug in update of node state to DRAINING/DRAINED when update 
    request occurs prior to initial node registration.
 -- Fix bug in purging of batch jobs (active batch jobs were being 
    improperly purged starting in version 0.3.0).
 -- When updating a node state to DRAINING/DRAINED a Reason must be 
    provided. The user name and a timestamp will automatically be 
    appended to that Reason.
* Changes in SLURM 0.3.4
========================
 -- Fixes for reported problems:
   - slurm/404: Explicitly set pthread stack size to 1MB for srun
 -- Allow srun to respond to ctrl-c and kill queued job while waiting
    for allocation from controller.

* Changes in SLURM 0.3.3
========================
 -- Fix slurmctld handling of heterogeneous processor count on elan 
    switch (was setting DRAINED nodes in state DRAINING).
 -- Fix sinfo -R, --list-reasons to list all relevant node states.
 -- Fix slurmctld to honor srun's node configuration specifications 
    with FastSchedule==0 configuration.
 -- Added srun option --debugger-test to confirm that slurm's debugger 
    infrastructure is operational.
 -- Removed debugging hacks for srun.wrapper.c. Temporarily use 
    RPM's debugedit utility if available for similar effect.

* Changes in SLURM 0.3.2
========================
 -- The srun command wakes immediately upon resource allocation (via new RPC)
 -- SLURM daemons log current version number at startup.
 -- If slurmd can't respond to ping (e.g. paging is keeping it from 
    responding in a timely fashion) then send a registration RPC
    to slurmctld.
 -- Fix slurmd -M option to call mlockall() after daemonizing.
 -- Add "slurm_" prefix to slurm's hostlist_ function man pages.
 -- Change get info calls from using show_all to the more general
    show_flags argument, with a #define for the SHOW_ALL flag.
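    Sketch of the new calling convention, assuming the
    slurm_load_partitions(time_t, partition_info_msg_t **, uint16_t)
    signature and SHOW_ALL flag from the public slurm.h:

      #include <stdio.h>
      #include <time.h>
      #include <slurm/slurm.h>
      #include <slurm/slurm_errno.h>

      /* Load every partition record, including hidden ones, by
       * passing the SHOW_ALL flag instead of the old show_all arg. */
      static int count_partitions(void)
      {
              partition_info_msg_t *parts = NULL;

              if (slurm_load_partitions((time_t) 0, &parts, SHOW_ALL)
                  != SLURM_SUCCESS) {
                      slurm_perror("slurm_load_partitions");
                      return -1;
              }
              printf("%u partitions\n", parts->record_count);
              slurm_free_partition_info_msg(parts);
              return 0;
      }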

* Changes in SLURM 0.3.1
========================
 -- Set SLURM_TASKS_PER_NODE env var for batch jobs (and LAM/MPI).
 -- Fix for slurmd spinning when stdin buffers full (gnats:434)
 -- Change some slurmctld malloc sizes to reduce demand for realloc calls, 
    improves performance and eliminates realloc failure on RH EL3 under 
    extremely heavy workload apparently due to memory fragmentation.
 -- Fix scheduling logic for heterogeneous processor count.
 -- Modify security_2_2 test to function with release 0.3
 -- Fix broken rpm build when libslurm not already installed.
 -- New slurmd option -M to mlock() slurmd process into memory.
 -- New srun option --no-shell causes srun to exit instead of spawning 
    shell when using --allocate, -A.
 -- Modify  srun --uid=user and --gid=group options to maintain invoking 
    user's credentials until after nodes have been allocated to requested 
    user/group (allows root to run jobs and allocate nodes for other users 
    in a RootOnly partition).
 -- Fix node processing if state change requested via scontrol prior to 
    initial node registration.
* Changes in SLURM 0.3.0
========================
 -- Support for AIX added (a few bugs do remain).
 -- Fix memory leak in slurmctld, slurm_cred_create().
 -- On ELF systems, export BNR_* functions from SLURM API. 
 -- Add support for "hidden" partitions (applies to their 
    nodes, jobs, and job steps as well). APIs and commands 
    modified to optionally display hidden partitions.
 -- Modify partition's group_allow test to be based upon the user 
    of the allocation rather than the user making the allocation 
    request (user root for LCRM batch jobs).
 -- Restructure plugin directory structure.
 -- New --core=type option in srun for lightweight corefile support.
    (requires liblwcf).
 -- Let user root and SlurmUser exceed any partition limits.
 -- Srun treats "--time=0" as a request for an infinite time limit.
 
* Changes in SLURM 0.3.0.0-pre10
================================
 -- Fix bugs in support of slurmctld "-f" option (specify different 
    slurm.conf pathname).
 -- Remove slurmd "-f" option.
 -- Several documentation changes for slurm administrators.
 -- On ELF systems, export only slurm_* functions from slurm API and 
    ensure plugins use only slurm_ prefixed functions (created aliases
    where necessary).
 -- New srun option -Q, --quiet to suppress informational messages.
 -- Fix bug in slurmctld's building of nodelist for job (failed if 
    more than one numeric field in node name).
 -- Change "scontrol completing" and "sinfo" to use job's node bitmap
    to identify nodes associated with that particular job that are 
    still processing job completion. This will work properly for 
    shared nodes.
 -- Set SLURM_DISTRIBUTION environment variable for user tasks.
 -- Fix for file descriptor leak in slurmd.
 -- Propagate stacksize limit to jobs along with other resource limits
    that were previously ignored.

* Changes in SLURM 0.3.0.0-pre9
===============================
 -- Restructure how slurmctld state saves are performed for better 
    scalability.
 -- New sinfo option "--list-reason" or "-R". Displays down or drained 
    nodes along with their REASON field.
* Changes in SLURM 0.3.0.0-pre8
===============================
 -- Queue outgoing message traffic rather than immediately spawning 
    pthreads (under heavy load this resulted in hundreds of pthreads 
    using more memory than was available).
 -- Restructure slurmctld message agent for higher throughput.
 -- Add new sinfo options --responding and --dead (i.e. non-responding)
    for filtering node states.
 -- Fix bug in sinfo to properly process specified state filter including
    "*" suffix for non-responding nodes.
 -- Create StateSaveLocation directory if changed via slurmctld reconfig

* Changes in SLURM 0.3.0.0-pre7
===============================
 -- Fixes for reported problems:
   - slurm/381: Hold jobs requesting more resources than partition limit.
   - slurm/387: Jobs lost and nodes DOWN on slurmctld restart.
 -- Add support for getting node's real memory size on AIX.
 -- Sinfo sorts partitions in slurm.conf order, new sort option ("#P").
 -- Document how to gracefully change plugin values.
 -- Slurmctld does not attempt to recover jobs when the switch plugin
    value changes (decision reached when any job's switch state recovery
    fails).
 -- Node does not transition from COMPLETING to DOWN state due to
    not responding. Wait for tasks to complete or admin to set DOWN.
 -- Always chmod SlurmdSpoolDir to 755 (a umask of 007 was resulting 
    in batch jobs failing).
 -- Return errors when trying to change configuration parameters
    AuthType, SchedulerType, and SwitchType via "scontrol reconfig"
    or SIGHUP. Document how to safely change these parameters.
 -- Plugin-specific error number definitions and descriptive strings 
    moved from common into plugin modules.
 -- Documentation for writing scheduler, switch, and job completion 
    logging plugins added.
 -- Added job and node state descriptions to the squeue and sinfo man pages.
 -- Backup slurmctld to generate core file on SIGABRT.
 -- Backup slurmctld to re-read slurm.conf on SIGHUP.
 -- Added -q,--quit-on-interrupt option to srun.
 -- Elan switch plugin now starts neterr resolver thread on all Elan3
    systems (QsNet and QsNetII).
 -- Added some missing read locks for references to slurmctld's
    configuration data structure
 -- Modify processing of queued slurmctld message traffic to get better
    throughput (resulted in job inactivity limit being reached improperly 
    when hundreds of jobs running simultaneously)
* Changes in SLURM 0.3.0.0-pre6
===============================
 -- Fixes for reported problems:
   - slurm/372: job state descriptions added to squeue man page
 -- Switch plugin added. Add "SwitchType=switch/elan" to slurm.conf for 
    systems with Quadrics Elan3 or Elan4 switches.
 -- Don't treat DOWN nodes with too few CPUs as a fatal error on Elan
 -- Major re-write of html documents
 -- Updates to node pinging for large numbers of unresponsive nodes 
 -- Explicitly set default action for SIGTERM (action on Thunder was 
    to ignore SIGTERM)
 -- Sinfo "--exact" option only applies to fields actually displayed
 -- Partition processor count not correctly computed for heterogeneous 
    clusters with FastSchedule=0 configuration
 -- Only return DOWN nodes to service if the reason for them being in 
    that state is non-responsiveness and "ReturnToService=1" configuration
 -- Partition processor count now correctly computed for heterogeneous 
    clusters with FastSchedule configured off
 -- New macros and function to export SLURM version number
* Changes in SLURM 0.3.0.0-pre5
===============================
 -- Fixes for reported problems:
   - slurm/346: Support multiple colon-separated PluginDir values
 -- Fix node state transition: DOWN to DRAINED (instead of DRAINING)
 -- Fix a couple of minor slurmctld memory leaks

* Changes in SLURM 0.3.0.0-pre4
===============================
 -- Fix bug where early launch failures (such as invalid UID/GID) resulted
    in jobs not terminating properly.
 -- Initial support for BNR committed (not yet functional).
 -- QsNet: SLURM now uses /etc/elanhosts exclusively for converting 
    hostnames to ElanIDs.

* Changes in SLURM 0.3.0.0-pre3
===============================
 -- Fixes for reported problems:
   - slurm/328: Slurmd was restarting with a new shared memory segment and 
     losing track of jobs
   - slurm/329: Job processing may be left running when one task dies
   - slurm/333: Slurmd fails to launch a job and deletes a step, due to 
     a race condition in shared memory management
   - slurm/334: Slurmd was getting a segv due to a race condition in shared 
     memory management
   - slurm/342: Properly handle nodes being removed from configuration 
     even when there are partitions, nodes, or job steps still associated 
     with them
 -- Srun properly terminates jobs/steps upon node failure (used to hang 
    waiting for I/O completion)
 -- Job time limits enforced even if InactiveLimit configured as zero
 -- Support the sending of an arbitrary signal to a batch script (but not 
     the processes in its job steps)
 -- Re-read slurm configuration file whenever changed, needed by users 
    of SLURM APIs
 -- Scancel was generating an assert failure
 -- Slurmctld sends a launch response message upon scheduling of a queued
    job (for immediate srun response)
 -- Maui scheduler plugin added
 -- Backfill scheduler plugin added
 -- Batch scripts can now have arguments that are propagated
 -- MPICH support added (via patch, not in SLURM CVS)
 -- New SLURM environment variables added SLURM_CPUS_ON_NODE and
    SLURM_LAUNCH_NODE_IPADDR, these provide support needed for LAM/MPI
    (version 7.0.4+)
 -- The TMPDIR directory is created as needed before job launch
 -- Do not create duplicate SLURM environment variables with the same name
 -- Ensure proper enforcement of node sharing by job
 -- Treat lack of SpoolDir or StateSaveDir as a fatal error
 -- Quickstart.html guide expanded
 -- Increase maximum jobs steps per node from 16 to 64
 -- Delete correct shared memory segment on slurmd -c (clean start)
* Changes in SLURM 0.3.0.0-pre2
===============================
 -- Fixes for reported problems:
   - slurm/326: Properly clean-up jobs terminating on non-responding nodes
 -- Move all configuration data structure into common/read_config, scontrol
    now always shows default values if not specified in slurm.conf file
 -- Remove the unused "Prioritize" configuration parameter

* Changes in SLURM 0.3.0.0-pre1
===============================
 -- Fixes for reported problems:
   - slurm/252: "jobs left orphaned when using TotalView:" SLURM controller 
     now pings srun and kills defunct jobs.
   - slurm/253: "srun fails to accept new IO connection." 
   - slurm/317: "Lack of default partition in config file causes errors." 
   - slurm/319: Socket errors on multiple simultaneous job launches fixed
   - slurm/321: slurmd shared memory synchronization error.
 -- Removed slurm_tv_clean daemon which has been obsoleted by slurm/252 fix.
 -- New scontrol command ``delete'' and RPC added to delete a partition
 -- Squeue can now print and sort by group id/name
 -- Scancel has new option -q,--quiet to not report an error if a job 
    is already complete 
 -- Add the excluded node list to job information reported.
 -- RPC version mis-match now properly handled
 -- New job completion plugin interface added for logging completed jobs.
 -- Fixed lost digit in scontrol job priority specification.
 -- Remove restriction in the number of consecutive node sets (no longer
    needed after DPCS upgrade)
 -- Incomplete state save write now properly handled.
 -- Modified slurmd setrlimit error for greater clarity.
 -- Slurmctld performs load-leveling across shared nodes.
 -- New user function slurm_get_end_time() added for user jobs (see the
    sketch at the end of this section).
 -- Always compile srun with stabs debug section when TotalView support 
    is requested.
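    Sketch of the end-time query mentioned above, assuming the
    slurm_get_end_time(uint32_t, time_t *) signature from the public
    slurm.h and the SLURM_JOBID environment variable set for a job's
    tasks:

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>
      #include <slurm/slurm.h>
      #include <slurm/slurm_errno.h>

      /* From within a running job, print its scheduled end time. */
      int main(void)
      {
              const char *jobid = getenv("SLURM_JOBID");
              time_t end_time = 0;

              if (jobid == NULL) {
                      fprintf(stderr, "not running under SLURM\n");
                      return 1;
              }
              if (slurm_get_end_time((uint32_t) atoi(jobid), &end_time)
                  != SLURM_SUCCESS) {
                      slurm_perror("slurm_get_end_time");
                      return 1;
              }
              printf("job %s ends at %s", jobid, ctime(&end_time));
              return 0;
      }
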
* Changes in SLURM 0.2.21
=========================
 -- Fixes for reported problems:
   - slurm/253: Try using different port if connect() fails (was rarely 
     failing when an existing defunct connection was in TIME_WAIT state)
   - slurm/300: Possibly killing wrong job on slurmd restart
   - slurm/312: Freeing non-allocated memory and killing slurmd
 -- Assorted changes to support RedHat Enterprise Linux 3.0 and IA64
 -- Initial Elan4 and libelanctrl support (--with-elan).
 -- Slurmctld was sometimes inappropriately setting a job's priority 
    to 1 when a node was down (even if up nodes could be used for the 
    job when a running job completes)
 -- Convert all user commands from use of popt library to getopt_long()
 -- If TotalView support is requested, srun exports "totalview_jobid"
    variable for `%J' expansion in TV bulk launch string.
 -- Fix several locking bugs in slurmd IO layer.
 -- Throttle back repetitious error messages in slurmd to avoid filling
    log files.
* Changes in SLURM 0.2.20
=========================
 -- Fixes for reported problems:
   - slurm/298: Elan initialization error (Invalid vp 2147483674).
   - slurm/299: srun fails to exit with multiple ^C's.
 -- Temporarily prevent DPCS from allocating jobs with more than eight 
    sets of consecutive nodes. This was likely causing user applications 
    to fail with libelan errors. This will be removed after DPCS is updated.
 -- Fix bug in popt use, was failing in some versions of Linux.
 -- Resend KILL_JOB messages as needed to clear COMPLETING jobs.
 -- Install dummy SIGCHLD handler in slurmd to fix problem on NPTL systems
    where slurmd was not notified of terminated tasks.

* Changes in SLURM 0.2.19
=========================
 -- Memory corruption bug fixed, it was causing slurmctld to seg-fault

* Changes in SLURM 0.2.18
=========================
 -- Fixes for reported problems:
   - slurm/287: slurm protocol timeouts when using TotalView.
   - slurm/291: srun fails using ``-n 1'' under multi-node allocation.
   - slurm/294: srun IO buffer reports ENOSPC.
 -- Memory corruption bug fixed, it was causing slurmctld to seg-fault
 -- Non-responding nodes now go from DRAINING to DRAINED state when 
    jobs complete
 -- Do not schedule pending jobs while any job is actively COMPLETING 
    unless the submitted job specifically identifies its nodes (like DPCS)
 -- Reset priority of jobs with priority==1 when a non-responding node 
    starts to respond again
 -- Ignore jobs with priority==1 when establishing new baseline upon 
    slurmctld restart
 -- Make slurmctld/message retry be timer based rather than queue based 
    for better scalability
 -- Slurmctld logging is more concise, using hostlists more
 -- srun --no-allocate used special job_id range to avoid conflicts 
    or premature job termination (purging by slurmctld)
 -- New --jobid=id option in srun to initiate job step under an existing 
    allocation.
 -- Support in srun for TotalView bulk launch.
* Changes in SLURM 0.2.17
=========================
 -- Fixes for reported problems:
   - slurm/279: Hold jobs that can't execute due to DOWN or DRAINED 
     nodes and release when nodes are returned to service.
   - slurm/285: "srun killed due to SIGPIPE"
 -- Support for running job steps on nodes relative to current 
    allocation via srun -r, --relative=n option.
 -- SIGKILL no longer broadcasted to job via srun on task failure unless
    --no-allocate option is used.
 -- Re-enabled "chkconfig --add" in default RPMs.
 -- Backup controller setting proper PID into slurmctld.pid file.
 -- Backup controller restores QSW state each time it assumes control
 -- Backup controller purges old job records before assuming control
    to avoid resurrecting defunct jobs.
 -- Kill jobs on non-responding DRAINING nodes and make their state
    DRAINED.
 -- Save state upon completion of a job's last EPILOG_COMPLETION to