  1. 12 Oct, 2011 5 commits
    • cgroups: Add MaxRAMPercent and MaxSwapPercent config parameters · f8afbebc
      Mark A. Grondona authored
      As a failsafe, we may want to put a hard limit on memory.limit_in_bytes
      and memory.memsw.limit_in_bytes when using cgroups. This patch adds
      MaxRAMPercent and MaxSwapPercent which are taken as percentages of
      available RAM (RealMemory as reported by slurmd), and which will be
      applied as upper bounds when creating memory controller cgroups.
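      A minimal sketch of the intent (not the actual SLURM code; the function
      and parameter names are invented for the example): the configured
      percentage of RealMemory becomes a hard ceiling on whatever limit would
      otherwise be written into the cgroup.

      /* Sketch only: clamp a requested memory limit (in MB) to a percentage
       * of the node's RealMemory, as MaxRAMPercent / MaxSwapPercent would. */
      #include <stdint.h>

      static uint64_t apply_percent_cap(uint64_t requested_mb,
                                        uint64_t real_memory_mb,
                                        double max_percent)
      {
          uint64_t cap_mb = (uint64_t)((max_percent / 100.0) * real_memory_mb);
          return (requested_mb > cap_mb) ? cap_mb : requested_mb;
      }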
    • Propagate real_memory_size to slurmstepd at job start · 4cf2f340
      Mark A. Grondona authored
      Add conf->real_memory_size to the list of slurmd_conf_t members that
      are propagated to slurmstepd during a job step launch. This makes the
      amount of RAM on the system (as determined by slurmd) available to
      slurmstepd and its plugins without having to recalculate the value.
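      As a rough illustration of the pattern (not SLURM's actual pack/unpack
      API; the helper names are invented), propagating one more field means
      serializing it on the slurmd side and deserializing it in slurmstepd:

      /* Illustrative only: pass one extra 32-bit field (RAM size in MB)
       * from a parent daemon to a child process over a pipe. */
      #include <stdint.h>
      #include <unistd.h>

      static int send_u32(int fd, uint32_t val)
      {
          return (write(fd, &val, sizeof(val)) == (ssize_t) sizeof(val)) ? 0 : -1;
      }

      static int recv_u32(int fd, uint32_t *val)
      {
          return (read(fd, val, sizeof(*val)) == (ssize_t) sizeof(*val)) ? 0 : -1;
      }

      /* slurmd side:      send_u32(fd, conf->real_memory_size);
       * slurmstepd side:  recv_u32(fd, &conf->real_memory_size);  */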
    • task/cgroup: Refactor task_cgroup_memory_create · 941262a3
      Mark A. Grondona authored
      There was some duplicated code in task_cgroup_memory_create. To make this
      code easier to extend in the future, refactor the duplicated logic into a
      common function, memcg_initialize().
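      A sketch of the shape such a helper can take (illustrative only; the real
      code goes through SLURM's internal cgroup wrappers and has a different
      signature), with the job-level and step-level paths both calling a single
      function that creates the memory cgroup and writes its limits:

      /* Illustrative only: create a memory cgroup and set its RAM and swap
       * limits; the job and step code paths would share this helper. */
      #include <errno.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/stat.h>

      static int memcg_initialize_sketch(const char *cg_path,
                                         uint64_t ram_bytes, uint64_t swap_bytes)
      {
          char path[4096];
          FILE *fp;

          if (mkdir(cg_path, 0755) && errno != EEXIST)
              return -1;

          snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", cg_path);
          if (!(fp = fopen(path, "w")))
              return -1;
          fprintf(fp, "%llu", (unsigned long long) ram_bytes);
          fclose(fp);

          snprintf(path, sizeof(path), "%s/memory.memsw.limit_in_bytes", cg_path);
          if (!(fp = fopen(path, "w")))
              return -1;
          fprintf(fp, "%llu", (unsigned long long) swap_bytes);
          fclose(fp);
          return 0;
      }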
    • cgroups: Support configurable cgroup mount dir in release agent · fa6b256e
      Mark A. Grondona authored
      The example cgroup release agent packaged and installed with
      SLURM assumes a base directory of /cgroup for all mounted
      subsystems. Since the mount point is now configurable in SLURM,
      this script needs to be augmented to determine the location
      of the subsystem mount point at runtime.
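      The release agent itself is a small script, but the runtime lookup it has
      to perform is easy to sketch in C (illustrative only; the function name is
      invented): scan /proc/mounts for the cgroup filesystem carrying the
      subsystem of interest instead of assuming /cgroup.

      /* Illustrative only: find the mount point of a cgroup subsystem
       * (e.g. "memory") at runtime by scanning /proc/mounts. */
      #include <mntent.h>
      #include <stdio.h>
      #include <string.h>

      static int find_cgroup_mount(const char *subsys, char *out, size_t len)
      {
          FILE *fp = setmntent("/proc/mounts", "r");
          struct mntent *m;
          int rc = -1;

          if (!fp)
              return -1;
          while ((m = getmntent(fp))) {
              if (!strcmp(m->mnt_type, "cgroup") && hasmntopt(m, subsys)) {
                  snprintf(out, len, "%s", m->mnt_dir);
                  rc = 0;
                  break;
              }
          }
          endmntent(fp);
          return rc;
      }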
    • cgroups: Allow cgroup mount point to be configurable · c9ea11b5
      Mark A. Grondona authored
      The cgroups code currently assumes cgroup subsystems are mounted under
      /cgroup, which is not the ideal location in many situations. Add a new
      cgroup.conf parameter to redefine the mount point to an arbitrary location
      (for example, some systems may already have cgroupfs mounted under
      /dev/cgroup or /sys/fs/cgroup).
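      For illustration only (the exact cgroup.conf parameter name and default
      shown here are assumptions, not quoted from the code), the usual pattern
      is a configured path that overrides a compiled-in default:

      /* Illustrative only: a site might set something like
       *     CgroupMountpoint=/sys/fs/cgroup
       * in cgroup.conf; when nothing is configured, fall back to the old
       * default of /cgroup. */
      static const char *cgroup_mount_dir(const char *configured_dir)
      {
          return (configured_dir && configured_dir[0]) ? configured_dir
                                                       : "/cgroup";
      }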
  2. 07 Oct, 2011 1 commit
  3. 05 Oct, 2011 2 commits
  4. 04 Oct, 2011 3 commits
  5. 03 Oct, 2011 1 commit
  6. 30 Sep, 2011 4 commits
  7. 29 Sep, 2011 6 commits
  8. 28 Sep, 2011 4 commits
  9. 27 Sep, 2011 1 commit
    • Allow job owner to use scontrol notify · 141d87a4
      Mark A. Grondona authored
      The slurmctld code that processes job notify messages unnecessarily
      restricts these messages to those sent by the slurm user or root. This
      patch allows users to send notifications to their own jobs.
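      The change boils down to adding an ownership test alongside the existing
      privileged-user test; a minimal sketch (names are illustrative, not the
      actual slurmctld code):

      /* Illustrative only: permit a job notify request from root, the
       * SlurmUser, or (with this patch) the job's own owner. */
      #include <stdbool.h>
      #include <sys/types.h>

      static bool notify_allowed(uid_t request_uid, uid_t slurm_user_uid,
                                 uid_t job_owner_uid)
      {
          return (request_uid == 0) ||              /* root            */
                 (request_uid == slurm_user_uid) || /* SlurmUser       */
                 (request_uid == job_owner_uid);    /* job owner (new) */
      }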
  10. 26 Sep, 2011 4 commits
  11. 19 Sep, 2011 1 commit
  12. 17 Sep, 2011 1 commit
  13. 16 Sep, 2011 2 commits
    • Problem using salloc/mpirun with task affinity socket binding · 98b203d4
      Morris Jette authored
      salloc/mpirun does not play well with task affinity socket binding. The following example illustrates the problem.
      
      [sulu] (slurm) mnp> salloc -p bones-only -N1-1 -n3 --cpu_bind=socket mpirun cat /proc/self/status | grep Cpus_allowed_list
      salloc: Granted job allocation 387
      --------------------------------------------------------------------------
      An invalid physical processor id was returned ...
      
      The problem is that for mpirun jobs Slurm launches only a single task, regardless of the value of -n. This confuses the socket binding logic in task/affinity: the task is bound to only a single CPU instead of all the allocated CPUs on the socket. When MPI then attempts to bind to any of the other allocated CPUs on the socket, it gets the "invalid physical processor id" error.

      Note that the problem may occur even if socket binding is not explicitly requested by the user. If task/affinity is configured and the allocated CPUs amount to a whole number of sockets, Slurm will use "implicit auto binding" to sockets, triggering the problem.
      Patch from Martin Perry (Bull).
    • Describe mechanism to reserve CPUs rather than whole nodes · 7e181113
      Morris Jette authored
      Update the reservation web page to describe the mechanism for reserving CPUs rather than whole nodes, and provide an example.
  14. 15 Sep, 2011 3 commits
  15. 14 Sep, 2011 2 commits