1. 24 Feb, 2012 1 commit
  2. 23 Feb, 2012 1 commit
  3. 20 Feb, 2012 1 commit
  4. 06 Feb, 2012 1 commit
    • The openpty(3) call used by slurmstepd to allocate a pseudo-terminal · 2a1c08b0
      Danny Auble authored
      is a convenience function in BSD and glibc that internally calls
      the equivalent of
      
          int masterfd = open("/dev/ptmx", flags);
          grantpt(masterfd);
          unlockpt(masterfd);
          const char *slave = ptsname(masterfd);
          int slavefd = open(slave, O_RDWR | O_NOCTTY);
      
      (in pseudocode)
      
      On Linux, with some combinations of glibc/kernel (in this
      case glibc-2.14/Linux-3.1), the equivalent of grantpt(3) was failing
      in slurmstepd with EPERM, because the allocated pty was being given
      root ownership instead of being owned by the user running the SLURM job.
      
      From the POSIX description of grantpt:
      
       "The grantpt() function shall change the mode and ownership of the
        slave pseudo-terminal device... The user ID of the slave shall
        be set to the real UID of the calling process..."
      
       http://pubs.opengroup.org/onlinepubs/007904875/functions/grantpt.html
      
      This means that for POSIX-compliance, the real user id of slurmstepd
      must be the user executing the SLURM job at the time openpty(3) is
      called. Unfortunately, the real user id of slurmstepd at this
      point is still root, and only the effective uid is set to the user.
      
      This patch is a work-around that uses the (non-portable) setresuid(2)
      system call to reset the real and effective uids of the slurmstepd
      process to the job user, but keep the saved uid of root. Then after
      the openpty(3) call, the previous credentials are reestablished
      using the same call.
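      
      A minimal sketch of that work-around (assuming real uid 0, effective uid
      set to the job user, and saved uid 0 on entry; the function and variable
      names here are illustrative, not the actual slurmstepd code):
      
          #define _GNU_SOURCE
          #include <pty.h>        /* openpty(3); link with -lutil */
          #include <unistd.h>     /* setresuid(2), non-portable */
          #include <sys/types.h>
          
          static int open_pty_as_user(uid_t job_uid, int *master, int *slave)
          {
              /* Make the job user the real and effective uid so grantpt()
               * assigns the slave pty to that user; -1 keeps the saved uid (root). */
              if (setresuid(job_uid, job_uid, -1) < 0)
                  return -1;
          
              int rc = openpty(master, slave, NULL, NULL, NULL);
          
              /* Re-establish the previous credentials with the same call;
               * the saved root uid makes the switch back permissible. */
              if (setresuid(0, job_uid, -1) < 0)
                  return -1;
          
              return rc;
          }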
  5. 03 Feb, 2012 1 commit
    • Fix for srun with --exclude and --nodes · a4551158
      Morris Jette authored
      Fix for srun running within an existing allocation with the --exclude
      option and a --nodes count small enough that the excluded nodes should
      not be needed.
      
          > salloc -N 8
          salloc: Granted job allocation 1000008
          > srun -N 2 -n 2 --exclude=tux3 hostname
          srun: error: Unable to create job step: Requested node configuration is not available
      
      Patch from Phil Eckert, LLNL.
  6. 02 Feb, 2012 1 commit
  7. 01 Feb, 2012 2 commits
    • Fix job requeue bug · c0a7a7a4
      Morris Jette authored
      Fix bug when a requeued batch job is scheduled to run on a different node
      zero, but attempts job launch on the old node zero, causing the fatal error
      "Invalid host_index -1 for job #"
    • Avoid slurmctld abort due to bad pointer · 43936335
      Morris Jette authored
      Avoid slurmctld abort due to bad pointer when setting an advanced
      reservation MAINT flag if it contains no nodes (only licenses).
  8. 31 Jan, 2012 3 commits
  9. 27 Jan, 2012 2 commits
  10. 25 Jan, 2012 1 commit
    • Set DEFAULT flag in partition structure · 9f4ef925
      Morris Jette authored
      Set DEFAULT flag in partition structure when slurmctld reads the
      configuration file. Patch from Rémi Palancher. Note the flag is set
      when the information is sent via RPC for sinfo.
  11. 24 Jan, 2012 1 commit
  12. 20 Jan, 2012 1 commit
  13. 19 Jan, 2012 1 commit
  14. 18 Jan, 2012 1 commit
  15. 13 Jan, 2012 3 commits
  16. 09 Jan, 2012 2 commits
  17. 28 Dec, 2011 1 commit
  18. 21 Dec, 2011 1 commit
  19. 19 Dec, 2011 1 commit
  20. 17 Dec, 2011 1 commit
  21. 15 Dec, 2011 1 commit
  22. 14 Dec, 2011 1 commit
  23. 09 Dec, 2011 4 commits
  24. 08 Dec, 2011 1 commit
  25. 06 Dec, 2011 1 commit
    • Permit pending job to exceed partition limit with QOS flag change. · 0e1abeda
      Morris Jette authored
      One of our testers discovered a regression in version 2.3.1.  If a job is
      pending due to PartitionNodeLimit and the limit is relieved with a
      'sacctmgr modify qos name=<qos name> set flags=partitionmaxnodes' command, new jobs
      exceeding the partition limit (but not the QOS limit) are allowed to run.
      However, the pending job is never allowed to run.  Attached is a patch to
      address this problem.  FYI, this problem doesn't exist in version 2.4.
      Patch from Bill Brophy, Bull.
  26. 05 Dec, 2011 2 commits
  27. 02 Dec, 2011 1 commit
  28. 01 Dec, 2011 1 commit
    • Fix for "fatal: cons_res: sync loop not progressing" · d70a9ac4
      jette authored
      This was due to a bug in select/cons_res with some configuration
      options and job options, especially if there is more than one
      thread per core and the job option includes "--threads-per-core=1".
      Fixes problem reported by CSCS.
  29. 30 Nov, 2011 1 commit