- 02 May, 2013 2 commits
- 01 May, 2013 6 commits
-
Morris Jette authored
Also add a size specification of "%0" to not limit a field's size. For example, "sacct --format=%0ALL" prints everything.
-
Morris Jette authored
-
Danny Auble authored
-
Morris Jette authored
also "-euidevice sn_single".
-
Morris Jette authored
-
Morris Jette authored
Modify slurmctld data structure locking to interleave read and write locks rather than always favoring write locks over read locks.
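Conceptually, the difference in reader admission looks like the minimal sketch below. This is an illustration with made-up names, not slurmctld's actual locks.c; it only shows the read-lock side, and a real interleaving policy must also avoid starving writers.
----------------
#include <pthread.h>

typedef struct {
	pthread_mutex_t mutex;
	pthread_cond_t  readers_ok;
	int readers_active;
	int writer_active;
	int writers_waiting;
} rw_sketch_t;

/* Old policy (writer preference): a reader blocks whenever a writer is
 * active OR merely waiting, so a steady stream of writers starves readers. */
static void read_lock_writer_pref(rw_sketch_t *lk)
{
	pthread_mutex_lock(&lk->mutex);
	while (lk->writer_active || lk->writers_waiting)
		pthread_cond_wait(&lk->readers_ok, &lk->mutex);
	lk->readers_active++;
	pthread_mutex_unlock(&lk->mutex);
}

/* Relaxed policy (sketch): readers wait only while a writer actually holds
 * the lock, so reads already queued get a turn each time a writer releases
 * instead of writes always winning. */
static void read_lock_relaxed(rw_sketch_t *lk)
{
	pthread_mutex_lock(&lk->mutex);
	while (lk->writer_active)
		pthread_cond_wait(&lk->readers_ok, &lk->mutex);
	lk->readers_active++;
	pthread_mutex_unlock(&lk->mutex);
}
----------------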
-
- 30 Apr, 2013 3 commits
-
Morris Jette authored
Make the timeout configurable at build time by defining SAVE_MAX_WAIT.
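A minimal sketch of the build-time-override pattern this implies; the default of 5 seconds and the polling helper are illustrative assumptions, not SLURM's actual state-save code. Compiling with e.g. CFLAGS="-DSAVE_MAX_WAIT=10" would raise the timeout.
----------------
#include <unistd.h>

/* Use the compiler-supplied value if one was given (-DSAVE_MAX_WAIT=...),
 * otherwise fall back to a default.  The default here is an assumption. */
#ifndef SAVE_MAX_WAIT
#define SAVE_MAX_WAIT 5
#endif

/* Hypothetical helper: wait up to SAVE_MAX_WAIT seconds for a state-save
 * operation to report completion via the supplied callback. */
static int wait_for_save(int (*save_done)(void))
{
	for (int waited = 0; waited < SAVE_MAX_WAIT; waited++) {
		if (save_done())
			return 0;   /* saved in time */
		sleep(1);
	}
	return -1;                  /* timed out */
}
----------------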
-
Olli-Pekka Lehto authored
Dear all,

As a quick fix, I have put together this script to help manage native and symmetric MPI runs within SLURM. It's a bit bare-bones currently, but I needed to get it working quickly :) It does not provide tight integration between the scheduler and MPI daemons and requires a slot on the host even when running fully on the MIC, so it's far from an optimal solution, but it could be a stopgap. It's inspired by the TACC Stampede documentation; they seem to have a similar script in place.

It's fairly simple: you provide the names of the MIC binary (with -m) and host binary (with -c). The host MPI/OpenMP parameters are given as usual and the Xeon Phi side parameters as environment variables (MIC_PPN, MIC_OMP_NUM_THREADS). Currently it supports only 1 card per host, but extending it should be simple enough.

Here are a couple of links to documentation:

Our prototype cluster documentation: https://confluence.csc.fi/display/HPCproto/HPC+Prototypes#HPCPrototypes-XeonPhiDevelopment

Presentation at the PRACE Spring School in Umeå earlier this week: https://www.hpc2n.umu.se/sites/default/files/1.03%20CSC%20Cluster%20Introduction.pdf

Feel free to include this in the contribs directory. It might need a bit of cleanup, though, and I don't know when I will have the time to do this. I have also added support for the TotalView debugger (provided it's installed and configured properly for Xeon Phi usage).

Future ideas: For the native MIC client, I've been testing it out a bit and looking at ways to minimize the changes needed for support. The two major challenges seem to be scheduling and affinity. I think it might be necessary to put it into a specific topology plugin, like the one for BG/Q, but that looks like a lot of work.

Best regards, Olli-Pekka
-
Danny Auble authored
-
- 29 Apr, 2013 3 commits
-
Morris Jette authored
Avoid placing pending jobs in the AdminHold state due to backfill scheduler interactions with advanced reservations. Specifically, if the backfill scheduler found that a pending job could only be scheduled after its advanced reservation ends, the job was assigned a priority of zero (AdminHold).
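A compilable sketch of the behavior change described above; the structure and field names are hypothetical, not taken from SLURM's backfill code.
----------------
#include <stdbool.h>
#include <time.h>

struct fake_job {                /* hypothetical stand-in for SLURM's job record */
	unsigned int priority;   /* priority 0 is treated as a held job */
	time_t resv_end_time;    /* end of the job's advanced reservation */
};

/* Old behavior: zero the priority, leaving the job in AdminHold.
 * New behavior: just skip the job for this backfill cycle and leave its
 * priority untouched so it can be reconsidered later. */
static bool backfill_skip_job(struct fake_job *job, time_t expected_start)
{
	if (expected_start > job->resv_end_time) {
		/* job->priority = 0;   <-- the old AdminHold side effect */
		return true;          /* skip this cycle only */
	}
	return false;
}
----------------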
-
Danny Auble authored
-
Danny Auble authored
undefined variable.
-
- 26 Apr, 2013 3 commits
-
Danny Auble authored
-
Danny Auble authored
requested and allocated.
-
Phil Sharfstein authored
-
- 25 Apr, 2013 2 commits
-
Danny Auble authored
-
Danny Auble authored
-
- 24 Apr, 2013 1 commit
-
Danny Auble authored
user requests. We have found that any srun/aprun afterwards will work on a subset of resources. Before the next release, remove the vestigial code; it is left in place for now just in case we find something out of the ordinary and have to revert.
-
- 23 Apr, 2013 3 commits
-
David Bigagli authored
thread.
-
Danny Auble authored
-
Danny Auble authored
allocation as taking up the entire node instead of just the part of the node allocated, and always enforce exclusive on a step request.
-
- 19 Apr, 2013 3 commits
-
Danny Auble authored
to attempt to signal tasks on the frontend node.
-
Danny Auble authored
deny the job instead of holding it.
-
Danny Auble authored
to attempt to signal tasks on the frontend node.
-
- 18 Apr, 2013 1 commit
-
Danny Auble authored
deny the job instead of holding it.
-
- 17 Apr, 2013 3 commits
-
Morris Jette authored
Fix for bug 268
-
Danny Auble authored
to implicitly create full system block.
-
Danny Auble authored
cpu count would be reflected correctly.
-
- 16 Apr, 2013 2 commits
-
Danny Auble authored
-
Danny Auble authored
-
- 12 Apr, 2013 3 commits
-
Danny Auble authored
-
Danny Auble authored
plugins. For those doing development who want to use this, follow the model set forth in the acct_gather_energy_ipmi plugin.
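For orientation only, the exported symbols every SLURM plugin provides are sketched below; the plugin name and type string are made up, and the real per-plugin operation functions (omitted here) should be copied from the acct_gather_energy_ipmi sources as suggested above.
----------------
#include <stdint.h>

/* Symbols SLURM's plugin loader looks for; the values here are hypothetical. */
const char     plugin_name[]  = "Example gather plugin";
const char     plugin_type[]  = "acct_gather_energy/example";
const uint32_t plugin_version = 100;

/* Called when the plugin is loaded/unloaded; real plugins return
 * SLURM_SUCCESS, a plain 0 is used here only to keep the sketch standalone. */
int init(void) { return 0; }
int fini(void) { return 0; }
----------------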
-
Morris Jette authored
We're in the process of setting up a few GPU nodes in our cluster, and want to use Gres to control access to them. Currently, we have activated one node with 2 GPUs. The gres.conf file on that node reads:
----------------
Name=gpu Count=2 File=/dev/nvidia[0-1]
Name=localtmp Count=1800
----------------
(the localtmp is just counting access to local tmp disk.)

Nodes without GPUs have gres.conf files like this:
----------------
Name=gpu Count=0
Name=localtmp Count=90
----------------

slurm.conf contains the following:
----------------
GresTypes=gpu,localtmp
Nodename=DEFAULT Sockets=2 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=62976 Gres=localtmp:90 State=unknown
[...]
Nodename=c19-[1-16] NodeHostname=compute-19-[1-16] Weight=15848 CoresPerSocket=4 Gres=localtmp:1800,gpu:2 Feature=rack19,intel,ib
----------------

Submitting a job with sbatch --gres:1 ... sets the CUDA_VISIBLE_DEVICES for the job. However, the values seem a bit strange:
- If we submit one job with --gres:1, CUDA_VISIBLE_DEVICES gets the value 0.
- If we submit two jobs with --gres:1 at the same time, CUDA_VISIBLE_DEVICES gets the value 0 for one job, and 1633906540 for the other.
- If we submit one job with --gres:2, CUDA_VISIBLE_DEVICES gets the value 0,1633906540
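One purely arithmetic observation about the stray value reported above: 1633906540 is 0x61636f6c, and those four bytes read in little-endian memory order are 'l' 'o' 'c' 'a', the start of the string "localtmp". That hints the variable was picking up string memory, though this is an inference from the number alone, not a confirmed diagnosis. The check below just prints the bytes.
----------------
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint32_t v = 1633906540u;        /* the odd CUDA_VISIBLE_DEVICES value */
	char bytes[5] = {0};
	memcpy(bytes, &v, sizeof(v));    /* on a little-endian host: "loca" */
	printf("0x%08x -> \"%s\"\n", (unsigned int)v, bytes);
	return 0;
}
----------------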
-
- 11 Apr, 2013 3 commits
-
Danny Auble authored
APRUN_DEFAULT_MEMORY env var for aprun. In this scenario the option will not be displayed when used with --launch-cmd.
-
Danny Auble authored
per cpu.
-
Danny Auble authored
per cpu.
-
- 10 Apr, 2013 2 commits
-
Morris Jette authored
If a task count is specified, but no tasks-per-node, then set the tasks per node in the BASIL reservation request.
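A minimal sketch of the obvious ceiling derivation this implies; the function name and rounding choice are assumptions, not necessarily what the ALPS/BASIL code does.
----------------
#include <stdint.h>

/* Derive tasks-per-node when only a total task count was given. */
static uint32_t basil_tasks_per_node(uint32_t ntasks, uint32_t node_cnt)
{
	if (node_cnt == 0)
		return ntasks;                      /* avoid divide-by-zero */
	return (ntasks + node_cnt - 1) / node_cnt;  /* ceiling division */
}
----------------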
-
Danny Auble authored
as the hosts given.
-