# background

Page last updated Jul 02, 2020 by Miguel Castrillo.
A new object-oriented design and a refactoring of the Python code have been carried out in Autosubmit, and there is now a new module to create experiments from scratch and to store small pieces of information in a SQLite database.
Thanks to this, it is also possible to create, manage and monitor different types of experiments and to deal with different queue schedulers.
{{file:scheduler.png}}
## What is a Job?
A job, in HPC jargon, is a program submitted to the queue system. It can be serial or multi-threaded, use different types of queues and use any of the directives that the scheduler of the HPC system provides.
Within Autosubmit a Job class has been created, and in the rest of the documentation the term "Job" refers to the Python object of that class.
A job has several attributes:
* job.name: this name must be unique when several jobs are created.
* job.id: this job id is 0 by construction and is set by the scheduler, hence it is only unique once the job has been submitted.
* job.status: the status is updated regularly and tells Autosubmit whether a job is ready to be submitted, completed, queuing, etc.
* job.type: each job type has a different template, so that, for example, multi-processor and serial jobs can be treated differently.
* job.failcount: this counter keeps track of the number of times a job has failed. At the moment, if a job fails more than 4 times it is cancelled and not resubmitted.
The dependency between jobs is handled through a genealogy of parents and children. Each Job has two more attributes:
* job.children: a list of dependent jobs. These children can only be launched once this job is completed.
* job.parents: the list of jobs whose completion this job has to wait for. Only when this list is empty can the job be submitted.
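The attributes above can be summarized in a minimal sketch (illustrative only; the class and method names here are hypothetical and differ from the real Autosubmit code):

```python
class Job:
    """Minimal sketch of a job object (illustrative, not the real Autosubmit class)."""

    def __init__(self, name, job_type):
        self.name = name          # must be unique among the created jobs
        self.id = 0               # 0 by construction; set by the scheduler on submission
        self.status = "READY"     # e.g. READY, QUEUING, RUNNING, COMPLETED, FAILED
        self.type = job_type      # selects the template (serial, multi-processor, ...)
        self.failcount = 0        # cancelled and not resubmitted after 4 failures
        self.children = []        # jobs that can only start once this one completes
        self.parents = []         # jobs whose completion this one waits for

    def can_be_submitted(self):
        # A job may only be submitted when every parent has completed
        return all(p.status == "COMPLETED" for p in self.parents)


# Example: a post-processing job that depends on a simulation job
sim = Job("sim_001", "parallel")
post = Job("post_001", "serial")
sim.children.append(post)
post.parents.append(sim)
```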
## What is a JobList?
The JobList module groups all the functions necessary to manage a list of jobs. A joblist object can be sorted by status, type, job id or name, and sublists can also be created from it.
The updateJobList() function is called at every loop of Autosubmit and does what it says on the tin: the status of a job is only guaranteed to be current directly after a call to that function.
The SaveJobList() function saves the joblist in a pickle file, which can then be reloaded, for example for a restart.
Other functions, like updateGenealogy(), are only called once, after a joblist is created. At creation time the dependencies between jobs can only be expressed with job names; the updateGenealogy() function replaces the children and parents names with the corresponding job objects.
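The name-to-object replacement can be sketched as follows (a hypothetical helper, assuming each job initially stores its parents and children as name strings; the real updateGenealogy() in Autosubmit differs):

```python
class _Job:
    """Tiny stand-in for a job whose dependencies start out as names."""

    def __init__(self, name, parents=(), children=()):
        self.name = name
        self.parents = list(parents)    # initially strings (job names)
        self.children = list(children)  # initially strings (job names)


def update_genealogy(job_list):
    """Replace parent/child *names* with the job *objects* they refer to."""
    by_name = {job.name: job for job in job_list}
    for job in job_list:
        job.parents = [by_name[n] for n in job.parents]
        job.children = [by_name[n] for n in job.children]


# Example: two jobs whose dependency is first known only by name
a = _Job("a", children=["b"])
b = _Job("b", parents=["a"])
update_genealogy([a, b])
```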
## General HPCQueue
Autosubmit needs to interact with the queue system regularly to know how many jobs are in the queue and thus how many jobs can be submitted. The HPCQueue abstract class provides all the functions necessary to communicate with the scheduler, so that at any time a job can be checked, cancelled or submitted and the state of the queue assessed.
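Such an abstract interface could be sketched with Python's abc module (method names here are hypothetical, not the real Autosubmit API):

```python
from abc import ABC, abstractmethod


class HPCQueue(ABC):
    """Sketch of an abstract queue interface every concrete queue must implement."""

    @abstractmethod
    def submit_job(self, script_path):
        """Submit a job script and return the scheduler-assigned job id."""

    @abstractmethod
    def check_job(self, job_id):
        """Return the current scheduler status of the given job."""

    @abstractmethod
    def cancel_job(self, job_id):
        """Cancel the given job."""

    @abstractmethod
    def jobs_in_queue(self):
        """Return how many of our jobs are currently in the queue."""
```

Because the methods are abstract, HPCQueue itself cannot be instantiated; only concrete subclasses that implement all four methods can.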
## Concrete HPCQueue
A concrete queue is a specialization of HPCQueue: it inherits all the functions common to a general queue and adds concrete attributes and concrete methods for each queue system.
Autosubmit currently has concrete modules that wrap the queue commands of SGE, LSF, SLURM, PBS and ecaccess.
A concrete queue has several attributes:
* queue.host: the host name or IP used to set up connections.
* queue.job_status: each job status has a different code depending on the queue scheduler, so the responses of each concrete HPCQueue can be treated differently.
* queue.submit_cmd: the concrete command to submit jobs.
* queue.checkjob_cmd: the concrete command to check a job's status.
* queue.cancel_cmd: the concrete command to cancel jobs.
{{file:queues.png}}
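For illustration, a concrete queue for SLURM might fill in those attributes like this (sbatch, squeue and scancel are real SLURM commands, but the class, attribute names and status mapping here are only a sketch, not Autosubmit's actual module):

```python
class SlurmQueue:
    """Sketch of a concrete queue wrapping SLURM commands (illustrative)."""

    def __init__(self, host):
        self.host = host                  # host name or IP to set up connections
        self.submit_cmd = "sbatch"        # SLURM's submit command
        self.checkjob_cmd = "squeue -j"   # SLURM's job status command
        self.cancel_cmd = "scancel"       # SLURM's cancel command
        # Map scheduler-specific status codes onto common Autosubmit statuses
        self.job_status = {
            "PD": "QUEUING",
            "R": "RUNNING",
            "CD": "COMPLETED",
            "F": "FAILED",
        }

    def parse_status(self, scheduler_code):
        """Translate a SLURM status code into a common status."""
        return self.job_status.get(scheduler_code, "UNKNOWN")


queue = SlurmQueue("login.example.org")
```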
## Monitoring the experiment
Additional functionality to monitor an experiment has been added to Autosubmit.
From the joblist, it is possible to create a "tree" that visualizes the status of the joblist.
Each status has a different color: green = running, red = failed, etc.
{{file:job_list_tree.png}}
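Conceptually, the color scheme is just a mapping from job status to color. A minimal sketch (only the running/failed colors are stated above; the other entries are illustrative assumptions, not Autosubmit's actual palette):

```python
# Illustrative status-to-color mapping for the monitoring tree.
# Only RUNNING=green and FAILED=red are documented; the rest are assumptions.
STATUS_COLORS = {
    "READY": "cyan",
    "QUEUING": "pink",
    "RUNNING": "green",
    "COMPLETED": "yellow",
    "FAILED": "red",
}


def color_for(status):
    """Return the display color for a job status, defaulting to white."""
    return STATUS_COLORS.get(status, "white")
```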
## Job Wrapper
Supercomputers are currently increasing their number of cores rapidly, but the rules for making use of them are also becoming more strict (e.g. a minimum of 2000 cores per job). This is not feasible with the current state of EC-Earth, which is difficult to scale beyond a few hundred cores.
In order to provide a solution to the climate community, we have been making some tests with a job wrapper. The idea behind it is to run several ensemble members at the same time under the control of a Python script. We upload a script for each ensemble member we want to run. The wrapper has to allocate resources for all of the scripts to run (i.e. if each script requires 45 CPUs and we want to run 10 of them, that would be 450). The wrapping Python script creates a thread for every ensemble member and runs them.
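A minimal sketch of such a wrapper, assuming one shell script per ensemble member and one Python thread per script (function names are hypothetical; the surrounding batch job is assumed to have already allocated resources for all members combined, e.g. 10 members x 45 CPUs = 450 CPUs):

```python
import subprocess
import threading


def run_member(script_path):
    """Run one ensemble member's script as a subprocess."""
    subprocess.run(["bash", script_path], check=True)


def run_wrapper(member_scripts, runner=run_member):
    """Launch one thread per ensemble member and wait for all of them.

    `runner` is the callable executed in each thread; it defaults to
    launching the member's script but can be swapped out for testing.
    """
    threads = [threading.Thread(target=runner, args=(s,)) for s in member_scripts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```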
Further information:
- International Conference on Computational Science (Cairns, Australia, June 10 - 12, 2014), Impact of I/O and Data Management in Ensemble Large Scale Climate Forecasting Using EC-Earth3. {{file:_poster_masif_iccs_2014.pdf}}
- {{file:_masif_procs_2014.pdf|Asif}}, M., A. Cencerrado, O. Mula-Valls, D. Manubens, F.J. Doblas-Reyes and A. Cortés (2014). Impact of I/O and data management in ensemble large scale climate forecasting using EC-Earth3. [[http://www.sciencedirect.com/science/article/pii/S1877050914003986|Procedia Computer Science, 29, 2370-2379, 10.1016/j.procs.2014.05.221]] (SPECS, IS-ENES2, INCITE).