### Background
A typical climate forecast experiment is a run of a climate model on a supercomputer, with a forecast length ranging from a few months to several years. An experiment may have one or more start-dates, and every start-date may comprise one or many members. The full forecast period of the experiment can be divided into a number of chunks of fixed length by exploiting the model's restart options. Furthermore, in terms of computing operations, every chunk has two main sections: a parallel section, where the actual model run is performed on the computing cores of the supercomputer, and one or more serial sections for other necessary operations such as post-processing the model output, archiving it, and cleaning the disk space so that the experiment can proceed smoothly.
![experiment_new](uploads/c063ab333a77f59cdcca49f13e6e4bc0/experiment_new.png)
The sample experiment above consists of 10 start-dates from 1960 to 2005, one every 5 years, each independent of the others and comprising 5 members. Every member is also independent and is divided into 10 chunks that depend on each other. Let us suppose that the forecast length of each chunk is one year and that every chunk comprises three types of jobs: a simulation (Sim), a post-processing (Post) and an archiving and cleaning job (Clean). With this exemplary experiment, one member of one start-date comprises 30 jobs, and 1500 jobs will be run in total to complete the experiment, as the sketch below illustrates. In short, there is a need for a system to automate this type of experiment and optimize the use of resources.
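A minimal Python sketch of this bookkeeping, with hypothetical naming conventions, that enumerates the jobs of the example experiment and confirms the counts of 30 jobs per member and 1500 jobs in total:

```python
from itertools import product

# Structure of the example experiment described above (hypothetical identifiers).
start_dates = [str(year) for year in range(1960, 2010, 5)]   # 10 start-dates, every 5 years
members = [f"fc{i}" for i in range(5)]                       # 5 members per start-date
chunks = range(1, 11)                                        # 10 yearly chunks per member
job_types = ["Sim", "Post", "Clean"]                         # 3 jobs per chunk

# Every combination of start-date, member, chunk and job type is one job.
jobs = [f"{sd}_{m}_{c}_{jt}"
        for sd, m, c, jt in product(start_dates, members, chunks, job_types)]

jobs_per_member = len(chunks) * len(job_types)   # 10 * 3 = 30
total_jobs = len(jobs)                           # 10 * 5 * 30 = 1500
print(jobs_per_member, total_jobs)
```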
Originally, Autosubmit consisted of a single Perl script (written by Xavi Abellan*) that could submit a sequence of jobs with different parameters to the queue.
All the jobs shared a common template; Autosubmit would fill this template with different parameter values and submit the jobs to the queue.
Autosubmit acted as a wrapper around the scheduler, monitoring the number of jobs submitted or queuing and submitting a new one as soon as a slot in the queue became available, until the entire sequence of jobs had been submitted.
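A rough Python sketch of that wrapper behaviour, assuming hypothetical `count_queued_jobs` and `submit` helpers in place of the scheduler commands the original Perl script invoked:

```python
import time

MAX_QUEUED = 5   # hypothetical limit on jobs allowed in the queue at once

def submission_loop(pending_jobs, count_queued_jobs, submit, poll_seconds=60):
    """Submit jobs one by one, keeping at most MAX_QUEUED in the scheduler queue."""
    remaining = list(pending_jobs)
    while remaining:
        # Ask the scheduler how many of our jobs are still submitted or queuing.
        if count_queued_jobs() < MAX_QUEUED:
            submit(remaining.pop(0))   # a slot appeared: submit the next job
        else:
            time.sleep(poll_seconds)   # otherwise wait and poll again
```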
......