Handling massive workflows
New massive experiments are starting to be created in the Climate DT VM. Experiment a0it has around 10k jobs, and it took around 25 seconds to generate the view from the API. The final payload was 13 MB for the tree view and 10 MB for the graph view. From the user's perspective, this is slow but tolerable.
Experiment a0s9, on the other hand, has around 100k jobs: generating the view took more than a minute, and the payload weighs more than 300 MB. This is excessive for a browser, especially when rendering the resulting graph.
It may be time to start setting limits on how large a graph/tree view retrieved from the API can be, or to find a way to compact the payload once it passes a size threshold.
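A minimal sketch of both ideas, assuming a hypothetical `build_view_payload` helper and a flat list of job dicts (the real Autosubmit API structures may differ): reject views above a job-count limit, and gzip-compress the JSON body of the ones that pass, since repetitive job metadata tends to compress well.

```python
import gzip
import json

# Hypothetical cap on how many jobs a single view may contain.
MAX_JOBS_IN_VIEW = 20_000


def build_view_payload(jobs):
    """Serialize a tree/graph view, refusing oversized ones.

    Returns (compressed_bytes, headers). `jobs` is a list of dicts;
    this is an illustrative sketch, not the actual API implementation.
    """
    if len(jobs) > MAX_JOBS_IN_VIEW:
        # Alternatively, fall back to a summarized/aggregated view here.
        raise ValueError(
            f"View too large: {len(jobs)} jobs exceeds {MAX_JOBS_IN_VIEW}"
        )
    raw = json.dumps({"jobs": jobs}).encode("utf-8")
    # gzip typically shrinks repetitive JSON payloads substantially;
    # the browser decompresses transparently via Content-Encoding.
    compressed = gzip.compress(raw)
    headers = {
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
    }
    return compressed, headers
```

Whether rejecting outright is acceptable, or whether the API should instead return a coarser, aggregated view past the limit, is an open design question; compression alone will not fix the rendering cost of a 100k-node graph on the client side.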