Efficiency of s2dv functions regarding Apply()
After running some tests and re-reading the multiApply documentation, I am leaving a summary of the efficiency of general s2dv functions here.
Here are some relevant points from multiApply issue 3:
When the function is simple and fast, Apply() with multiple cores can only be as fast as apply(), at the cost of much larger memory usage.
My tests (e.g., with mean()) show the same result: Apply() is never faster than apply(), and it even becomes slower when too many cores are used. As for memory usage, I don't have a conclusion yet because my profiling runs with Rprof() gave me inconsistent results. But we know that multiApply usually tends to consume much more resources.
apply() should be recommended over Apply() for cases where functions are to be applied over large margins of a single data array.
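To make the comparison concrete, here is a minimal timing sketch along the lines of my mean() test (array sizes and dimension names are illustrative, not my exact setup):

```r
# Time mean() over the 'time' dimension of a single large array,
# with base apply() vs multiApply::Apply() at 1 and 4 cores.
library(multiApply)

data <- array(rnorm(100 * 50 * 1000),
              dim = c(sdate = 100, member = 50, time = 1000))

system.time(apply(data, c(1, 2), mean))            # base apply()
system.time(Apply(list(data), target_dims = 'time',
                  fun = mean, ncores = 1))         # Apply(), 1 core
system.time(Apply(list(data), target_dims = 'time',
                  fun = mean, ncores = 4))         # Apply(), 4 cores
```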
This points to the criteria for preferring apply(): (1) few target dimensions, (2) a single data array, and (3) (following from the first point above) a simple function.
Based on these criteria, some s2dv functions can be improved. For example, MeanDims() doesn't need to use Apply(): multiple cores won't speed up the computation even when the data are large (I tested with 1.5 GB of data). In the inner function .Clim(), Regression() and Trend() are used, but it may be more efficient to use apply() or a for loop there instead. Of course, the modification should not sacrifice the intention of the s2dv package, which is to make the data structure more flexible and compatible with startR.
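For illustration, an apply()-based MeanDims() could look like the sketch below. This is a hypothetical simplification, not the actual s2dv code, and it ignores edge cases such as averaging over all dimensions:

```r
# Hypothetical apply()-based alternative to s2dv::MeanDims():
# averaging over named dimensions is just apply() over the remaining margins.
mean_dims_apply <- function(data, dims, na.rm = FALSE) {
  margins <- which(!names(dim(data)) %in% dims)
  apply(data, margins, mean, na.rm = na.rm)
}

data <- array(rnorm(2 * 3 * 4), dim = c(member = 2, sdate = 3, time = 4))
mean_dims_apply(data, dims = 'time')   # result keeps dims (member, sdate)
```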
I still have some questions about the time-memory-flexibility trade-off.
How simple does a function have to be for apply() to be the better choice?
Some functions are so simple that apply() is always faster than Apply(), regardless of how many cores are used. For some functions, "Apply() + 1 core" is faster than apply(), but they become slower with more cores. For yet other functions, Apply() with multiple cores could be faster than apply() while being very slow with 1 core (actually, I haven't seen this case; with multiple cores it barely reaches the speed of apply()).
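One way to check which case a given function falls into is a small timing helper like the one below (compare_speed() is a hypothetical name, not part of any package):

```r
# Time the same computation three ways: apply(), Apply() with 1 core,
# and Apply() with several cores.
library(multiApply)

compare_speed <- function(data, target_dim, fun, ncores = 4) {
  margins <- which(names(dim(data)) != target_dim)
  c(apply       = system.time(apply(data, margins, fun))['elapsed'],
    Apply_1core = system.time(Apply(list(data), target_dims = target_dim,
                                    fun = fun, ncores = 1))['elapsed'],
    Apply_ncore = system.time(Apply(list(data), target_dims = target_dim,
                                    fun = fun, ncores = ncores))['elapsed'])
}

data <- array(rnorm(1e6), dim = c(sdate = 100, member = 100, time = 100))
compare_speed(data, 'time', mean)                                    # very simple
compare_speed(data, 'time', function(x) coef(lm(x ~ seq_along(x))))  # heavier
```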
The number of target dimensions matters
As the second quoted point above says, apply() has the advantage when the number of target dimensions is small. But some functions accept flexible input dimensions, so the margins cannot be fixed in advance.
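The sketch below shows the flexibility side of the trade-off: Apply() takes target dimensions by name, so one call works for any input layout, while apply() needs the numeric margins recomputed for each input (dimension names here are only examples):

```r
library(multiApply)

d1 <- array(rnorm(60),  dim = c(time = 5, sdate = 4, member = 3))
d2 <- array(rnorm(120), dim = c(member = 3, lat = 2, time = 5, sdate = 4))

# The same Apply() call handles both layouts:
r1 <- Apply(list(d1), target_dims = 'time', fun = sd)$output1
r2 <- Apply(list(d2), target_dims = 'time', fun = sd)$output1

# With apply(), the margins must be located manually for each input:
a1 <- apply(d1, which(names(dim(d1)) != 'time'), sd)
a2 <- apply(d2, which(names(dim(d2)) != 'time'), sd)
```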
Does the data size impact the comparison?
For the same function, is it possible for apply() to be faster with a small data size but slower with a big one? You may have mentioned this before, Nuria, but I missed the details.
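If helpful, this could be probed with a loop like the following (the sizes are arbitrary):

```r
# Repeat the same timing at several data sizes to see whether the
# apply()/Apply() ranking flips as the array grows.
library(multiApply)

for (n_time in c(10, 100, 1000)) {
  data <- array(rnorm(100 * 50 * n_time),
                dim = c(sdate = 100, member = 50, time = n_time))
  t1 <- system.time(apply(data, c(1, 2), mean))['elapsed']
  t2 <- system.time(Apply(list(data), target_dims = 'time',
                          fun = mean, ncores = 4))['elapsed']
  cat(sprintf("time = %4d: apply() %.2fs, Apply() 4 cores %.2fs\n",
              n_time, t1, t2))
}
```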
What is the best way to detect memory usage?
As I mentioned above, Rprof() gives me inconsistent results, especially when multiple cores are used.
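One possible approach (an assumption on my side, not a verified answer) is Rprof() with memory profiling turned on, combined with the gc() counters. Note that Rprof() only profiles the master R process, so allocations made in the parallel workers that Apply() spawns are invisible to it, which might explain part of the inconsistency:

```r
data <- array(rnorm(1e6), dim = c(sdate = 100, member = 100, time = 100))

gc(reset = TRUE)                        # reset the 'max used' counters
Rprof("profile.out", memory.profiling = TRUE)
res <- apply(data, c(1, 2), mean)       # the call under test
Rprof(NULL)
summaryRprof("profile.out", memory = "both")
gc()                                    # 'max used' shows the peak since the reset
```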
It would be easier to solve this issue in a systematic way, and I need to clarify these questions first. Any suggestions will be appreciated, thanks!