The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."
However, the LD-MMA implementation appears to calculate a gradient in the inner iterations as well as the outer ones. I'd like to request that it be updated to skip these unneeded inner-iteration gradient calculations.
I believe this is a two-line change: this line could be changed to something like `fcur = f(n, xcur, NULL, f_data);`, and then after line 299 in the same file one could add `if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }`.
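To illustrate the convention the change relies on, here is a small, untested sketch (I have not been able to compile it myself, per the caveat at the end). The callback signature matches `nlopt_func`, which is expected to fill the gradient array only when it is non-NULL; the variable names (`xcur`, `dfdx_cur`, `inner_done`) mirror the ones above and are illustrative, not the actual mma.c internals:

```c
#include <stdio.h>

/* NLopt-style objective callback: fills grad only when grad != NULL. */
static double f(unsigned n, const double *x, double *grad, void *f_data) {
    (void)f_data;
    double sum = 0.0;
    for (unsigned i = 0; i < n; ++i) {
        sum += x[i] * x[i];
        if (grad) grad[i] = 2.0 * x[i]; /* gradient only on request */
    }
    return sum;
}

int main(void) {
    enum { N = 2 };
    double xcur[N] = {1.0, 2.0}, dfdx_cur[N], fcur;
    int inner_done = 0;

    /* stand-in for the inner CCSA loop: function values only */
    for (int k = 0; k < 3; ++k) {
        fcur = f(N, xcur, NULL, NULL); /* grad == NULL: no gradient work */
        inner_done = (k == 2);         /* pretend convergence on pass 3 */
    }

    /* proposed extra call: recover the gradient once, for the next
       outer iteration */
    if (inner_done)
        fcur = f(N, xcur, dfdx_cur, NULL);

    printf("f = %g, df/dx0 = %g\n", fcur, dfdx_cur[0]);
    return 0;
}
```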
This would add one duplicate objective call per outer iteration, but since gradient calculations tend to dominate the run time of objective function calls, there should be a net saving whenever more than one inner iteration is used.
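To make the trade-off concrete (rough accounting, not measured): suppose a gradient costs roughly g plain function evaluations and an outer iteration runs m inner iterations. The current code then costs about m·(1 + g) function-evaluation equivalents per outer iteration, while the proposed change costs m + (1 + g). The difference, g·(m − 1) − 1, is positive whenever m ≥ 2 and g > 1, which matches the claim above.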
I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!