
Reduce number of gradient calculations in LD-MMA #484

Open

cpixton opened this issue Dec 12, 2022 · 0 comments

cpixton commented Dec 12, 2022

The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."

However, it appears that the implementation of LD-MMA calculates a gradient in the inner iterations as well as the outer iterations. I request that the implementation be updated to avoid these unnecessary gradient calculations.

I believe this is a two-line change: This line could be changed to something like `fcur = f(n, xcur, NULL, f_data);`, and then after line 299 in the same file one could add the code `if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }`.

This would add one extra objective evaluation per outer iteration (the accepted point is evaluated twice), but since gradient calculations tend to dominate the run time of objective function calls, there should be a net savings whenever more than one inner iteration is used.

I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!
