Resolve R2 Allocation test #164
base: master
Conversation
src/utils.jl
Outdated
More specifically, construct a `GenericExecutionStats` on the `NLPModel` of `reg_nlp` and add three `solver_specific` entries, namely `:smooth_obj`, `:nonsmooth_obj` and `:xi`.
This is useful for reducing the number of allocations when calling `solve!(..., reg_nlp, stats)` and should be used by default.
Warning: this should *not* be used when adding other `solver_specific` entries that do not have the current scalar type.
For instance, adding the history of the objective value as a `solver_specific` entry (which has type `Vector{T}`) will cause an error, and `GenericExecutionStats(reg_nlp.model)` should be used instead.
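The warning above can be illustrated with plain Base Julia (a minimal sketch, independent of SolverCore; the entry names are taken from the docstring): a `Dict{Symbol, T}` with a concrete scalar `T` cannot store a `Vector{T}` entry, while the untyped default can.

```julia
# Typed container, as built by the proposed constructor (sketch only):
specific = Dict{Symbol, Float64}(:smooth_obj => Inf, :nonsmooth_obj => Inf, :xi => Inf)

# Storing a history vector fails, because a Vector{Float64} cannot be
# converted to the value type Float64:
ok = try
    specific[:obj_hist] = Float64[]
    true
catch
    false
end
# ok == false

# The untyped default (Dict{Symbol, Any}) accepts both kinds of entries:
untyped = Dict{Symbol, Any}(:smooth_obj => Inf)
untyped[:obj_hist] = Float64[]   # fine
```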
I think you can remove this sentence because history vectors should eventually be collected in the callback and not stored in `stats`.
```diff
@@ -20,3 +24,20 @@ ShiftedProximalOperators.iprox!(

LinearAlgebra.diag(op::AbstractDiagonalQuasiNewtonOperator) = copy(op.d)
LinearAlgebra.diag(op::SpectralGradient{T}) where {T} = zeros(T, op.nrow) .* op.d[1]

"""
    GenericExecutionStats(reg_nlp :: AbstractRegularizedNLPModel{T, V})
```
We could give it a better name; that's why I called `GenericExecutionStats` "generic" in the first place. Maybe `RegularizedExecutionStats`?!
```julia
stats = GenericExecutionStats(reg_nlp.model, solver_specific = Dict{Symbol, T}())
set_solver_specific!(stats, :smooth_obj, T(Inf))
set_solver_specific!(stats, :nonsmooth_obj, T(Inf))
set_solver_specific!(stats, :xi, T(Inf))
```
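A Base-Julia analogue (not the SolverCore API) of why preinitializing these entries helps: once a key exists in a typed `Dict`, later writes overwrite the existing slot in place rather than inserting a new key, which is the allocation the constructor is meant to avoid.

```julia
# Preinitialize the three entries, mirroring the constructor above:
specific = Dict{Symbol, Float64}(:smooth_obj => Inf, :nonsmooth_obj => Inf, :xi => Inf)

# A later update (as solve! would do on each iteration) overwrites in place:
specific[:xi] = 0.5
length(specific)   # still 3; no new key was inserted
```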
Wouldn't `dual_feas`?
I don't see how.
@dpo @MohamedLaghdafHABIBOULLAH
I think this solves #161.
I added a function that constructs a `GenericExecutionStats` on a `RegularizedNLPModel`. The `solver_specific` entries are added during construction, which removes the allocation in `solve!` caused by adding an uninitialized `solver_specific` entry in `stats`. Also, for type stability, I had to specify that `solver_specific` entries have the same type as the ones added in this new constructor; otherwise there are allocations as well. This might cause issues if used without care.