Unable to get GPU solver working #381

Open
taDachs opened this issue Dec 2, 2024 · 3 comments

taDachs commented Dec 2, 2024

Hey, I tried following the quickstart for GPU solvers, but was unable to get the solver to run.

My code looks like this:

using JuMP
using MadNLPGPU
using MadNLP

# Constants
T_f = 10.0
S = π

N = 100
h = T_f / N

G = 9.81  # m/s^2
L = 1.0   # m

model = Model(()->MadNLP.Optimizer(linear_solver=MadNLPGPU.CUDSSSolver))

@variable(model, θ[1:N])
@variable(model, θ_dot[1:N])
@variable(model, u[1:N])

# Objective: Minimize control effort
@objective(model, Min, h * sum(u .^ 2))

# Euler-discretized pendulum dynamics (element-wise, hence .==)
@constraint(model, θ[2:end] .== θ[1:end-1] .+ h .* θ_dot[1:end-1])
@constraint(model, θ_dot[2:end] .== θ_dot[1:end-1] .+ h .* (-G .* sin.(θ[1:end-1]) ./ L .+ u[1:end-1]))

# Boundary conditions
@constraint(model, θ[1] == 0)
@constraint(model, θ_dot[1] == 0)
@constraint(model, θ[N] == S)
@constraint(model, θ_dot[N] == 0)

optimize!(model)

I get an error message which I don't really understand:

ERROR: LoadError: MethodError: no method matching MadNLPGPU.CUDSSSolver(::SparseArrays.SparseMatrixCSC{Float64, Int32}; opt::MadNLPGPU.CudssSolverOptions)
The type `MadNLPGPU.CUDSSSolver` exists, but no method is defined for this combination of argument types when trying to construct it.

Closest candidates are:
  MadNLPGPU.CUDSSSolver(::Union{Nothing, CUDSS.CudssSolver}, ::CUDA.CUSPARSE.CuSparseMatrixCSC{T}, ::CUDA.CuArray{T, 1}, ::CUDA.CuArray{T, 1}, ::MadNLPGPU.CudssSolverOptions, ::MadNLP.MadNLPLogger) where T got unsupported keyword argument "opt"
   @ MadNLPGPU ~/.julia/packages/MadNLPGPU/F7lAy/src/LinearSolvers/cudss.jl:13
  MadNLPGPU.CUDSSSolver(::CUDA.CUSPARSE.CuSparseMatrixCSC{T}; opt, logger) where T
   @ MadNLPGPU ~/.julia/packages/MadNLPGPU/F7lAy/src/LinearSolvers/cudss.jl:22

Stacktrace:
 [1] create_kkt_system(::Type{MadNLP.SparseKKTSystem}, cb::MadNLP.SparseCallback{Float64, Vector{Float64}, Vector{Int64}, MadNLPMOI.MOIModel{Float64}, MadNLP.MakeParameter{Vector{Float64}, Vector{Int64}}, MadNLP.EnforceEquality}, ind_cons::@NamedTuple{ind_eq::Vector{Int64}, ind_ineq::Vector{Int64}, ind_fixed::Vector{Int64}, ind_lb::Vector{Int64}, ind_ub::Vector{Int64}, ind_llb::Vector{Int64}, ind_uub::Vector{Int64}}, linear_solver::Type{MadNLPGPU.CUDSSSolver}; opt_linear_solver::MadNLPGPU.CudssSolverOptions, hessian_approximation::Type)
   @ MadNLP ~/.julia/packages/MadNLP/66k4O/src/KKT/Sparse/augmented.jl:128
 [2] MadNLPSolver(nlp::MadNLPMOI.MOIModel{Float64}; kwargs::@Kwargs{linear_solver::UnionAll})
   @ MadNLP ~/.julia/packages/MadNLP/66k4O/src/IPM/IPM.jl:155
 [3] optimize!(model::MadNLPMOI.Optimizer)
   @ MadNLPMOI ~/.julia/packages/MadNLP/66k4O/ext/MadNLPMOI/MadNLPMOI.jl:946
 [4] optimize!
   @ ~/.julia/packages/MathOptInterface/gLl4d/src/Bridges/bridge_optimizer.jl:367 [inlined]
 [5] optimize!
   @ ~/.julia/packages/MathOptInterface/gLl4d/src/MathOptInterface.jl:122 [inlined]
 [6] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MadNLPMOI.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
   @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/gLl4d/src/Utilities/cachingoptimizer.jl:321
 [7] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
   @ JuMP ~/.julia/packages/JuMP/i68GU/src/optimizer_interface.jl:595
 [8] optimize!(model::Model)
   @ JuMP ~/.julia/packages/JuMP/i68GU/src/optimizer_interface.jl:546
 [9] top-level scope
   @ /home/hcr/ws/for_issue.jl:36

Is there a problem with how I set up my optimization problem? I couldn't find any documentation on the GPU solvers.

Member

sshin23 commented Dec 3, 2024

Thanks for reporting this, @taDachs. This should be fixed. To use the GPU features, please try ExaModels:
https://exanauts.github.io/ExaModels.jl/stable/guide/

If you want to use the GPU features with JuMP, one option is the experimental JuMP interface, but it may be less stable and efficient:
https://exanauts.github.io/ExaModels.jl/stable/jump/
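
For reference, the pendulum problem above might be written directly against ExaModels' native API roughly as follows. This is a sketch based on the linked guide: the `ExaCore` backend keyword, the `madnlp` entry point, and the one-element constraint generators for the boundary conditions are my reading of the ExaModels and MadNLP docs, not verified code.

```julia
using ExaModels, MadNLP, MadNLPGPU, CUDA

T_f = 10.0; N = 100; h = T_f / N
G = 9.81; L = 1.0; S = pi

# Build the model on the GPU (CUDABackend comes from CUDA.jl).
c = ExaCore(; backend = CUDABackend())

θ = variable(c, N)
θ_dot = variable(c, N)
u = variable(c, N)

# Minimize control effort.
objective(c, h * u[i]^2 for i in 1:N)

# Euler-discretized pendulum dynamics, written as g(x) == 0.
constraint(c, θ[i+1] - θ[i] - h * θ_dot[i] for i in 1:N-1)
constraint(c, θ_dot[i+1] - θ_dot[i] - h * (-G * sin(θ[i]) / L + u[i]) for i in 1:N-1)

# Boundary conditions, expressed as one-element constraint generators.
constraint(c, θ[i] for i in 1:1)          # θ[1] == 0
constraint(c, θ_dot[i] for i in 1:1)      # θ_dot[1] == 0
constraint(c, θ[i] - S for i in N:N)      # θ[N] == S
constraint(c, θ_dot[i] for i in N:N)      # θ_dot[N] == 0

m = ExaModel(c)
result = madnlp(m)
```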

KSepetanc commented Jan 10, 2025

@sshin23 can you elaborate on the scope of the fix? Will only the documentation be fixed, to state that ExaModels is needed for the GPU features, or will there be native support for JuMP so that JuMP computes the derivatives? JuMP can also use ASL for derivatives, as well as experimental symbolic derivatives. To my understanding, cuDSS is a linear solver, and the derivatives (Jacobian and Hessian) could be provided by tools other than ExaModels.

sshin23 commented Jan 10, 2025

You may wrap the NLP model so that the AD takes place in host memory, and the results are then sent to device memory. Please check https://github.com/exanauts/ExaModels.jl/blob/main/src/utils.jl#L5-L120
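
If I read the suggestion correctly, the wrapping could look roughly like this. The `WrapperNLPModel` name and its `(VT, nlp)` signature are my reading of the linked utils.jl, and `MathOptNLPModel` is from NLPModelsJuMP; treat all of these as assumptions rather than a confirmed recipe.

```julia
using JuMP, NLPModelsJuMP, ExaModels, MadNLP, MadNLPGPU, CUDA

# `model` is the JuMP model from the original post.
nlp = MathOptNLPModel(model)   # derivatives are evaluated in host memory

# Wrap so that evaluations run on the host and results are copied to CuArrays.
gpu_nlp = ExaModels.WrapperNLPModel(CuArray{Float64, 1}, nlp)

result = madnlp(gpu_nlp; linear_solver = MadNLPGPU.CUDSSSolver)
```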
