MeanPool doesn't support complex CUDA arrays #981
Comments
Even when one attempts a work-around, there is an issue with Zygote:

f(X, ps, st) = mean(abs2, mp(real(X), ps, st)[1] + im*mp(imag(X), ps, st)[1])

Differentiating this fails with the same scalar-indexing error ("If you want to allow scalar iteration, use allowscalar or @allowscalar ...") and a similar stacktrace.
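For reference, the work-around is mathematically exact: mean pooling is linear, so pooling the real and imaginary parts separately and recombining reproduces pooling the complex array directly. A self-contained check of that identity (in Python/NumPy, since the Julia snippet needs a GPU; `mean_pool2d` is a naive hypothetical helper, not NNlib's implementation):

```python
import numpy as np

def mean_pool2d(x, k):
    """Naive 2-D mean pooling with stride 1 over the first two axes.
    x: (H, W, C, N) array; k: (kh, kw) window."""
    kh, kw = k
    H, W = x.shape[0], x.shape[1]
    out = np.empty((H - kh + 1, W - kw + 1) + x.shape[2:], dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Average the kh-by-kw window across the spatial axes only.
            out[i, j] = x[i:i + kh, j:j + kw].mean(axis=(0, 1))
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 3, 2)) + 1j * rng.standard_normal((8, 8, 3, 2))

direct = mean_pool2d(X, (5, 5))
split = mean_pool2d(X.real, (5, 5)) + 1j * mean_pool2d(X.imag, (5, 5))
assert np.allclose(direct, split)  # linearity makes the split exact
```

So once the pullback promotion issue is fixed, the split version should agree with native complex pooling to floating-point precision.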
Not really a Zygote issue. Something is promoting the inputs to the pullbacks to a complex array.
Should I raise the issue elsewhere? Thank you for the quick response.
This should be handled in
As several layers support complex types (e.g., Dense, Conv, Bilinear), I expected the same from MeanPool and AdaptiveMeanPool. Example below for MeanPool (copied and pasted from JupyterLab). I've experienced the same issue on Linux.
using CUDA
using LinearAlgebra
using Lux
using LuxCUDA
using Pkg
using Random
versioninfo(), Pkg.status()
*********************************** output *******************************
Julia Version 1.11.0
Commit 501a4f25c2 (2024-10-07 11:40 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: 8 × Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, icelake-client)
Threads: 1 default, 0 interactive, 1 GC (on 8 virtual cores)
Status `C:\Users\salbe\OneDrive\Documents\Research\JuliaBugs\Project.toml`
[052768ef] CUDA v5.5.2
[7073ff75] IJulia v1.25.0
[b2108857] Lux v1.1.0
[d0bbae9a] LuxCUDA v0.3.3
(nothing, nothing)
CUDA.allowscalar(false)
X = randn(ComplexF64, 64, 64, 3, 10);
mp = MeanPool((5,5),stride=(1,1))
ps, st = Lux.setup(Xoshiro(), mp)
Y = mp(X, ps, st)[1]
size(Y), typeof(Y)
*********************************** output *******************************
((60, 60, 3, 10), Array{ComplexF64, 4})
X_ = CuArray{ComplexF64}(X)
ps_ = Lux.gpu_device()(ps)
st_ = Lux.gpu_device()(st)
************************** output *******************************
NamedTuple()
Y_ = mp(X_, ps_, st_)
*********************** error message ********************************
Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations do not execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.
If you want to allow scalar iteration, use
allowscalar
or @allowscalar
to enable scalar iteration globally or for the operations in question.
Stacktrace:
[1] error(s::String)
@ Base .\error.jl:35
[2] errorscalar(op::String)
@ GPUArraysCore C:\Users\salbe\.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:155
[3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
@ GPUArraysCore C:\Users\salbe\.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:128
[4] assertscalar(op::String)
@ GPUArraysCore C:\Users\salbe\.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:116
[5] getindex
@ C:\Users\salbe\.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:50 [inlined]
[6] scalar_getindex
@ C:\Users\salbe\.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:36 [inlined]
[7] _getindex
@ C:\Users\salbe\.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:19 [inlined]
[8] getindex
@ C:\Users\salbe\.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:17 [inlined]
[9] meanpool_direct!(y::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, x::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}, ::Val{(5, 5, 1)}, ::Val{3}, ::Val{(0, 0, 0, 0, 0, 0)}, ::Val{(1, 1, 1)}, ::Val{(1, 1, 1)}; alpha::Int64, beta::Int64, kwargs::@kwargs{})
@ NNlib C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:96
[10] meanpool_direct!(y::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, x::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}; alpha::Int64, beta::Int64, kwargs::@kwargs{})
@ NNlib C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:7
[11] meanpool_direct!
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:4 [inlined]
[12] #meanpool!#410
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:41 [inlined]
[13] meanpool!
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:38 [inlined]
[14] #meanpool!#425
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:73 [inlined]
[15] meanpool!
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:70 [inlined]
[16] meanpool(x::CuArray{ComplexF64, 4, CUDA.DeviceMemory}, pdims::PoolDims{2, 2, 2, 4, 2}; kwargs::@kwargs{})
@ NNlib C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:119
[17] meanpool
@ C:\Users\salbe\.julia\packages\NNlib\CkJqS\src\pooling.jl:114 [inlined]
[18] MeanPoolOp
@ C:\Users\salbe\.julia\packages\Lux\VkHFW\src\layers\pooling.jl:39 [inlined]
[19] PoolingLayer
@ C:\Users\salbe\.julia\packages\Lux\VkHFW\src\layers\pooling.jl:81 [inlined]
[20] apply
@ C:\Users\salbe\.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:155 [inlined]
[21] (::MeanPool{Lux.PoolingLayer{Lux.GenericPoolMode{Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}}, Lux.MeanPoolOp}})(x::CuArray{ComplexF64, 4, CUDA.DeviceMemory}, ps::@NamedTuple{}, st::@NamedTuple{})
@ LuxCore C:\Users\salbe\.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:266
[22] top-level scope
@ In[41]:1
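The frames above show the complex CuArray falling through to NNlib's generic `meanpool_direct!` fallback, which reads elements one at a time via `getindex` (hence the scalar-indexing error) instead of hitting a GPU kernel. One way a library could route complex inputs through a real-only pooling path is a split-and-recombine adapter; a minimal sketch of that pattern in Python/NumPy (the `pool_complex_via_real` helper is hypothetical, not NNlib code):

```python
import numpy as np

def pool_complex_via_real(pool, x):
    # Hypothetical adapter: apply a pooling routine that only supports
    # real arrays to a complex array by pooling the real and imaginary
    # parts separately. Valid because mean pooling is linear.
    if np.iscomplexobj(x):
        return pool(x.real) + 1j * pool(x.imag)
    return pool(x)

# Usage: any real-only reduction works as `pool`, e.g. a plain mean.
z = np.array([1 + 2j, 3 + 4j])
print(pool_complex_via_real(np.mean, z))  # (2+3j)
```

Real inputs pass through unchanged, so such an adapter would not perturb the existing fast paths for Float32/Float64 arrays.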