SEAL memory leak #25
I also heard that SEAL uses some block memory management under the hood, so that might also be important.
I have modified my script to

```julia
function test()
    seal_params = create_seal_params()
    nciphers = 1_000_000
    ciphers = [encrypt_into_cipher(randn(), 2^50, seal_params.encoder, seal_params.encryptor) for i in 1:nciphers]
    while true
        i = rand(1:nciphers)
        j = rand(1:nciphers)
        k = rand(1:nciphers)
        old = ciphers[k]
        ciphers[k] = cipher_addition(ciphers[i], ciphers[k], seal_params.evaluator)
        destroy!(old)
    end
end
```

to no avail. There have been other issues in the wild like this or this; probably this is just the way SEAL is and nothing can be done.
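For context on what `destroy!` can and cannot do here: wrappers around ccall-allocated objects typically combine a finalizer with an explicit destroy!-style function, along the lines of the generic sketch below (an illustration of the pattern only, not SEAL.jl's actual implementation):

```julia
# Generic sketch of the finalizer + explicit-destroy pattern for a C-allocated
# object (not SEAL.jl's actual implementation).
mutable struct NativeBuffer
    ptr::Ptr{Cvoid}

    function NativeBuffer(nbytes::Integer)
        obj = new(Libc.malloc(nbytes))
        # The finalizer runs at some point after the object becomes unreachable,
        # possibly much later than expected -- which is why an explicit destroy! helps.
        finalizer(destroy!, obj)
        return obj
    end
end

function destroy!(buf::NativeBuffer)
    if buf.ptr != C_NULL
        Libc.free(buf.ptr)
        buf.ptr = C_NULL   # make destroy! idempotent so the later finalizer call is a no-op
    end
    return nothing
end
```

Even with this pattern, memory handed back to the C allocator is not necessarily returned to the operating system, so the process RSS can stay high after freeing.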
You could try to verify your hypothesis by regularly checking the GC usage with something like

```julia
using Printf: @printf

function meminfo_julia()
    # @printf "GC total:  %9.3f MiB\n" Base.gc_total_bytes(Base.gc_num())/2^20
    # Total bytes (above) usually underreports, thus I suggest using live bytes (below)
    @printf "GC live:  %9.3f MiB\n" Base.gc_live_bytes()/2^20
    @printf "JIT:      %9.3f MiB\n" Base.jit_total_bytes()/2^20
    @printf "Max. RSS: %9.3f MiB\n" Sys.maxrss()/2^20
end
```

(originally posted in https://discourse.julialang.org/t/how-to-track-total-memory-usage-of-julia-process-over-time/91167/6?u=sloede). If you see that GC live bytes are spiraling out of control even when using …
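For example, a sketch of how the check could be wired into the test loop from the previous comment (reusing the hypothetical `create_seal_params` / `encrypt_into_cipher` / `cipher_addition` / `destroy!` helpers from that snippet):

```julia
function test_with_memcheck()
    seal_params = create_seal_params()
    nciphers = 1_000_000
    ciphers = [encrypt_into_cipher(randn(), 2^50, seal_params.encoder, seal_params.encryptor) for i in 1:nciphers]
    iter = 0
    while true
        i = rand(1:nciphers)
        k = rand(1:nciphers)
        old = ciphers[k]
        ciphers[k] = cipher_addition(ciphers[i], ciphers[k], seal_params.evaluator)
        destroy!(old)
        iter += 1
        if iter % 500_000 == 0
            GC.gc()          # force a full collection before measuring
            meminfo_julia()  # GC live vs. Max. RSS should diverge if the growth is outside the GC
        end
    end
end
```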
Yeah, probably this is not a Julia issue:

```
julia> meminfo_julia()
GC live:     43.181 MiB
JIT:          0.055 MiB
Max. RSS: 96301.328 MiB

julia> GC.gc()

julia> meminfo_julia()
GC live:     20.535 MiB
JIT:          0.059 MiB
Max. RSS: 96301.535 MiB
```

96 GB is just occupied for no reason.
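That gap between GC live bytes and Max. RSS is what one would expect if the memory is held by native code (e.g. SEAL's C++ allocator) rather than by Julia's GC. A minimal, purely illustrative sketch of the effect using `Libc.malloc` (nothing SEAL-specific):

```julia
using Printf: @printf

# Allocate ~1 GiB directly through the C runtime. This memory is invisible to
# Julia's GC accounting, but it does show up in the process's resident set size.
ptrs = [Libc.malloc(2^20) for _ in 1:1024]
for p in ptrs
    fill!(unsafe_wrap(Array, Ptr{UInt8}(p), 2^20), 0x01)  # touch the pages so they are really mapped
end

@printf "GC live:  %9.3f MiB\n" Base.gc_live_bytes()/2^20   # stays small
@printf "Max. RSS: %9.3f MiB\n" Sys.maxrss()/2^20           # has grown by roughly 1024 MiB

# Freeing native memory is the caller's responsibility; Julia's GC never does it.
foreach(Libc.free, ptrs)
```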
@sloede Thanks again for this package; it has been a learning journey for me with Julia, ccall, and BinaryBuilder. I have succeeded in bumping the version to 4.1.1 locally, but unfortunately I did not see the performance gains I was hoping for. As backstory: I'm trying to implement a large machine learning model (a CNN) using SEAL. Before this I was using pyseal and did everything in Python, but I have now realized that I need multi-threading to bring the performance to the next level. The reason I was pushing for 4.1.1 is that a multiplication performance improvement was promised in the 3.7.x versions. However, for some reason pyseal is consistently about 30% faster than SEAL.jl, even after I compile SEAL_jll manually. My benchmarks suggest that a single CKKS cipher-cipher multiplication is, if anything, faster in SEAL.jl, but when I do many of them repeatedly, pyseal seems to take over. Maybe this has to do with the following.
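A hedged sketch of how the single vs. repeated multiplication cases could be compared with BenchmarkTools.jl (`cipher_multiplication`, `c1`, `c2`, and `seal_params` are assumed placeholder names in the spirit of the snippets above, not actual SEAL.jl API):

```julia
using BenchmarkTools

# c1, c2 are two CKKS ciphertexts and cipher_multiplication(a, b, evaluator)
# returns their product -- placeholder names, not actual SEAL.jl API.

# Case 1: latency of a single cipher-cipher multiplication.
@btime cipher_multiplication($c1, $c2, $(seal_params.evaluator))

# Case 2: many multiplications back to back, which also exercises allocation and
# memory-pool behaviour rather than just the raw multiply kernel.
@btime begin
    for _ in 1:10_000
        cipher_multiplication($c1, $c2, $(seal_params.evaluator))
    end
end
```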
I attempted multithreading, i.e. performing many cipher-cipher multiplications in parallel using FLoops.jl. It worked fine, but then I noticed sudden crashes, which I attributed to unbounded memory growth. So I decided to test this with a single thread, and this is the MWE I came up with:
My Julia is
Using a fresh Project.toml environment, this script linearly consumes more and more memory and I assume it would eventually crash; GC.gc() does not help. I was trying to diagnose how this happens and found a good text about memory management in ccall, but I really have no idea. This is all new to me.
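For reference, a minimal sketch of the kind of FLoops.jl parallel loop described above (`encrypt_into_cipher` and `cipher_multiplication` are assumed placeholder names mirroring the earlier snippets, not actual SEAL.jl API; it also assumes the evaluator may be shared across threads):

```julia
using FLoops

function parallel_products(values, seal_params)
    # Encrypt the inputs first (hypothetical helper from the earlier snippets).
    ciphers = [encrypt_into_cipher(v, 2^50, seal_params.encoder, seal_params.encryptor)
               for v in values]
    results = Vector{Any}(undef, length(ciphers))
    # @floop distributes iterations over the available threads
    # (start Julia with e.g. `julia -t 8`); each iteration writes to a distinct slot.
    @floop for idx in eachindex(ciphers)
        results[idx] = cipher_multiplication(ciphers[idx], ciphers[idx],
                                             seal_params.evaluator)
    end
    return results
end
```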