fix: Fix race condition in CUDA Simulator #690
base: main
Conversation
The prior fix scanned each module's entire globals dict under lock on every run, and all modules shared a lock. This update only scans the globals dict on first entry for a module. Additionally, each module has its own lock, so a thread holding the lock in one module doesn't affect the launch of a thread for a function in another module.
Greptile Summary
This PR fixes a critical race condition in the CUDA simulator's `swapped_cuda_module` context manager by adding per-module locks and reference counting.
The fix correctly handles the race scenario described in the PR, where Thread A and Thread B execute device functions simultaneously. The reference counting ensures atomicity of the module swapping operations while allowing concurrent execution within the same module.
Confidence Score: 5/5
Important Files Changed
numba_cuda/numba/cuda/simulator/kernelapi.py
Pull request overview
This PR fixes a race condition in the CUDA simulator's swapped_cuda_module context manager that occurs when multiple simulated threads simultaneously call device functions from the same Python module. The fix implements per-module locks and reference counting to ensure thread-safe swapping and restoration of module globals.
Key Changes (a rough sketch of the added state follows this list):
- Added a per-module locking mechanism using `defaultdict(threading.Lock)` to synchronize access to module globals
- Implemented reference counting to track the number of active threads using the fake CUDA module in each Python module
- Modified the swap logic so only the first entering thread performs the swap, and only the last exiting thread performs the restoration
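As an illustration of the state these changes introduce, a minimal sketch with hypothetical names (not necessarily the identifiers used in the PR):

```python
import threading
from collections import defaultdict

# One lock and one reference count per Python module, keyed by the id() of the
# device function's __globals__ dict (names are hypothetical, for illustration).
_module_locks = defaultdict(threading.Lock)   # gid -> lock serializing swap/restore
_swap_refcount = defaultdict(int)             # gid -> threads currently inside the context
```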
Co-authored-by: Copilot <[email protected]>
Additional Comments (1)
- numba_cuda/numba/cuda/simulator/kernelapi.py, lines 493-504 (link)
  logic: race condition: `defaultdict` access is not thread-safe when the key doesn't exist. When multiple threads simultaneously call device functions from the same module for the first time, they race at line 504. Threads can create different lock objects for the same `gid`, defeating the per-module locking.
1 file reviewed, 1 comment
The previous commit introduced a global lock on creating the per-module lock. This prevented concurrent creation of locks, which could leave one thread holding a different lock from the others and allow `fn_globs` or `_swap_refcount` to be modified in a race with other threads. Implementing this exposed yet another race: a thread could delete the lock from `_globals_locks` while another thread was already waiting at the entrance to the first `with lock:` statement. There is no need to delete a module's lock during runtime, so this commit simply removes the `del _global_locks[gid]` statement.
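A sketch of the guarded creation this commit describes, with hypothetical names; the key points are that a single creation lock makes all threads agree on one lock object per `gid`, and that entries are never deleted:

```python
import threading

_creation_lock = threading.Lock()  # global lock guarding creation of per-module locks
_module_locks = {}                 # gid -> threading.Lock; entries are never deleted

def _get_module_lock(gid):
    # Take the creation lock so two threads can never end up holding different
    # lock objects for the same gid; the entry stays for the life of the process,
    # avoiding the deletion race described above.
    with _creation_lock:
        return _module_locks.setdefault(gid, threading.Lock())
```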
Description
There is a race condition in the CUDA simulator, specifically in the `swapped_cuda_module` context manager.

I use the simulator for quick-running CI to avoid using up precious free GPU minutes. Occasionally, I get this error:
It is raised from a different thread each time. The error arose more commonly after I began allocating arrays in a small helper function in its own module. The error is similar to the one raised in numba/numba#1844.
Each thread in the simulator is a `threading.Thread` object, so they share memory. Every time a device function is called, it is wrapped in this context manager:

numba-cuda/numba_cuda/numba/cuda/simulator/kernelapi.py, lines 494 to 509 in aff41e9
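The snippet isn't reproduced here; as described, the pre-fix context manager is roughly of this shape (a sketch based on the description, not the exact source at that revision):

```python
from contextlib import contextmanager

@contextmanager
def swapped_cuda_module(fn, fake_cuda_module):
    from numba import cuda

    fn_globs = fn.__globals__
    # Find every global in the function's module that refers to the cuda module
    # and build a replacement map pointing at the fake CUDA module instead.
    orig = {k: v for k, v in fn_globs.items() if v is cuda}
    repl = {k: fake_cuda_module for k in orig}
    fn_globs.update(repl)       # swap in the fake module
    try:
        yield
    finally:
        fn_globs.update(orig)   # restore the real references on exit
```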
Race:
Thread A and Thread B are executing device functions in the same Python module. They don't need to be the same function. They must be in a separate file from the kernel definition, as the kernel module replaces references on entry, runs all threads, and restores only after all threads have exited.

1. Thread A enters the context manager for the helper module, swaps its cuda references for the fake CUDA module, and yields.
2. Thread B enters the context manager for the same module and computes `orig = {}` and `repl = {}`, as no references to cuda exist in its `__globals__` dict (they were already swapped by Thread A). Thread B yields.
3. Thread A exits the context manager and restores the original references.
4. Thread B calls `cuda.local.array` and sees the reference replaced back to `numba.cuda`. `cuda.local` is not imported as part of numba.cuda when `NUMBA_ENABLE_CUDASIM==1`, so the error is thrown.

MWE
The Gist below contains a script that reliably causes the error on my machine. It takes ~200s to hit the race on my machine, typically, so I have not added it to the test suite. It does seem to fail faster on xdist, but it has a very long runtime when it doesn't fail.
Reproducer
Place all three files in the same directory, and run `cudasim_race_mwe`.
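The Gist itself isn't included here; a hypothetical layout along the lines described, with file and function names invented for illustration, might look like:

```python
# helper_mod.py -- device function in its own module, where the swap races
from numba import cuda, float32

@cuda.jit(device=True)
def scratch_sum(x):
    tmp = cuda.local.array(4, dtype=float32)  # raises AttributeError when the race hits
    for i in range(4):
        tmp[i] = x
    return tmp[0] + tmp[1] + tmp[2] + tmp[3]


# kernel_mod.py -- kernel defined in a separate file from the helper
from numba import cuda
from helper_mod import scratch_sum

@cuda.jit
def kernel(out):
    i = cuda.grid(1)
    if i < out.size:
        out[i] = scratch_sum(float(i))


# cudasim_race_mwe.py -- driver; run this file
import os
os.environ["NUMBA_ENABLE_CUDASIM"] = "1"  # must be set before numba is imported

import numpy as np
from kernel_mod import kernel

if __name__ == "__main__":
    out = np.zeros(256, dtype=np.float32)
    # The race is intermittent; launching repeatedly with many simulated threads
    # eventually trips it.
    for _ in range(10_000):
        kernel[1, 256](out)
```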
Fix
This PR implements a per-module lock and reference count, so that the first thread to enter the context for a module replaces cuda -> fake_cuda, and the last thread to exit restores fake_cuda -> cuda. There may be a performance hit for simulated kernels with many device function calls across many modules, but it should be small: every thread other than the first entrant and the last to exit performs only a single integer comparison and an integer increment/decrement under the lock. The short "benchmark" run in the MWE did not change duration between the patched and unpatched versions on my machine.
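A self-contained sketch of the scheme described above, assuming hypothetical names; the PR's actual identifiers and structure may differ:

```python
import threading
from contextlib import contextmanager

_creation_lock = threading.Lock()  # guards creation of per-module locks
_module_locks = {}                 # gid -> threading.Lock; never deleted at runtime
_swap_refcount = {}                # gid -> number of threads inside the context
_saved_globals = {}                # gid -> original cuda references to restore

@contextmanager
def swapped_cuda_module(fn, fake_cuda_module):
    from numba import cuda

    fn_globs = fn.__globals__
    gid = id(fn_globs)

    # Create (or fetch) this module's lock exactly once, under the global lock.
    with _creation_lock:
        lock = _module_locks.setdefault(gid, threading.Lock())

    with lock:
        if _swap_refcount.get(gid, 0) == 0:
            # First thread in for this module: scan the globals once and swap
            # every reference to the real cuda module for the fake one.
            orig = {k: v for k, v in fn_globs.items() if v is cuda}
            fn_globs.update({k: fake_cuda_module for k in orig})
            _saved_globals[gid] = orig
        _swap_refcount[gid] = _swap_refcount.get(gid, 0) + 1
    try:
        yield
    finally:
        with lock:
            _swap_refcount[gid] -= 1
            if _swap_refcount[gid] == 0:
                # Last thread out: restore the original references.
                fn_globs.update(_saved_globals.pop(gid))
```

In this sketch, only the first entrant scans and swaps and only the last exiting thread restores; every other thread just adjusts the counter under the per-module lock, which matches the small overhead described above.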