Hi, guys!
I want to use Taichi to run my preprocessing on the GPU inside a PyTorch data pipeline. Since `torch` cannot use CUDA tensors in multi-process data loading with the `fork` start method, I am wondering whether Taichi can drive the GPU from the Dataset's `__getitem__()` in this fork mode. Your answer and guidance would be appreciated!
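For concreteness, here is a minimal sketch of the setup I mean (`PreprocessDataset`, `square`, and the lazy-init helper are made-up names for illustration; whether `ti.init(arch=ti.cuda)` actually succeeds inside a forked worker, which generally requires that the parent process never touches CUDA before the fork, is exactly what I'm asking about):

```python
import taichi as ti
import torch
from torch.utils.data import Dataset, DataLoader


@ti.kernel
def square(src: ti.types.ndarray(), dst: ti.types.ndarray()):
    # Element-wise square as a stand-in for the real preprocessing.
    for i in range(src.shape[0]):
        dst[i] = src[i] * src[i]


class PreprocessDataset(Dataset):
    """Hypothetical dataset: each item is preprocessed by a Taichi kernel."""

    def __init__(self, n_items: int, size: int = 1024):
        self.n_items = n_items
        self.size = size
        self._ti_ready = False  # per-process flag; each forked worker keeps its own copy

    def _lazy_ti_init(self):
        # Initialize Taichi inside the (possibly forked) worker process,
        # so the CUDA context is created after the fork, not inherited from the parent.
        if not self._ti_ready:
            ti.init(arch=ti.cuda)
            self._ti_ready = True

    def __len__(self):
        return self.n_items

    def __getitem__(self, idx):
        self._lazy_ti_init()
        x = torch.rand(self.size)   # CPU tensor; only the Taichi kernel touches the GPU
        out = torch.empty_like(x)
        square(x, out)              # Taichi copies to device, runs the kernel, copies back
        return out


if __name__ == "__main__":
    loader = DataLoader(PreprocessDataset(8), batch_size=2, num_workers=2)
    for batch in loader:
        print(batch.shape)          # torch.Size([2, 1024])
```

The dataset only ever returns CPU tensors to the DataLoader, so the fork-mode restriction on passing CUDA tensors between processes is sidestepped; the open question is only whether each worker can safely bring up its own CUDA context through `ti.init`.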