Describe the feature request
Currently, the DLPack protocol can only be used in training builds; see OrtValue to torch.tensor for GPU? #10327.

I believe it would make sense to enable this in the main build and not only the training one. Many AI libraries already support DLPack, PyTorch and CuPy among them.
Having DLPack support in onnxruntime would allow "zero-cost" copies between these libraries. This is not only interesting during training. Often, multiple models are used, in which case the output of one model becomes the input of the next. When we want to do pre- or postprocessing between these models, we currently can't do so without moving the data to the CPU via `.numpy()`, which comes with a significant performance cost.
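For illustration, here is roughly what the zero-copy hand-off looks like today with the training package's OrtValue DLPack methods (a minimal sketch: `to_dlpack()` is the training-build API this issue asks to expose in the main build; the model path and tensor names are hypothetical):

```python
import numpy as np
import onnxruntime as ort
import torch
from torch.utils.dlpack import from_dlpack

# Hypothetical model; IO binding keeps inputs and outputs on the GPU.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

x = torch.rand(1, 3, 224, 224, device="cuda")
binding = sess.io_binding()
binding.bind_input(
    name="input",                      # hypothetical input name
    device_type="cuda",
    device_id=0,
    element_type=np.float32,
    shape=tuple(x.shape),
    buffer_ptr=x.data_ptr(),
)
binding.bind_output("output", "cuda")  # let ORT allocate the output on device
sess.run_with_iobinding(binding)

ort_out = binding.get_outputs()[0]     # OrtValue, still resident on the GPU

# Zero-copy hand-off to torch via DLPack. to_dlpack() is currently only
# exposed in the training build -- making it available in the main build
# is exactly what this issue requests.
torch_out = from_dlpack(ort_out.to_dlpack())
```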
Describe scenario use case
We want to use CuPy to process our model outputs between different inference runs. CuPy supports the DLPack protocol, which would allow us to do so. One option would be to build with training support, but that makes our package quite a bit bigger, which I'd like to avoid.
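A sketch of the CuPy pipeline we are after, assuming the same `to_dlpack()` were available in the main build (model and tensor names are made up; `cp.fromDlpack` is CuPy's capsule-based entry point):

```python
import cupy as cp
import numpy as np
import onnxruntime as ort

# ort_out: the GPU-resident OrtValue produced by the previous sketch.
gpu_array = cp.fromDlpack(ort_out.to_dlpack())  # no host round-trip

# Arbitrary in-between processing; the data never leaves the device.
processed = cp.clip(gpu_array, 0.0, 1.0).astype(cp.float32)

# Feed the processed buffer straight into the next model via IO binding.
sess_b = ort.InferenceSession("second_model.onnx",
                              providers=["CUDAExecutionProvider"])
binding_b = sess_b.io_binding()
binding_b.bind_input(
    name="input",                      # hypothetical input name
    device_type="cuda",
    device_id=0,
    element_type=np.float32,
    shape=processed.shape,
    buffer_ptr=processed.data.ptr,     # raw device pointer from CuPy
)
binding_b.bind_output("output", "cuda")
sess_b.run_with_iobinding(binding_b)
```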