Export to onnx of a standard Detectron2 zoo faster-rcnn model generates a ReduceMax op not supported by ONNXRT TensorRT EP #4896
I could reproduce your problem under Ubuntu 20.04. I had to change the order of loading the weights to […]
I did not get any issue with either the CPUExecutionProvider or the CUDAExecutionProvider. ONNX Runtime was even about 30% faster than D2 on CPU only; I did not measure CUDA inference time, but it was pretty fast. The main problem is that TensorRT does not accept the ReduceMax instruction in the ONNX model, and I could not split the model into subgraphs to assign the subgraph with the faulty instruction to the CPU, so we are blocked with TensorRT. This D2 zoo model is just a way for me to demo the problem; my main goal is to fix my own app model, also Faster R-CNN based, which has the same issue, and the issue blocks its productization. The alternative is to rewrite the app in native PyTorch or try another EP, maybe TVM. Did you experience the same issue with the TensorRT EP?
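For reference, this is roughly how the exported model can be run under the two providers that work; a minimal sketch, assuming the export landed at `output/faster_rcnn_fpn.onnx` (the path used later in this thread) and that the traced model takes a single CHW float32 image tensor, which is what Detectron2's tracing export typically produces:

```python
import numpy as np
import onnxruntime as ort

MODEL = "output/faster_rcnn_fpn.onnx"  # assumed export path

for providers in (["CPUExecutionProvider"], ["CUDAExecutionProvider"]):
    sess = ort.InferenceSession(MODEL, providers=providers)
    # Random data stands in for a real RGB image here.
    image = np.random.rand(3, 800, 800).astype(np.float32)
    outputs = sess.run(None, {sess.get_inputs()[0].name: image})
    print(providers[0], [o.shape for o in outputs])
```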
Yes, with the TensorRT EP I get the exact same error.
Good, we are on the same page. Is there a workaround, if not a real fix, available?
My workaround is to not use ONNX Runtime, but instead use plain TensorRT. This tutorial is a good start: https://github.com/NVIDIA/TensorRT/tree/release/8.6/samples/python/detectron2
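For readers taking that route, the engine-build step looks roughly like this in the TensorRT Python API; a sketch only, since the tutorial above also rewrites the graph with NVIDIA's tooling before parsing, and the file paths here are assumptions:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the exported ONNX graph; this is where unsupported ops are reported.
with open("output/faster_rcnn_fpn.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional FP16 / Tensor Core path
engine_bytes = builder.build_serialized_network(network, config)
with open("faster_rcnn_fpn.engine", "wb") as f:
    f.write(engine_bytes)
```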
My company wants the implementation to be portable to CPU/GPU and to any GPU vendor, hence the choice of ONNX Runtime; I cannot use native TRT as a workaround. The current workaround is to use the CUDA EP, but then I cannot benefit from the TRT EP's model optimization and NVIDIA Tensor Core acceleration via the TRT EP's FP16 or automatic mixed precision options. I really need this to be fixed in detectron2. I am going to check whether the Apache TVM EP for ONNX Runtime supports the generated ONNX format like the CUDA EP does. The final alternative would be to rewrite everything in pure PyTorch without detectron2; I would like to avoid that. By the way, I have seen no new D2 release since Nov 2021. Is Detectron2 still supported?
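For context, the TRT EP options referred to above are exposed as provider options in ONNX Runtime; a sketch, assuming an onnxruntime-gpu build with TensorRT enabled (this is exactly the configuration that currently fails on the ReduceMax node):

```python
import onnxruntime as ort

providers = [
    # Request FP16 so TensorRT can use Tensor Cores where it applies.
    ("TensorrtExecutionProvider", {"trt_fp16_enable": True}),
    "CUDAExecutionProvider",  # fallback for nodes TRT rejects
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("output/faster_rcnn_fpn.onnx", providers=providers)
```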
Any clue what the problem is due to and what workaround I could use (other than plain TensorRT)? I am trying the TVM EP to see whether TVM accepts the faulty ONNX instruction generated by Detectron2.
Any update on this?
What is the next step?
Apparently, the problem is similar to TensorRT not supporting the Mask R-CNN model.
The problem was narrowed down to the ONNXRT TensorRT Execution Provider implementation, which is maintained by the ONNX Runtime team (Microsoft). It shows up with `trtexec --onnx=output/faster_rcnn_fpn.onnx --verbose` ([06/30/2023-15:15:35] [I] TensorRT version: 8.6.1). So the ORT TRT EP implementation must be fixed first. Second, there seem to be other issues in the D2 faster-rcnn model's ONNX graph to fix after that. According to a nice NVIDIA engineer I talked to, this is a normal phase in running an ONNX model in TensorRT: to be able to optimize the graph well, TensorRT has stricter requirements than ONNX itself, so we have to use an NVIDIA tool like onnx-graphsurgeon to transform the ONNX model into another ONNX model that TensorRT supports. This was done, for example, for the Mask R-CNN ONNX model, and has yet to be done for the faster-rcnn model.
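The published Mask R-CNN conversion does this surgery with onnx-graphsurgeon; an equivalent rewrite for faster-rcnn is not published, so the sketch below (paths assumed) only locates the offending ReduceMax nodes rather than fixing them:

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("output/faster_rcnn_fpn.onnx"))
for node in graph.nodes:
    if node.op == "ReduceMax":
        # Opset <=17 carries axes as an attribute; opset 18 passes them as an input.
        print(node.name, "attrs:", node.attrs,
              "inputs:", [t.name for t in node.inputs])

# Any rewrite would happen here before re-exporting the graph.
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "output/faster_rcnn_fpn_gs.onnx")
```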
Hi, our team works on ORT TRT EP. |
We have been working for a while with the onnxruntime team to try to fix this issue. The problem comes from some PyTorch module creating a ReduceMax instruction with ONNX opset 13 and detectron2 creating another ReduceMax instruction with the latest API (as of opset 18), even though the D2 model is converted using opset 17 (opset 16 does no better). Eventually we could make the faster-rcnn model from the D2 model zoo work using two tools, onnxsim and symbolic_shape_infer (see the sketch below), but that did not solve my own D2 model: the problem only hid another one, a Squeeze(13) kernel missing from the ORT TRT EP. The only way to fix this was to split the graph into subgraphs, where (most of) the subgraphs not supported by the TRT EP fell back to the CUDA EP, yielding terrible inference performance. I have to resolve to either go with the CUDA EP only or completely rewrite the D2 modules in pure PyTorch. Nobody answered as to whether D2 was a good option to use; due to all my ONNX deployment problems, I would not recommend it.
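For anyone hitting the same wall, this is roughly how the two tools were chained; a sketch, assuming the model path used earlier in the thread:

```python
import onnx
from onnxsim import simplify
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

model = onnx.load("output/faster_rcnn_fpn.onnx")

# 1. onnxsim: constant-folds and removes redundant nodes.
model, ok = simplify(model)
assert ok, "onnxsim could not validate the simplified model"

# 2. Symbolic shape inference: fills in the shape information the TRT EP
#    needs in order to partition the graph.
model = SymbolicShapeInference.infer_shapes(model)
onnx.save(model, "output/faster_rcnn_fpn_simplified.onnx")
```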
I also use a configuration file from the EVA-02/det project built on Detectron2: https://github.com/baaivision/EVA/blob/master/EVA-02/det/projects/ViTDet/configs/eva2_o365_to_coco/eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py and try to export it to ONNX. After my modifications to export_model.py to use a lazy config, I can successfully export the ONNX with `./export_model.py --config-file /mnt/data1/download_new/EVA/EVA-master-project-lazy/EVA-02/det/projects/ViTDet/configs/eva2_o365_to_coco/eva2_o365_to_coco_cascade_mask_rcnn_vitdet_l_8attn_1536_lrd0p8.py --output output/trt_12.18/ --export-method tracing --format onnx`, then simplify the model with OnnxSlim (https://github.com/WeLoveAI/OnnxSlim), and then use TensorRT's ONNX parser to try to convert the ONNX to TRT.
I found that the part where the error was reported contained the NMS operator in the ONNX graph (viewed in Netron), and part of my model structure was: "(proposal_generator): RPN( […]
Instructions To Reproduce the 🐛 Bug:
What exact command you run:
python3 export_model.py --output onnx_output --sample-image input.jpg
Full logs or other relevant observations:
Unfortunately, this requires an input RGB PNG or JPEG image (unless you randomize the input in the code above).
Expected behavior:
With the TensorRT execution provider, the code above should work as well as it does with the CUDAExecutionProvider (or the CPUExecutionProvider).
That means the detectron2 export to ONNX should generate an onnx::ReduceMax call with no axes argument; a snippet to verify this is sketched below.
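A quick check of what the exporter actually emitted, printing whether each ReduceMax node carries an `axes` attribute (opset <=17 style) or takes axes as a second input (opset 18 style); the model path is an assumption:

```python
import onnx

model = onnx.load("output/faster_rcnn_fpn.onnx")
for node in model.graph.node:
    if node.op_type == "ReduceMax":
        print(node.name,
              "inputs:", list(node.input),
              "attributes:", [a.name for a in node.attribute])
```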
Environment:
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.35
Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.3.18-150300.59.63-default-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 8000
Nvidia driver version: 515.76
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 7
BogoMIPS: 5387.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 77 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0+cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] Could not collect