Checklist
1. I have searched related issues but cannot get the expected help.
2. I have read the FAQ documentation but cannot get the expected help.
3. The bug has not been fixed in the latest version.
Describe the bug
When exporting a pre-trained mmseg model to ONNX, the output does not match the expected dimensions. Specifically, the model's input size is (1024, 2048), and by the model's design the output size should be (512, 1024). However, running inference with the exported ONNX model produces an output of (1024, 2048), which is inconsistent with the expected size. I need to resolve this so that the ONNX model's output aligns with that of the original mmseg model.
So far, the only workaround that has resolved the discrepancy is exporting probability feature maps instead of the default output, which lets the ONNX model produce the expected (512, 1024) output rather than the incorrect (1024, 2048).
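For reference, a minimal sketch of how the exported model's output shape can be checked with ONNX Runtime. The file path `work_dir/end2end.onnx` and the assumption that the first graph input takes an NCHW float tensor are placeholders based on mmdeploy's usual export layout, not details taken from this report:

```python
# Minimal shape check for the exported model. The path "work_dir/end2end.onnx"
# is an assumed placeholder; adjust it to the actual export location.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "work_dir/end2end.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name           # typically "input" for mmdeploy exports
dummy = np.random.rand(1, 3, 1024, 2048).astype(np.float32)  # NCHW, H=1024, W=2048

outputs = sess.run(None, {input_name: dummy})
for out in outputs:
    # Reported behavior: spatial size 1024x2048; expected: 512x1024.
    print(out.shape)
```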
Reproduction
none
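No reproduction steps were provided. For context, a deploy config along the following lines is presumably what such an export uses, with the probability-map workaround expressed by disabling the final argmax. Every value here (opset, input shape, tensor names, the `with_argmax` flag) is an assumption sketched from mmdeploy's documented mmseg deployment configs, not taken from this report:

```python
# Hypothetical mmdeploy deploy config (ONNX Runtime backend, static 1024x2048 input),
# sketched from mmdeploy's mmseg examples; values are assumptions, not from the report.
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    input_names=['input'],
    output_names=['output'],
    input_shape=[2048, 1024],        # [width, height] of the static input
    save_file='end2end.onnx')

codebase_config = dict(
    type='mmseg',
    task='Segmentation',
    with_argmax=False)               # export probability maps instead of argmax labels

backend_config = dict(type='onnxruntime')
```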
Environment
10/11 22:56:24 - mmengine - INFO -
10/11 22:56:24 - mmengine - INFO - **********Environmental information**********
10/11 22:56:26 - mmengine - INFO - sys.platform: win32
10/11 22:56:26 - mmengine - INFO - Python: 3.10.15 | packaged by Anaconda, Inc. | (main, Oct 3 2024, 07:22:19) [MSC v.1929 64 bit (AMD64)]
10/11 22:56:26 - mmengine - INFO - CUDA available: True
10/11 22:56:26 - mmengine - INFO - MUSA available: False
10/11 22:56:26 - mmengine - INFO - numpy_random_seed: 2147483648
10/11 22:56:26 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 4060
10/11 22:56:26 - mmengine - INFO - CUDA_HOME: None
10/11 22:56:26 - mmengine - INFO - MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.41.34120 for x64
10/11 22:56:26 - mmengine - INFO - GCC: n/a
10/11 22:56:26 - mmengine - INFO - PyTorch: 1.12.1
10/11 22:56:26 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 2019
- LAPACK is enabled (usually provided by MKL)
- CPU capability usage: AVX2
- CUDA Runtime 11.3
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.3.2 (built against CUDA 11.5)
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,
10/11 22:56:26 - mmengine - INFO - TorchVision: 0.13.1
10/11 22:56:26 - mmengine - INFO - OpenCV: 4.10.0
10/11 22:56:26 - mmengine - INFO - MMEngine: 0.10.5
10/11 22:56:26 - mmengine - INFO - MMCV: 2.2.0
10/11 22:56:26 - mmengine - INFO - MMCV Compiler: MSVC 192930148
10/11 22:56:26 - mmengine - INFO - MMCV CUDA Compiler: 11.3
10/11 22:56:26 - mmengine - INFO - MMDeploy: 1.3.1+unknown
10/11 22:56:26 - mmengine - INFO -
10/11 22:56:26 - mmengine - INFO - **********Backend information**********
10/11 22:56:26 - mmengine - INFO - tensorrt: None
10/11 22:56:26 - mmengine - INFO - ONNXRuntime: None
10/11 22:56:26 - mmengine - INFO - ONNXRuntime-gpu: 1.18.1
10/11 22:56:26 - mmengine - INFO - ONNXRuntime custom ops: Available
10/11 22:56:26 - mmengine - INFO - pplnn: None
10/11 22:56:26 - mmengine - INFO - ncnn: None
10/11 22:56:26 - mmengine - INFO - snpe: None
10/11 22:56:26 - mmengine - INFO - openvino: None
10/11 22:56:26 - mmengine - INFO - torchscript: 1.12.1
10/11 22:56:26 - mmengine - INFO - torchscript custom ops: NotAvailable
10/11 22:56:26 - mmengine - INFO - rknn-toolkit: None
10/11 22:56:26 - mmengine - INFO - rknn-toolkit2: None
10/11 22:56:26 - mmengine - INFO - ascend: None
10/11 22:56:26 - mmengine - INFO - coreml: None
10/11 22:56:26 - mmengine - INFO - tvm: None
10/11 22:56:26 - mmengine - INFO - vacc: None
10/11 22:56:26 - mmengine - INFO -
10/11 22:56:26 - mmengine - INFO - **********Codebase information**********
10/11 22:56:26 - mmengine - INFO - mmdet: None
10/11 22:56:26 - mmengine - INFO - mmseg: 1.2.2
10/11 22:56:26 - mmengine - INFO - mmpretrain: None
10/11 22:56:26 - mmengine - INFO - mmocr: None
10/11 22:56:26 - mmengine - INFO - mmagic: None
10/11 22:56:26 - mmengine - INFO - mmdet3d: None
10/11 22:56:26 - mmengine - INFO - mmpose: None
10/11 22:56:26 - mmengine - INFO - mmrotate: None
10/11 22:56:26 - mmengine - INFO - mmaction: None
10/11 22:56:26 - mmengine - INFO - mmrazor: None
10/11 22:56:26 - mmengine - INFO - mmyolo: None
Error traceback
No response