[Bug] No module named 'mmdeploy.codebase.mmrazor' #2832

Open
3 tasks done
Sudae827 opened this issue Oct 27, 2024 · 0 comments
Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

Issue converting a pruned RetinaNet from mmrazor to ONNX using mmdeploy

I am trying to convert a pruned and finetuned RetinaNet model from the mmrazor library to ONNX format using mmdeploy, but the conversion fails with the error below. I have checked my environment setup and confirmed that mmrazor is listed in the codebase information.

Can anyone help me resolve this issue? It is quite urgent. Thank you very much!
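
For reference, here is a quick check (a minimal diagnostic sketch of my own, assuming the pip-installed mmdeploy under site-packages) to confirm whether the installed package actually contains the mmrazor codebase subpackage:

import importlib.util
import os

import mmdeploy.codebase

# Directory where the installed mmdeploy keeps its per-codebase support code.
codebase_dir = os.path.dirname(mmdeploy.codebase.__file__)
print('available codebase packages:', sorted(os.listdir(codebase_dir)))

# None here means the 'mmdeploy.codebase.mmrazor' subpackage is simply not shipped
# with this installation, which matches the warning in the log below.
print(importlib.util.find_spec('mmdeploy.codebase.mmrazor'))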

Command:
razor_config=configs/pruning/mmdet/group_fisher/retinanet/group_fisher_act_deploy_retinanet_r50_fpn_1x_coco.py
deploy_config=/code/mmdeploy/configs/mmdet/detection/detection_onnxruntime_static.py
python /code/mmdeploy/tools/deploy.py $deploy_config \
    $razor_config \
    https://download.openmmlab.com/mmrazor/v1/pruning/group_fisher/retinanet/act/group_fisher_act_finetune_retinanet_r50_fpn_1x_coco.pth \
    /code/mmdeploy/tests/data/tiger.jpeg \
    --work-dir work_dirs/mmdeploy

Log:
10/27 13:03:05 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
10/27 13:03:06 - mmengine - WARNING - Import mmdeploy.codebase.mmrazor.deploy failedPlease check whether the module is the custom module.No module named 'mmdeploy.codebase.mmrazor'
Process Process-2:
Traceback (most recent call last):
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/pytorch2onnx.py", line 61, in torch2onnx
    task_processor = build_task_processor(model_cfg, deploy_cfg, device)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/utils/utils.py", line 46, in build_task_processor
    import_codebase(codebase_type, custom_module_list)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/codebase/__init__.py", line 36, in import_codebase
    codebase = get_codebase_class(codebase_type)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/codebase/base/mmcodebase.py", line 86, in get_codebase_class
    return CODEBASE.build({'type': codebase.value})
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
TypeError: 'module' object is not callable
10/27 13:03:06 - mmengine - ERROR - /root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.

Reproduction

razor_config=configs/pruning/mmdet/group_fisher/retinanet/group_fisher_act_deploy_retinanet_r50_fpn_1x_coco.py
deploy_config=/code/mmdeploy/configs/mmdet/detection/detection_onnxruntime_static.py

python /code/mmdeploy/tools/deploy.py $deploy_config $razor_config https://download.openmmlab.com/mmrazor/v1/pruning/group_fisher/retinanet/act/group_fisher_act_finetune_retinanet_r50_fpn_1x_coco.pth /code/mmdeploy/tests/data/tiger.jpeg --work-dir work_dirs/mmdeploy
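
For what it is worth, the same failure also shows up without the subprocess wrapper by calling the deploy API directly. This is only a diagnostic sketch: it assumes the config paths from the command above (the razor config is relative to the mmrazor repo root) and that build_task_processor is importable from mmdeploy.apis.utils, as the traceback suggests.

from mmengine import Config

# build_task_processor is the call the traceback shows failing inside tools/deploy.py.
from mmdeploy.apis.utils import build_task_processor

model_cfg = Config.fromfile(
    'configs/pruning/mmdet/group_fisher/retinanet/'
    'group_fisher_act_deploy_retinanet_r50_fpn_1x_coco.py')
deploy_cfg = Config.fromfile(
    '/code/mmdeploy/configs/mmdet/detection/detection_onnxruntime_static.py')

# Fails with the same "No module named 'mmdeploy.codebase.mmrazor'" warning followed by
# the TypeError when the installed mmdeploy lacks the mmrazor codebase support.
task_processor = build_task_processor(model_cfg, deploy_cfg, device='cpu')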

Environment

10/27 12:52:22 - mmengine - INFO - **********Environmental information**********
10/27 12:52:24 - mmengine - INFO - sys.platform: linux
10/27 12:52:24 - mmengine - INFO - Python: 3.8.17 (default, Jul  5 2023, 21:04:15) [GCC 11.2.0]
10/27 12:52:24 - mmengine - INFO - CUDA available: True
10/27 12:52:24 - mmengine - INFO - numpy_random_seed: 2147483648
10/27 12:52:24 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 3090
10/27 12:52:24 - mmengine - INFO - CUDA_HOME: /usr/local/cuda
10/27 12:52:24 - mmengine - INFO - NVCC: Cuda compilation tools, release 11.3, V11.3.109
10/27 12:52:24 - mmengine - INFO - GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
10/27 12:52:24 - mmengine - INFO - PyTorch: 1.13.1
10/27 12:52:24 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.7
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.5
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

10/27 12:52:24 - mmengine - INFO - TorchVision: 0.14.1
10/27 12:52:24 - mmengine - INFO - OpenCV: 4.8.0
10/27 12:52:24 - mmengine - INFO - MMEngine: 0.8.4
10/27 12:52:24 - mmengine - INFO - MMCV: 2.0.1
10/27 12:52:24 - mmengine - INFO - MMCV Compiler: GCC 9.3
10/27 12:52:24 - mmengine - INFO - MMCV CUDA Compiler: 11.7
10/27 12:52:24 - mmengine - INFO - MMDeploy: 1.3.1+
10/27 12:52:24 - mmengine - INFO - 

10/27 12:52:24 - mmengine - INFO - **********Backend information**********
10/27 12:52:24 - mmengine - INFO - tensorrt:    None
10/27 12:52:24 - mmengine - INFO - ONNXRuntime: 1.16.3
10/27 12:52:24 - mmengine - INFO - ONNXRuntime-gpu:     None
10/27 12:52:24 - mmengine - INFO - ONNXRuntime custom ops:      Available
10/27 12:52:24 - mmengine - INFO - pplnn:       None
10/27 12:52:24 - mmengine - INFO - ncnn:        1.0.20231228
10/27 12:52:24 - mmengine - INFO - ncnn custom ops:     NotAvailable
10/27 12:52:24 - mmengine - INFO - snpe:        None
10/27 12:52:24 - mmengine - INFO - openvino:    None
10/27 12:52:24 - mmengine - INFO - torchscript: 1.13.1
10/27 12:52:24 - mmengine - INFO - torchscript custom ops:      NotAvailable
10/27 12:52:24 - mmengine - INFO - rknn-toolkit:        None
10/27 12:52:24 - mmengine - INFO - rknn-toolkit2:       None
10/27 12:52:24 - mmengine - INFO - ascend:      None
10/27 12:52:24 - mmengine - INFO - coreml:      None
10/27 12:52:24 - mmengine - INFO - tvm: None
10/27 12:52:24 - mmengine - INFO - vacc:        None
10/27 12:52:24 - mmengine - INFO - 

10/27 12:52:24 - mmengine - INFO - **********Codebase information**********
10/27 12:52:25 - mmengine - INFO - mmdet:       3.1.0
10/27 12:52:25 - mmengine - INFO - mmseg:       None
10/27 12:52:25 - mmengine - INFO - mmpretrain:  1.2.0
10/27 12:52:25 - mmengine - INFO - mmocr:       None
10/27 12:52:25 - mmengine - INFO - mmagic:      None
10/27 12:52:25 - mmengine - INFO - mmdet3d:     None
10/27 12:52:25 - mmengine - INFO - mmpose:      None
10/27 12:52:25 - mmengine - INFO - mmrotate:    None
10/27 12:52:25 - mmengine - INFO - mmaction:    None
10/27 12:52:25 - mmengine - INFO - mmrazor:     1.0.0
10/27 12:52:25 - mmengine - INFO - mmyolo:      0.6.0

Error traceback

10/27 13:05:17 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
10/27 13:05:17 - mmengine - WARNING - Import mmdeploy.codebase.mmrazor.deploy failedPlease check whether the module is the custom module.No module named 'mmdeploy.codebase.mmrazor'
Process Process-2:
Traceback (most recent call last):
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/pytorch2onnx.py", line 61, in torch2onnx
    task_processor = build_task_processor(model_cfg, deploy_cfg, device)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/utils/utils.py", line 46, in build_task_processor
    import_codebase(codebase_type, custom_module_list)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/codebase/__init__.py", line 36, in import_codebase
    codebase = get_codebase_class(codebase_type)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/codebase/base/mmcodebase.py", line 86, in get_codebase_class
    return CODEBASE.build({'type': codebase.value})
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
TypeError: 'module' object is not callable
10/27 13:05:17 - mmengine - ERROR - /root/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.