
Commit e5d994e

Add support CO-DETR (MMDetection)
1 parent db3a211 commit e5d994e

6 files changed: +371 -5 lines

README.md

+4 -2
@@ -18,7 +18,7 @@ NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 /
* Support for non square models
* Models benchmarks
* Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing
-* Support for RT-DETR, YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, Gold-YOLO, RTMDet (MMYOLO), YOLOX, YOLOR, YOLOv9, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
+* Support for RT-DETR, CO-DETR (MMDetection), YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, Gold-YOLO, RTMDet (MMYOLO), YOLOX, YOLOR, YOLOv9, YOLOv8, YOLOv7, YOLOv6 and YOLOv5 using ONNX conversion with GPU post-processing
* GPU bbox parser
* Custom ONNX model parser
* Dynamic batch-size
@@ -49,6 +49,7 @@ NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 /
* [DAMO-YOLO usage](docs/DAMOYOLO.md)
* [PP-YOLOE / PP-YOLOE+ usage](docs/PPYOLOE.md)
* [YOLO-NAS usage](docs/YOLONAS.md)
+* [CO-DETR (MMDetection) usage](docs/CODETR.md)
* [RT-DETR PyTorch usage](docs/RTDETR_PyTorch.md)
* [RT-DETR Paddle usage](docs/RTDETR_Paddle.md)
* [RT-DETR Ultralytics usage](docs/RTDETR_Ultralytics.md)
@@ -220,8 +221,9 @@ NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 /
* [RTMDet (MMYOLO)](https://github.com/open-mmlab/mmyolo/tree/main/configs/rtmdet)
* [Gold-YOLO](https://github.com/huawei-noah/Efficient-Computing/tree/master/Detection/Gold-YOLO)
* [DAMO-YOLO](https://github.com/tinyvision/DAMO-YOLO)
-* [PP-YOLOE / PP-YOLOE+](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe)
+* [PP-YOLOE / PP-YOLOE+](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.8/configs/ppyoloe)
* [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md)
+* [CO-DETR (MMDetection)](https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR)
* [RT-DETR](https://github.com/lyuwenyu/RT-DETR)

##

config_infer_primary_codetr.txt

+28
@@ -0,0 +1,28 @@
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=co_dino_5scale_r50_1x_coco-7481f903.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=0
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

docs/CODETR.md

+187
@@ -0,0 +1,187 @@
# CO-DETR (MMDetection) usage

* [Convert model](#convert-model)
* [Compile the lib](#compile-the-lib)
* [Edit the config_infer_primary_codetr file](#edit-the-config_infer_primary_codetr-file)
* [Edit the deepstream_app_config file](#edit-the-deepstream_app_config-file)
* [Testing the model](#testing-the-model)

##

### Convert model

#### 1. Download the CO-DETR (MMDetection) repo and install the requirements

```
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip3 install openmim
mim install mmengine
mim install mmdeploy
mim install "mmcv>=2.0.0rc4,<2.2.0"
pip3 install -v -e .
pip3 install onnx onnxslim onnxruntime
```

**NOTE**: It is recommended to use Python virtualenv.

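For example, a minimal virtualenv sketch, created before running the installs above (the environment name `codetr-env` is only an illustration):

```
python3 -m venv codetr-env
source codetr-env/bin/activate
```
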
#### 2. Copy the converter

Copy the `export_codetr.py` file from the `DeepStream-Yolo/utils` directory to the `mmdetection` folder.

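For example, assuming `DeepStream-Yolo` and `mmdetection` were cloned side by side (adjust the path to your layout):

```
cp ../DeepStream-Yolo/utils/export_codetr.py .
```
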
#### 3. Download the model

Download the `pth` file from [CO-DETR (MMDetection)](https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR) releases (example for Co-DINO R50 DETR)

```
wget https://download.openmmlab.com/mmdetection/v3.0/codetr/co_dino_5scale_r50_1x_coco-7481f903.pth
```

**NOTE**: You can use your custom model.

#### 4. Convert model

Generate the ONNX model file (example for Co-DINO R50 DETR)

```
python3 export_codetr.py -w co_dino_5scale_r50_1x_coco-7481f903.pth -c projects/CO-DETR/configs/codino/co_dino_5scale_r50_8xb2_1x_coco.py --dynamic
```

**NOTE**: To change the inference size (default: 640)

```
-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
```

Example for 1280

```
-s 1280
```

or

```
-s 1280 1280
```

**NOTE**: To simplify the ONNX model (DeepStream >= 6.0)

```
--simplify
```

**NOTE**: To use dynamic batch-size (DeepStream >= 6.1)

```
--dynamic
```

**NOTE**: To use static batch-size (example for batch-size = 4)

```
--batch 4
```

**NOTE**: If you are using DeepStream 5.1, remove the `--dynamic` arg and use opset 12 or lower. The default opset is 11.

```
--opset 12
```

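For instance, a sketch combining the flags above into one export at 1280x1280 with a simplified ONNX and static batch-size 4 (assuming these options can be combined in a single invocation):

```
python3 export_codetr.py -w co_dino_5scale_r50_1x_coco-7481f903.pth -c projects/CO-DETR/configs/codino/co_dino_5scale_r50_8xb2_1x_coco.py -s 1280 --simplify --batch 4
```
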
#### 5. Copy generated files

Copy the generated ONNX model file and labels.txt file (if generated) to the `DeepStream-Yolo` folder.

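For example, assuming the exporter names its output `co_dino_5scale_r50_1x_coco-7481f903.pth.onnx` (as in the config example later in this doc) and that `DeepStream-Yolo` sits next to `mmdetection`; drop `labels.txt` if it was not generated:

```
cp co_dino_5scale_r50_1x_coco-7481f903.pth.onnx labels.txt ../DeepStream-Yolo/
```
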
##

### Compile the lib

1. Open the `DeepStream-Yolo` folder and compile the lib

2. Set the `CUDA_VER` according to your DeepStream version

```
export CUDA_VER=XY.Z
```

* x86 platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 = 12.1
DeepStream 6.2 = 11.8
DeepStream 6.1.1 = 11.7
DeepStream 6.1 = 11.6
DeepStream 6.0.1 / 6.0 = 11.4
DeepStream 5.1 = 11.1
```

* Jetson platform

```
DeepStream 7.1 = 12.6
DeepStream 7.0 / 6.4 = 12.2
DeepStream 6.3 / 6.2 / 6.1.1 / 6.1 = 11.4
DeepStream 6.0.1 / 6.0 / 5.1 = 10.2
```

3. Make the lib

```
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```

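For example, on an x86 host running DeepStream 7.1, steps 2 and 3 would be (use the `CUDA_VER` value for your platform from the tables above):

```
export CUDA_VER=12.6
make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo
```
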
##

### Edit the config_infer_primary_codetr file

Edit the `config_infer_primary_codetr.txt` file according to your model (example for Co-DINO R50 DETR with 80 classes)

```
[property]
...
onnx-file=co_dino_5scale_r50_1x_coco-7481f903.pth.onnx
...
num-detected-classes=80
...
parse-bbox-func-name=NvDsInferParseYolo
...
```

**NOTE**: The **CO-DETR (MMDetection)** preprocessing resizes the input with left/top padding. To get better accuracy, use

```
[property]
...
maintain-aspect-ratio=1
symmetric-padding=0
...
```

##

### Edit the deepstream_app_config file

```
...
[primary-gie]
...
config-file=config_infer_primary_codetr.txt
```

##

### Testing the model

```
deepstream-app -c deepstream_app_config.txt
```

**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).

**NOTE**: For more information about custom model configuration (`batch-size`, `network-mode`, etc), please check the [`docs/customModels.md`](customModels.md) file.

docs/PPYOLOE.md

+2 -2
@@ -14,7 +14,7 @@
#### 1. Download the PaddleDetection repo and install the requirements

-https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md
+https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.8/docs/tutorials/INSTALL.md

**NOTE**: It is recommended to use Python virtualenv.

@@ -24,7 +24,7 @@ Copy the `export_ppyoloe.py` file from `DeepStream-Yolo/utils` directory to the
#### 3. Download the model

-Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppyoloe) releases (example for PP-YOLOE+_s)
+Download the `pdparams` file from [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.8/configs/ppyoloe) releases (example for PP-YOLOE+_s)

```
wget https://paddledet.bj.bcebos.com/models/ppyoloe_plus_crn_s_80e_coco.pdparams

docs/RTDETR_Paddle.md

+1 -1
@@ -14,7 +14,7 @@
#### 1. Download the PaddleDetection repo and install the requirements

-https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/INSTALL.md
+https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.8/docs/tutorials/INSTALL.md

```
git clone https://github.com/lyuwenyu/RT-DETR.git
