
Can the coco_tracking.onnx model be converted to a TRT model for inference? #45

Open
shliang0603 opened this issue Nov 13, 2020 · 0 comments

Comments

@shliang0603

@hopef Hi, can the coco_tracking.onnx model you provide be converted to a TRT model and used for inference?

When I try to convert the provided coco_tracking.onnx model to a TRT model, I get an error.

(I am using TensorRT version 7.0.)

shl@zhihui-mint:~/shl_res/project_source/CRF/TensorRT-7.0.0.11/bin$ cat 9_CenterTrack_onnx_512_to_trt_fp16 
./trtexec --onnx=/home/shl/CenterTrack/models/coco_tracking.onnx --explicitBatch --saveEngine=/home/shl/CenterTrack/models/coco_tracking.trt --workspace=5120 --fp16

(yolov4) shl@zhihui-mint:~/shl_res/project_source/CRF/TensorRT-7.0.0.11/bin$ ./trtexec --onnx=/home/shl/CenterTrack/models/coco_tracking.onnx --explicitBatch --saveEngine=/home/shl/CenterTrack/models/coco_tracking.trt --workspace=5120 --fp16
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/shl/CenterTrack/models/coco_tracking.onnx --explicitBatch --saveEngine=/home/shl/CenterTrack/models/coco_tracking.trt --workspace=5120 --fp16
[11/13/2020-19:40:47] [I] === Model Options ===
[11/13/2020-19:40:47] [I] Format: ONNX
[11/13/2020-19:40:47] [I] Model: /home/shl/CenterTrack/models/coco_tracking.onnx
[11/13/2020-19:40:47] [I] Output:
[11/13/2020-19:40:47] [I] === Build Options ===
[11/13/2020-19:40:47] [I] Max batch: explicit
[11/13/2020-19:40:47] [I] Workspace: 5120 MB
[11/13/2020-19:40:47] [I] minTiming: 1
[11/13/2020-19:40:47] [I] avgTiming: 8
[11/13/2020-19:40:47] [I] Precision: FP16
[11/13/2020-19:40:47] [I] Calibration: 
[11/13/2020-19:40:47] [I] Safe mode: Disabled
[11/13/2020-19:40:47] [I] Save engine: /home/shl/CenterTrack/models/coco_tracking.trt
[11/13/2020-19:40:47] [I] Load engine: 
[11/13/2020-19:40:47] [I] Inputs format: fp32:CHW
[11/13/2020-19:40:47] [I] Outputs format: fp32:CHW
[11/13/2020-19:40:47] [I] Input build shapes: model
[11/13/2020-19:40:47] [I] === System Options ===
[11/13/2020-19:40:47] [I] Device: 0
[11/13/2020-19:40:47] [I] DLACore: 
[11/13/2020-19:40:47] [I] Plugins:
[11/13/2020-19:40:47] [I] === Inference Options ===
[11/13/2020-19:40:47] [I] Batch: Explicit
[11/13/2020-19:40:47] [I] Iterations: 10
[11/13/2020-19:40:47] [I] Duration: 3s (+ 200ms warm up)
[11/13/2020-19:40:47] [I] Sleep time: 0ms
[11/13/2020-19:40:47] [I] Streams: 1
[11/13/2020-19:40:47] [I] ExposeDMA: Disabled
[11/13/2020-19:40:47] [I] Spin-wait: Disabled
[11/13/2020-19:40:47] [I] Multithreading: Disabled
[11/13/2020-19:40:47] [I] CUDA Graph: Disabled
[11/13/2020-19:40:47] [I] Skip inference: Disabled
[11/13/2020-19:40:47] [I] Inputs:
[11/13/2020-19:40:47] [I] === Reporting Options ===
[11/13/2020-19:40:47] [I] Verbose: Disabled
[11/13/2020-19:40:47] [I] Averages: 10 inferences
[11/13/2020-19:40:47] [I] Percentile: 99
[11/13/2020-19:40:47] [I] Dump output: Disabled
[11/13/2020-19:40:47] [I] Profile: Disabled
[11/13/2020-19:40:47] [I] Export timing to JSON file: 
[11/13/2020-19:40:47] [I] Export output to JSON file: 
[11/13/2020-19:40:47] [I] Export profile to JSON file: 
[11/13/2020-19:40:47] [I] 
----------------------------------------------------------------
Input filename:   /home/shl/CenterTrack/models/coco_tracking.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[11/13/2020-19:40:48] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(the INT64 warning above is repeated verbatim ~55 more times, once per affected weight; identical lines omitted)
While parsing node number 136 [Plugin]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: Plugin
[11/13/2020-19:40:48] [E] Failed to parse onnx file
[11/13/2020-19:40:48] [E] Parsing model failed
[11/13/2020-19:40:48] [E] Engine creation failed
[11/13/2020-19:40:48] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/shl/CenterTrack/models/coco_tracking.onnx --explicitBatch --saveEngine=/home/shl/CenterTrack/models/coco_tracking.trt --workspace=5120 --fp16
shl@zhihui-mint:~/shl_res/project_source/CRF/TensorRT-7.0.0.11/bin$
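The decisive lines are at the end of the log: node number 136 has op type `Plugin`, a custom op for which the stock ONNX parser has no importer ("No importer registered for op: Plugin"). In CenterTrack exports this is most likely the DCNv2 / deformable-convolution layer in the DLA backbone, though that is an inference from the model, not stated in the log. One possible workaround, assuming a plugin library implementing that op has been built, is to have trtexec load it with the `--plugins` option before parsing; the library path and name below are placeholders, not files that exist in this setup:

```shell
# Hypothetical sketch: load a custom plugin library (.so) that implements
# the "Plugin" op (e.g. a DCNv2 plugin) so the ONNX parser can resolve
# node 136. The --plugins path is an assumption for illustration only.
./trtexec \
  --onnx=/home/shl/CenterTrack/models/coco_tracking.onnx \
  --explicitBatch \
  --plugins=/path/to/libdcn_plugin.so \
  --saveEngine=/home/shl/CenterTrack/models/coco_tracking.trt \
  --workspace=5120 \
  --fp16
```

Without such a plugin library (or an export of the model that replaces the custom op with standard ONNX ops), the parser will keep failing at the same node regardless of precision or workspace settings.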