Complete workflow for exporting INT8 #28

Hello, I don't see a complete workflow in the documentation for converting a model to INT8 and running inference. Could you add support for this?

Yes, this is supported. This repo follows the INT8 quantization scheme from tensorRT_Pro: PTQ quantization is implemented through the TensorRT C++ API, so you only need to specify the calibration dataset path when compiling. Example usage (a sketch of the int8process callback follows the snippet):

TRT::compile(
    mode,                   // FP32, FP16, or INT8
    test_batch_size,        // max batch size
    onnx_file,              // source ONNX file
    model_file,             // path to save the compiled engine
    {},
    int8process,
    "inference",
    "calibration_dataset",  // path to the calibration dataset
    "calib.entropy.cache",  // file name for the saved calibration cache
    "Calibrator::Entropy"   // calibrator; currently only Entropy and MinMax are supported
);
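
For reference, int8process is the preprocessing callback the calibrator invokes to fill the network input tensor from the calibration images. Below is a minimal sketch in the style of the tensorRT_Pro examples; the header path, the 640x640 input size, and the mean/std values are assumptions and must match your model's actual inference-time preprocessing:

#include <opencv2/opencv.hpp>
#include "builder/trt_builder.hpp"  // header path assumed from the tensorRT_Pro layout

auto int8process = [](int current, int count,
                      const std::vector<std::string>& files,
                      std::shared_ptr<TRT::Tensor>& tensor){
    // Called once per calibration batch: "files" holds the image paths for
    // this batch and "tensor" is the network input tensor to fill.
    for(int i = 0; i < (int)files.size(); ++i){
        cv::Mat image = cv::imread(files[i]);
        cv::resize(image, image, cv::Size(640, 640));  // assumed network input size
        float mean[] = {0, 0, 0};                      // placeholder normalization
        float std[]  = {1, 1, 1};
        tensor->set_norm_mat(i, image, mean, std);     // write image into batch slot i
    }
};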

Thanks. Now I get the error below. I'm using TensorRT 8.6.1; is the onnx-tensorrt version mismatched?

[2024-06-10 12:27:13][error][trt_builder.cpp:30]:NVInfer: /home/yr/yr/code/cv/object_detection/tensorRT_Pro-YOLOv8/src/tensorRT/onnx_parser/ModelImporter.cpp:739: --- End node ---

That's an operator parsing problem: the message means this version of TensorRT does not support the GatherElements operator with dynamic shapes. The onnxparser used by default is version 8.0, so you may need to replace onnx_parser manually. For details, see: RT-DETR推理详解及部署实现

OK, thank you.
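
To round out the complete workflow asked about in this issue: once the INT8 engine is compiled, inference is the same as for FP32/FP16 engines. Below is a minimal sketch assuming tensorRT_Pro's low-level TRT::load_infer interface; the file names are placeholders, and decoding the raw YOLOv8 output into boxes (plus NMS) is model-specific and omitted:

#include <opencv2/opencv.hpp>
#include "infer/trt_infer.hpp"  // header path assumed from the tensorRT_Pro layout

int main(){
    // Load the serialized INT8 engine produced by TRT::compile.
    auto engine = TRT::load_infer("yolov8.int8.trtmodel");  // placeholder file name
    if(engine == nullptr) return -1;
    engine->print();  // show input/output bindings

    // Preprocess exactly as in int8process and write into batch slot 0.
    cv::Mat image = cv::imread("test.jpg");
    cv::resize(image, image, cv::Size(640, 640));  // assumed network input size
    float mean[] = {0, 0, 0};
    float std[]  = {1, 1, 1};
    engine->input(0)->set_norm_mat(0, image, mean, std);

    engine->forward();                // synchronous inference
    auto output = engine->output(0);  // raw predictions; decode per your model
    return 0;
}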