Assertion failed: scaleAllPositive && "Scale coefficients must all be positive" #13
Comments
It looks like there might be a bug in our PyTorch-Quantization tool, specifically related to the generation of zero or negative scales, which should never occur. Did you follow the installation process described at installation, or did you set up the environment yourself?
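One way to confirm the exporter produced a bad scale is to scan the exported model's quantizer scales for non-positive values before handing the ONNX file to TensorRT. A minimal sketch of the check, assuming you have already extracted `(tensor_name, scale)` pairs (e.g. from the model's `QuantizeLinear` initializers; the extraction itself is omitted, and the function name is hypothetical):

```python
def find_bad_scales(scales):
    """Return quantizer names whose scale is zero or negative, which
    would trip TensorRT's 'Scale coefficients must all be positive'
    assertion at engine-build time."""
    return [name for name, s in scales if s <= 0.0]

# Toy example with hypothetical quantizer names:
bad = find_bad_scales([("conv1_q", 0.031), ("conv2_q", 0.0), ("conv3_q", -0.002)])
# bad == ["conv2_q", "conv3_q"]
```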
No, actually I cannot install TensorRT version 10, since I have to deploy the TRT model on a Jetson Orin, so I had to set up my own environment. Is it related to the TensorRT version? I will try installing all the dependencies except TensorRT from your script, then I'll let you know the result :)
Ok. I have a Jetson Orin Nano here. I'll find some free time and test it.
The reason I use a lower version of TensorRT is that if I convert the ONNX into TensorRT on a 4090 or 3090, the engine cannot be deployed on the Orin NX. It might be that the discrepancies between the Orin and RTX architectures make a TRT file built on the server unusable on the Jetson. Anyway, I'll keep trying!
I solved the problem. As you mentioned before, the error may occur due to pytorch-quantization; that library looked like it had a bug. So I installed the package from source, starting with: git clone https://github.com/NVIDIA/TensorRT.git. After installing pytorch-quantization from source, I converted the .pt into ONNX and could deploy on the Orin NX.
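For readers hitting the same bug, the from-source install the commenter describes is roughly the following (the exact steps are assumptions based on the layout of the NVIDIA/TensorRT repository, not a verified recipe):

```shell
# Build pytorch-quantization from source instead of installing the pip wheel.
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT/tools/pytorch-quantization
pip install -r requirements.txt
python setup.py install
```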
If you don't mind, can you post only
Sure, why not. But I don't have any resources, including an Orin NX, for that report this week. Also, this model is not the same as the original YOLOv9, so I have additional work to do for the report (e.g. converting the original model and testing it). May I report it next week?
Hello, you want me to give a report on the Jetson Orin NX results, right? (Benchmark results were posted for batch sizes 1, 4, and 8.)
@levipereira Sir, do you have any plan to develop YOLOv9-QAT for the INT4 format?
@Wooho-Moon @levipereira I encountered the same problem and then reinstalled pytorch-quantization this way, but I still hit the same problem when I tried to convert the ONNX model to a TensorRT engine again under the guidance of @levipereira. Do you have a better solution?
No. |
You can rewrite the `compute_amax` function like this to fix this issue:

```python
def compute_amax(model, **kwargs):
```
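The snippet above is truncated. A minimal, hypothetical sketch of the usual fix for this class of bug, clamping non-positive calibration `amax` values so the derived scales stay strictly positive (the threshold and helper names are my assumptions, not the commenter's actual code, and plain Python lists stand in for `pytorch_quantization` tensors):

```python
EPS = 1e-7  # hypothetical minimum allowed amax


def sanitize_amax(amax_per_channel):
    """Replace zero/negative amax entries so that scale = amax / qmax > 0."""
    return [a if a > EPS else EPS for a in amax_per_channel]


def scales_from_amax(amax_per_channel, qmax=127):
    """Derive per-channel INT8 scales from sanitized amax values."""
    return [a / qmax for a in sanitize_amax(amax_per_channel)]


scales = scales_from_amax([1.5, 0.0, -0.2, 0.8])
assert all(s > 0 for s in scales)  # no zero/negative scales remain
```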
Thanks for the awesome work! I recently tried to fine-tune my own model based on YOLOv9 and obtained the .pt file successfully. After converting the .pt into ONNX, I tried to deploy the ONNX with TensorRT, and the problem occurs at this step.
The attached picture shows that there is a scale factor whose value is 0. I don't know exactly what I should do next. Could you give me some advice?
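For context, the step where this assertion fires is typically the engine build, e.g. a `trtexec` invocation along these lines (the file names are placeholders; `--onnx`, `--int8`, and `--saveEngine` are standard trtexec flags):

```shell
# Illustrative engine build from a QAT ONNX export; if any QuantizeLinear
# scale in the model is zero or negative, TensorRT aborts with
# "Scale coefficients must all be positive".
trtexec --onnx=yolov9-qat.onnx \
        --int8 \
        --saveEngine=yolov9-qat.engine
```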