Question: Guidance for low accuracy after QAT #18
Comments
You can adjust calibrate_model to use more representative data (we recommend at least 10% of your main dataset) by modifying the batch_size parameter here: Line 431 in c293d1f
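The idea of "at least 10% of your main dataset" can be sketched as a small helper that samples a calibration subset and groups it into batches. This is an illustrative sketch only: the function name select_calibration_subset and its defaults are assumptions, not the repo's calibrate_model API.

```python
import random

def select_calibration_subset(dataset_indices, fraction=0.10, batch_size=64, seed=0):
    """Pick a random ~`fraction` slice of the training set for calibration
    and group it into batches of `batch_size`. Illustrative defaults only;
    the real repo exposes this via calibrate_model's batch_size parameter."""
    rng = random.Random(seed)
    # ensure at least one full batch even for tiny datasets
    n = max(batch_size, int(len(dataset_indices) * fraction))
    subset = rng.sample(dataset_indices, min(n, len(dataset_indices)))
    # chunk into batches; a short trailing batch is kept as well
    return [subset[i:i + batch_size] for i in range(0, len(subset), batch_size)]

batches = select_calibration_subset(list(range(10000)), fraction=0.10, batch_size=64)
print(len(batches))  # 1000 sampled images in batches of 64 -> 16 batches
```

A larger, more diverse calibration subset generally gives histograms that better match the true activation distributions, at the cost of longer calibration time.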
Experiment with different calibration methods, as this could be the main factor affecting your results. You can test various calibration approaches without regenerating histograms to identify which yields the best accuracy. Line 477 in c293d1f
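The reason different calibration methods can be tested without regenerating histograms is that the histogram is collected once and only the clipping range (amax) is re-derived from it. A minimal pure-Python sketch of that idea, assuming a simple counts/bin-edges histogram (the real repo relies on pytorch-quantization's histogram calibrators, not this function):

```python
def amax_from_histogram(counts, bin_edges, method="max", percentile=99.99):
    """Derive the activation clipping range (amax) from an already-collected
    histogram, so "max" and "percentile" calibration can be compared without
    re-running data through the model. Sketch only, not the repo's API."""
    total = sum(counts)
    if method == "max":
        # clip at the largest observed activation
        return bin_edges[-1]
    if method == "percentile":
        # clip where the cumulative mass first reaches the target percentile
        target = total * percentile / 100.0
        running = 0
        for i, c in enumerate(counts):
            running += c
            if running >= target:
                return bin_edges[i + 1]  # right edge of the deciding bin
        return bin_edges[-1]
    raise ValueError(f"unknown method: {method}")
```

Percentile-based clipping discards rare outlier activations, which often improves quantized accuracy because the usable INT8 range is not wasted on a few extreme values.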
For fine-tuning optimization, you can adjust the learning rate and other hyperparameters to improve the quantization results: Line 482 in c293d1f
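A common starting point when tuning the QAT fine-tuning learning rate is to begin well below the original training LR and decay toward zero. The sketch below is an assumed example schedule (the function name, the 1/100 scale, and linear decay are illustrative choices, not values from the repo):

```python
def qat_lr(step, total_steps, base_lr=0.01, qat_scale=0.01):
    """Linear-decay LR schedule for QAT fine-tuning. Starts at
    base_lr * qat_scale (here 1/100 of the original LR, an illustrative
    default) and decays linearly to 0 over total_steps."""
    start = base_lr * qat_scale
    return start * (1.0 - step / total_steps)
```

Because the fake-quantized model starts from already-trained weights, large learning rates tend to destroy the calibrated ranges; a short run (a few epochs) at a small LR is usually enough to recover most of the quantization loss.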
Ok, I was going to go down the road of adjusting the hyperparameters (LR, epochs, and so on) as if it were a standard training run. I didn't see anything about calibrate_model in your docs. Our model responds well to standard settings, as if it were a typical COCO dataset.
This project was a research project that I executed, and although some parameters are documented, many low-level parameters are not.
I uncommented the line you suggested and am looking at the results again. I now see that the QAT model is essentially on par with the "origin" model, BUT the scores of both are substantially lower than the fp32 model. Is this due to reparameterization? Here are the results after QAT:
However, the original model actually has a mAP50:95 of 0.10112. This is still about a 50% reduction in mAP50:95 from the real original (un-reparameterized) model. So it appears that the majority of the loss is in the reparameterized model. Is this what you've seen as well? If so, how do I gain the accuracy back? What's going on in reparameterization? Thanks for your help.
Looking into this issue to see if it fixes my accuracy loss: WongKinYiu/yolov9#198. Turns out there were no discrepancies with my settings. Trying a converted model without model.half() and a gelan-s.yaml model.
Here is the problem: complex datasets are those with very similar classes or very small object sizes. Try to solve the problem by increasing the network resolution. But you definitely have a serious issue with this model performing at 10% mAP. Evaluate whether your dataset has inherent complexity (similar classes, small objects). Performance that low indicates fundamental issues that need to be addressed before considering model optimization techniques.
@levipereira Ok, thanks Levi. |
I'm getting about 50% lower mAP scores on a custom dataset. You've done a great job on this repo, but one thing it lacks is guidance on how to improve low accuracy.
How do I change the number of epochs? Any suggestions for tuning the LR, and so on?
Thanks in advance,
Josh