Fix: Managed the import of torch.amp to be compatible with all PyTorch versions #13487
base: master
Conversation
All Contributors have signed the CLA. ✅
👋 Hello @paraglondhe098, thank you for submitting a 🚀 PR to Ultralytics YOLOv5!
To reproduce and understand the issue you're addressing more clearly, a Minimum Reproducible Example (MRE) demonstrating the AMP warning context would be useful for the reviewers. If you can provide an example of the exact conditions under which the error occurs (e.g., a specific PyTorch version, configuration details, or dataset), it will aid in validation and testing. For more information, refer to our Contributing Guide. If questions come up or further clarification is needed, feel free to add comments here. This looks like a solid and impactful improvement - thank you for contributing to the community! 🚀✨
I have read the CLA Document and I sign the CLA
Changes made
While training YOLOv5, I encountered a deprecation warning about torch.cuda.amp:
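The original warning text is not preserved above; the following is a hypothetical reproduction of the kind of warning meant here, assuming a recent PyTorch release (e.g. 2.4+) where the legacy CUDA-specific AMP API is deprecated in favor of torch.amp:

```python
import torch

# On recent PyTorch, merely constructing the legacy torch.cuda.amp.autocast
# context emits a FutureWarning pointing users to torch.amp.autocast("cuda", ...).
# enabled=False keeps this runnable even on CPU-only machines.
with torch.cuda.amp.autocast(enabled=False):
    pass
```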
To address this, I updated the import in train.py with a try-except block so it stays compatible with all PyTorch versions listed in requirements.txt:
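The exact diff is not reproduced here; below is a minimal sketch of such a try-except import shim. The names NEW_AMP_API and amp_autocast are illustrative, and use_amp follows the rename described in this PR:

```python
import torch

try:
    # Recent PyTorch releases expose a device-agnostic API under torch.amp
    from torch.amp import GradScaler, autocast
    NEW_AMP_API = True
except ImportError:
    # Older releases only provide the CUDA-specific torch.cuda.amp API
    from torch.cuda.amp import GradScaler, autocast
    NEW_AMP_API = False

use_amp = torch.cuda.is_available()  # boolean flag, renamed from `amp` per the PR

# The new API takes the device type explicitly; the old one is CUDA-only
scaler = GradScaler("cuda", enabled=use_amp) if NEW_AMP_API else GradScaler(enabled=use_amp)

def amp_autocast():
    """Return an autocast context that works on either API."""
    return autocast("cuda", enabled=use_amp) if NEW_AMP_API else autocast(enabled=use_amp)
```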
Additionally, the boolean variable amp (indicating whether to use automatic mixed precision training) was renamed to use_amp for clarity, since amp is also the module name.
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improves AMP (Automatic Mixed Precision) integration with enhanced compatibility and functionality.
📊 Key Changes
- Falls back to torch.cuda.amp if torch.amp is not available (ensures compatibility across PyTorch versions).
- Replaced the amp variable with use_amp for better clarity and consistency.
- Updated gradient scaling (GradScaler) and automatic casting (autocast) for seamless device type support (e.g., CPU, GPU).
🎯 Purpose & Impact
- Ensures training runs cleanly across PyTorch versions, whether or not they provide torch.amp.
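For reviewers unfamiliar with the AMP API, here is a generic illustration (not the actual train.py code) of how GradScaler and autocast combine in a training step, assuming a recent PyTorch that exposes both under torch.amp:

```python
import torch
from torch.amp import GradScaler, autocast  # assumes torch.amp provides both (recent PyTorch)

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # mixed precision only enabled on GPU here

model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
scaler = GradScaler("cuda", enabled=use_amp)

images = torch.randn(4, 10, device=device)
targets = torch.randint(0, 2, (4,), device=device)

optimizer.zero_grad()
with autocast("cuda", enabled=use_amp):      # forward pass in mixed precision
    loss = criterion(model(images), targets)

scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)         # unscale gradients and apply the optimizer step
scaler.update()                # adjust the scale factor for the next iteration
```

With use_amp=False (e.g. on CPU), the scaler and autocast context become no-ops, so the same code path works on all device types, which is the "seamless device type support" the summary refers to.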