Replies: 2 comments
-
👋 Hello @animi10, thank you for your detailed post and for exploring innovative directions with Ultralytics YOLO 🚀! Implementing Monte Carlo DropBlock for epistemic uncertainty estimation sounds like a fascinating project 💡. We encourage you to review the Ultralytics Docs for helpful information on custom predictors and on working with the YOLO inference pipeline. Many of the modules in YOLO are designed with flexibility in mind, so you may find some useful starting points there.

🥷 Custom Inference Workflow
If you believe there might be an issue in the current implementation that prevents this, could you provide a minimum reproducible example (MRE)? This will help us analyze and potentially pinpoint any blockers or offer precise guidance 🛠️.

🚀 Upgrade
Please verify that you're using the latest version of the ultralytics package:
pip install -U ultralytics

Additional Resources
If you want to discuss ideas with the community or collaborate:

Environments and Testing
We recommend testing your modifications in a reproducible environment such as:
Finally, please note that this is an automated response 🤖; an Ultralytics engineer will review your discussion and provide additional feedback soon. Good luck with your implementation, and thank you for contributing to the innovation of YOLO! 🚀
-
@animi10 your approach of creating a custom predictor inherited from the default predictor class is a good one. To work with raw outputs, you can utilize the model's forward pass directly, before postprocessing. For guidance, you can refer to the predictor source code.
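The reply above lost its inline code references, but the overall pattern it describes can be sketched in plain PyTorch (this is not the Ultralytics predictor API, and the function name mc_forward is hypothetical): run several stochastic forward passes over the raw model outputs and use the per-element variance across passes as an epistemic uncertainty estimate.

```python
import torch


@torch.no_grad()
def mc_forward(model, x, n_samples=20):
    """Monte Carlo sampling sketch: stack n stochastic forward passes,
    then return the mean as the prediction and the variance across
    passes as an epistemic uncertainty estimate. Assumes the dropout
    layers inside `model` are active at call time."""
    outs = torch.stack([model(x) for _ in range(n_samples)])
    return outs.mean(dim=0), outs.var(dim=0)
```

In a custom predictor, this loop would replace the single forward pass, with the variance tensor carried alongside the usual detections.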
-
Hello!
I’m working on integrating a Monte Carlo DropBlock into the YOLO11 architecture to estimate epistemic uncertainty during inference.
What I’ve Done:
Module Integration:
I’ve implemented the MCDropBlock class in block.py and integrated it into the .yaml configuration file.
I’ve successfully trained the model with my implementation. Here’s a snippet of the MCDropBlock implementation:
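The snippet itself was not preserved in the thread. As a rough illustration only (not the poster's actual code), here is a minimal, simplified DropBlock-style module that deliberately omits the usual `if not self.training` early-return, so it stays stochastic even when the model is in eval mode:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MCDropBlock(nn.Module):
    """Simplified DropBlock2d sketch that remains stochastic in eval
    mode, enabling Monte Carlo sampling at inference time."""

    def __init__(self, drop_prob: float = 0.1, block_size: int = 3):
        super().__init__()
        self.drop_prob = drop_prob
        self.block_size = block_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # NOTE: no self.training check here, by design.
        if self.drop_prob == 0.0:
            return x
        # Per-position probability of seeding a dropped block.
        gamma = self.drop_prob / (self.block_size ** 2)
        seeds = torch.bernoulli(torch.full_like(x, gamma))
        # Grow each seed into a block_size x block_size zeroed region.
        mask = 1.0 - F.max_pool2d(
            seeds, kernel_size=self.block_size, stride=1,
            padding=self.block_size // 2)
        # Rescale so the expected activation magnitude is preserved.
        return x * mask * mask.numel() / mask.sum().clamp(min=1.0)
```

A production DropBlock usually also schedules drop_prob during training; that detail is omitted here for brevity.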
Epistemic Uncertainty Estimation:
The epistemic uncertainty estimation needs to occur at inference time by performing multiple stochastic forward passes (Monte Carlo sampling).
To do this, I need the model to remain in training mode during inference to allow dropout.
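One common way to achieve this without putting the whole model in training mode is to keep the model in eval mode, so that layers such as BatchNorm use their frozen running statistics, and switch only the dropout-style modules back to train mode. This is a generic PyTorch sketch (the helper name enable_mc_dropout is hypothetical, not an Ultralytics API); a custom block that ignores self.training entirely would not need it, but it re-enables the built-in torch.nn dropout layers:

```python
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> nn.Module:
    """Keep BatchNorm and similar layers in eval mode, but flip only the
    dropout-style layers back to train mode so they remain stochastic
    during inference (the basis of MC dropout sampling)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()
    return model
```

Calling this once after loading the weights, and before the repeated forward passes, keeps the deterministic layers stable across samples so the observed variance comes only from the dropout masks.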
My Problem:
Inference Workflow:
Flexibility:
What I Need Help With:
Advice on Strategy:
Raw Model Outputs:
Any insights or suggestions would be greatly appreciated!