This is a trick to increase mAP, which is used in the alexeyAB project, but I think it may produce some false bboxes in real cases.
The first phase is the original YOLOv3 implementation: it only regresses the single anchor with the best IoU against the GT box. The second phase then searches the anchor pool and finds other anchors (excluding the first-phase anchor) that can also be regressed toward the same GT, accumulating their deltas. If the size of the GT box falls between two (or more) anchors, this is helpful, especially for small objects, since it gives them more chances to be detected.
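To make the two phases concrete, here is a minimal sketch of the matching logic written from the description above, not code taken from either repository; Box, shape_iou(), and match_anchors() are made-up names, and the shape-only IoU (widths and heights compared with centers aligned) follows the usual YOLOv3 convention.

```cpp
#include <algorithm>
#include <vector>

struct Box { float w, h; };  // anchors and GTs compared by shape only

float shape_iou(const Box& a, const Box& b) {
  float inter = std::min(a.w, b.w) * std::min(a.h, b.h);
  return inter / (a.w * a.h + b.w * b.h - inter);
}

// Returns the indices of anchors treated as positive for one GT box.
std::vector<int> match_anchors(const Box& gt, const std::vector<Box>& anchors,
                               float iou_thresh, bool multi_anchor) {
  int best = 0;
  for (int i = 1; i < static_cast<int>(anchors.size()); ++i)
    if (shape_iou(gt, anchors[i]) > shape_iou(gt, anchors[best])) best = i;

  std::vector<int> positives;
  positives.push_back(best);  // phase 1 (original YOLOv3): best-IoU anchor only
  if (multi_anchor) {
    // phase 2 (AlexeyAB-style): every other anchor above the IoU threshold
    // also gets a regression delta for this GT.
    for (int i = 0; i < static_cast<int>(anchors.size()); ++i)
      if (i != best && shape_iou(gt, anchors[i]) > iou_thresh)
        positives.push_back(i);
  }
  return positives;
}
```

With multi_anchor off, each GT contributes exactly one positive anchor; with it on, a GT whose shape falls between two anchors can contribute both.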
Thanks for the reply, I figured it out with your help. I think the main point is to increase the number of positive samples.
In the original implementation, an image with N GT boxes yields only 3 * N positive anchors in total (each loss layer assigns one best-matching anchor per GT).
With the alexeyAB implementation, the extra anchors that match a GT well enough are also added as positives.
But I noticed that alexeyAB's implementation also accumulates the MSE gradients, while yours does not. Is there any reason for that?
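For anyone else reading, a tiny illustration of what "accumulate" means here, with made-up names (delta, idx, grad); this is not code from either project, just the two ways a second match can write into the same delta slot:

```cpp
#include <vector>

// If a cell/anchor slot is matched more than once, the new gradient can
// either be added to the existing one or overwrite it.
void write_delta_accumulate(std::vector<float>& delta, int idx, float grad) {
  delta[idx] += grad;  // later matches add on top of earlier ones
}

void write_delta_overwrite(std::vector<float>& delta, int idx, float grad) {
  delta[idx] = grad;   // later matches replace earlier ones
}
```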
Hi, Eric. Thanks for your amazing work.
But I have a small question about the code...
MobileNet-YOLO/src/caffe/layers/yolov3_layer.cpp, lines 647 to 678 at commit 6a4db28
At line 674, we find the anchor that matches the GT best, treat it as a positive sample, and compute its gradient.
MobileNet-YOLO/src/caffe/layers/yolov3_layer.cpp, lines 680 to 698 at commit 6a4db28
But at line 680, we iterate over the remaining anchors, and any anchor whose IoU with the GT is larger than iou_thresh_ is also treated as a positive sample.
So here is my confusion: why don't we just iterate over all the anchors once and compute gradients for those whose IoU is larger than the threshold?
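To make the question concrete, here is a sketch of the single-pass loop I have in mind, with the per-anchor IoUs precomputed; the names are placeholders, not code from the repo. The only behavioural difference I can see is that a GT whose best anchor falls below iou_thresh_ would get no positive anchor at all under this version, whereas the two-phase code always keeps the best one.

```cpp
#include <vector>

// Single-pass matching: every anchor whose (precomputed) IoU with the GT
// exceeds the threshold is marked positive. Unlike the two-phase code, a GT
// whose best anchor is below iou_thresh would get no positive anchor here.
std::vector<bool> single_pass_positives(const std::vector<float>& ious,
                                        float iou_thresh) {
  std::vector<bool> positive(ious.size(), false);
  for (size_t i = 0; i < ious.size(); ++i)
    positive[i] = ious[i] > iou_thresh;
  return positive;
}
```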