loss_fcos_ctr is always around 0.6 #48
Comments
I also ran into the same problem with my own dataset. How did you solve it?
Actually, I haven't solved this problem yet. But I found that the predicted centerness is only used for calculating the centerness loss; it is not involved in NMS. The box regression loss is weighted by the target centerness. I think the source of the problem is probably here, but I need more experiments to verify this.
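For context, here is a minimal sketch (not the CenterMask code itself) of how FCOS-style heads derive the centerness target from the l/t/r/b regression targets and supervise it with binary cross-entropy; the tensor shapes and names here are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def centerness_targets(reg_targets):
    """Soft centerness targets from per-location (l, t, r, b) box-regression
    targets, as defined in the FCOS paper; values lie in (0, 1]."""
    left_right = reg_targets[:, [0, 2]]
    top_bottom = reg_targets[:, [1, 3]]
    ctr = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * \
          (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])
    return torch.sqrt(ctr)

def centerness_loss(ctr_logits, reg_targets):
    """BCE between predicted centerness logits and the soft targets above."""
    targets = centerness_targets(reg_targets)
    return F.binary_cross_entropy_with_logits(ctr_logits, targets)
```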
I have the same problem, even though I overfit to a single batch (~60 images) and got a high AP. Maybe the authors can share the loss curves in TensorBoard format so that we can compare our loss curves against theirs?
I have the same issue. Can we just ignore this loss?
Same problem. I guess it's a bug in FCOS.
I implemented BlendMask and CenterNet, both built on FCOS, on my own dataset and got the same problem. I am still new to FCOS; does anyone have a good suggestion?
Answer: tianzhi0549/FCOS#15 (comment). |
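The gist, as I understand that answer: the centerness target is a soft value in (0, 1], not a 0/1 label, so the binary cross-entropy has a nonzero floor (the entropy of the targets) even for a perfect predictor, and in practice that floor sits around 0.6. A small sanity check, using uniform random values as a stand-in for the real centerness-target distribution:

```python
import torch
import torch.nn.functional as F

# "Perfect" predictions (pred == target) still give a nonzero BCE,
# because the targets are soft values in (0, 1), not hard 0/1 labels.
targets = torch.rand(100000).clamp(1e-4, 1 - 1e-4)  # stand-in for centerness targets
floor = F.binary_cross_entropy(targets, targets)
print(floor.item())  # ~0.5 for uniform targets; typical centerness
                     # distributions put the floor around 0.6
```

So a flat loss_fcos_ctr near 0.6 is expected and not by itself a sign of a training bug.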
These days I trained the CenterMask (lite_mv2 and vov99) models on COCO and on my own dataset. However, training does not make loss_fcos_ctr go down: it stays around 0.6 even when the total loss gets down to about 0.7. Both training runs gave the same result, and I checked my data again to make sure it is correct. I don't know how to solve this problem. Has anyone else run into it?