Reproducing your results #1
Hi @PiggyGenius, good questions, here are your answers!
import torch

def xview_class_weights(indices):  # weights of each class in the training set, normalized to mu = 1
    weights = 1 / torch.FloatTensor(
        [74, 364, 713, 71, 2925, 209767, 6925, 1101, 3612, 12134, 5871, 3640, 860, 4062, 895, 149, 174, 17, 1624, 1846, 125, 122, 124, 662, 1452, 697, 222, 190, 786, 200, 450, 295, 79, 205, 156, 181, 70, 64, 337, 1352, 336, 78, 628, 841, 287, 83, 702, 1177, 313865, 195, 1081, 882, 1059, 4175, 123, 1700, 2317, 1579, 368, 85])
    weights /= weights.sum()
    return weights[indices]
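The idea behind that function can be sketched in plain Python: each class weight is the reciprocal of its instance count (so rare classes are weighted up), then the weights are normalized. The counts below are just the first five values from the list above, used for illustration.

```python
# Sketch of the inverse-frequency class weighting in plain Python
# (first five class counts from the list above, for illustration only).
counts = [74, 364, 713, 71, 2925]       # instances per class in the training set
weights = [1.0 / c for c in counts]     # rarer classes get larger weights
total = sum(weights)
weights = [w / total for w in weights]  # normalize so the weights sum to 1
```

The rarest class (71 instances) ends up with the largest weight.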
To reproduce the results, you should just be able to start training. You should notice within the first few epochs whether your results are similar, as the results written to results.txt should match the image on the repo home page.
@glenn-jocher I am also trying to reproduce the results. I have the following graphs for precision and recall. The behavior over the last 100 epochs is not in line with the behavior you show on the web: I am getting mAP = 0.20 on the training set, compared to the 0.30 you report. Any comments? I also see that you turned off the CUDA flag in detect.py; any special reason for this? It is pretty slow on CPU. Separately, I am trying to reduce the number of classes for my experiments. You have 61 classes with labels for them, and I want to reduce this to 10 classes. I see that you have:

def xview_classes2indices(classes):  # remap xview classes 11-94 to 0-61
    indices = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11, 12, 13, 14,
               15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1, 29, 30, 31, 32, 33, 34,
               35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46, 47, 48, 49, -1, 50, 51, -1, 52, -1,
               -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]
    return [indices[int(c)] for c in classes]

My understanding is that I should change the indices of all unnecessary classes to -1 and they will then be filtered out. Am I on the right track, or do I have to do more?
@abidmalikwaterloo you're free to set the CUDA flag as you like. The graphs look good; your specific results may vary, as I was making changes to the repository after uploading those results to try to optimize it. Yes, if you want to use custom classes and data you will need to redefine the relevant sections of the code, like the one you highlighted. There are many ways to do this. The purpose of the function you see there is to support arbitrary class numbers. You do not need it if your classes are numbered consecutively (0, 1, 2, 3, etc.); in xView the class IDs skip numbers, e.g. 5, 6, 17, 20.
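Since the class IDs skip numbers, one way to build the dense remapping is to enumerate the sorted unique IDs. A minimal plain-Python sketch (the `raw_ids` list below is hypothetical, not xView data):

```python
# Sketch: when class IDs skip numbers (e.g. 5, 6, 17, 20), build the
# dense 0..N-1 remapping automatically from the sorted unique IDs.
raw_ids = [5, 6, 17, 20, 6, 5, 17]  # hypothetical labels with gaps
id_to_index = {c: i for i, c in enumerate(sorted(set(raw_ids)))}
dense = [id_to_index[c] for c in raw_ids]
```

This reproduces the effect of the hand-written `indices` table without listing every ID explicitly.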
@glenn-jocher I am playing with the parameters but unable to reach mAP = 0.16 on a validation set (images not included in training). I am using 791 images for training and 85 for validation, and the max mAP I get is 0.09. Do you have specific parameter values I could use to get the mAP close to 0.16?
@PiggyGenius Were you able to get mAP = 0.16? What parameters did you use for your architecture?
Be advised that the https://github.com/ultralytics/xview-yolov3 repository is no longer under active development. We recommend you use https://github.com/ultralytics/yolov3 instead, our main YOLOv3 repository.
@abidmalikwaterloo I am also trying to train on a subset of the data, around 9-10 classes. Could you please tell me if you were successful in doing it, and how?
@sawhney-medha please be advised that the https://github.com/ultralytics/xview-yolov3 repository is no longer under active development. We recommend you use https://github.com/ultralytics/yolov3 instead, our main YOLOv3 repository.
Hi, I want to use resized xView images: decrease their resolution first and then run your model on them. I believe you are cropping patches from the original images, which are around 3000 x 3000. I want to do the same with 1000 x 1000 images. Please help. Thanks
@im-tanyasuri you can achieve this by resizing the images with any image processing library, such as OpenCV or PIL, before feeding them into the model.
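As a minimal sketch with PIL (Pillow), one of the libraries mentioned above. The `downscale` helper name, the file paths, and the 1000 x 1000 target size are all assumptions for illustration, not repo code:

```python
from PIL import Image

# Hypothetical pre-processing step: downscale a large xView image to
# 1000 x 1000 before handing it to the training pipeline.
def downscale(src_path, dst_path, size=(1000, 1000)):
    img = Image.open(src_path)
    img = img.resize(size, Image.BILINEAR)  # bilinear resampling when shrinking
    img.save(dst_path)
```

Note that shrinking 3000 x 3000 images to 1000 x 1000 makes the already-small xView objects three times smaller in pixels, which will likely hurt detection of the smallest classes.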
Hi,
I am working on a similar project (xView and YOLO), and I would like to reproduce your results. I have a few questions:
Finally, if you can mention or explain anything else that you think could help someone reproduce your results, it would be really helpful!