
13.7% mAP #7

Open
choasup opened this issue Jan 4, 2017 · 13 comments
choasup commented Jan 4, 2017

I followed your instructions, but I only get 13.7% mAP on test2007. If I change the test source to the trainval set, I get 75% mAP.

Harick1 (Owner) commented Jan 7, 2017

@choasup I don't know what your problem is.

Here is my model.
link: https://pan.baidu.com/s/1jHAN6xK.
password: kvee

You can try.


AlexofNTHU commented Nov 17, 2017

Sorry guys, I made a mistake in the last-layer interpretation between this repo and the other repo converted from the original Darknet YOLO. The following statistics I posted are meaningless.

@yeahkun
May I ask what your AP is for each class?
I downloaded your model and ran it with this repo.
The results are far from 56 mAP. Can you tell me how high you got for each class?

AP for class #0 = 0.257931
AP for class #1 = 0.300354
AP for class #2 = 0.353628
AP for class #3 = 0.173619
AP for class #4 = 0.253685
AP for class #5 = 0.169883
AP for class #6 = 0.374064
AP for class #7 = 0.560878
AP for class #8 = 0.228058
AP for class #9 = 0.310909
AP for class #10 = 0.318698
AP for class #11 = 0.510346
AP for class #12 = 0.437706
AP for class #13 = 0.455426
AP for class #14 = 0.424733
AP for class #15 = 0.208002
AP for class #16 = 0.356841
AP for class #17 = 0.392591
AP for class #18 = 0.365005
AP for class #19 = 0.393977

@blueardour

@AlexofNTHU Hi, may I ask which script you used to compute the AP? Could you share an example?

@Citroning

@blueardour @AlexofNTHU
Excuse me, could you please tell me how to compute AP, recall, etc. in caffe-yolo? Thank you so much!

@guiyang882

After training for 300K iterations, I get 0.58 mAP on VOC07.

AlexofNTHU commented Dec 1, 2017

@liuguiyangnwpu
I have achieved 0.58 mAP. Please ignore this post!

I saw your other post. What he provided is a conversion between Darknet and Caffe. I ran the conversion and performed the mAP evaluation with the shell script provided by this repo; the mAP wasn't as good as the paper.

I also trained from scratch using this repo. The original training stops at 32000 iterations, per the settings in the solver below. However, you mentioned that you got 0.58 mAP at 300K iterations, which is almost 10 times the original solver's max_iter. Is that a typo? I rarely benefit from training for more than 50K iterations.

Also, in this repo's implementation the CNN is quite different from the original YOLO: the base net is GoogLeNet. Without carefully fine-tuning the hyperparameters, the mAP might still underperform. If you really ran 300K iterations, that might explain why I haven't reached 0.58 mAP yet. However, I did run the trained model provided by @yeahkun in this post, using the mAP evaluation shell script from this repo, and the mAP was still significantly lower than 0.58.

lr_policy: "multifixed"
stagelr: 0.001
stagelr: 0.01
stagelr: 0.001
stagelr: 0.0001
stageiter: 520
stageiter: 16000
stageiter: 24000
stageiter: 32000
max_iter: 32000

@guiyang882

@AlexofNTHU This is my solver.prototxt

net: "gnet_train.prototxt"
test_iter: 4952
test_interval: 10000
test_initialization: false
display: 200
average_loss: 200
lr_policy: "multifixed"
stagelr: 0.001
stagelr: 0.01
stagelr: 0.001
stagelr: 0.0001
stageiter: 520
stageiter: 16000
stageiter: 24000
stageiter: 32000
max_iter: 300000
momentum: 0.9
weight_decay: 0.0005
snapshot: 2000
snapshot_prefix: "./models/gnet_yolo"
solver_mode: GPU

This is my test.sh

#!/usr/bin/env sh

CAFFE_HOME=../..

PROTO=./gnet_test.prototxt
MODEL=$1
ITER=1651
GPU_ID=0

$CAFFE_HOME/build/tools/test_detection \
    --model=$PROTO --iterations=$ITER \
    --weights=$MODEL --gpu=$GPU_ID

The test result is

I1201 01:56:00.598809 12009 detection_loss_layer.cpp:195] loss: 1.66142 class_loss: 0.407502 obj_loss: 0.644707 noobj_loss: 0.00239748 coord_loss: 0.527043 area_loss: 0.0797678
I1201 01:56:00.598836 12009 detection_loss_layer.cpp:198] avg_iou: 0.652168 avg_obj: 0.25002 avg_no_obj: 0.0194172 avg_cls: 0.999876 avg_pos_cls: 0.609646
I1201 01:56:00.604326 12009 test_detection.cpp:223] iter_loss: 1.66142
I1201 01:56:00.604370 12009 test_detection.cpp:253] Running Iteration 1650
I1201 01:56:00.604393 12009 test_detection.cpp:256] Total time: 69276.3 ms.
I1201 01:56:00.701052 12009 test_detection.cpp:286]     Test net output #1: eval_det = 0.55406
I1201 01:56:00.701107 12009 test_detection.cpp:289] Loss: 1.19133
I1201 01:56:00.701126 12009 test_detection.cpp:290] Model: ./models/gnet_yolo_iter_184000.caffemodel

@guiyang882

@AlexofNTHU Another question: how do I get the AP for each class? Could you show me your code?
Also, is the caffe-yolo model an Inception v1 model, not the original Darknet YOLOv1?

@AlexofNTHU

@liuguiyangnwpu
Inside test_detection.cpp, in the for loop around line 281, simply printing AP[] gives you what you want.
Yes, the caffe-yolo model is Inception v1, not the original Darknet YOLOv1.
According to the original paper, the top-5 accuracy of YOLO's base net is almost equivalent to Inception v1's.
However, when it is applied to a YOLO-like detector, the mAP is not that good.
In my opinion, the cause is the data augmentation strategy, which this Caffe fork doesn't provide.

for (int j = 0; j < num_class; ++j) {
  if (!num_gt[j]) {
    LOG(WARNING) << "Ground truth label number is 0: " << j;
    continue;
  } else {
    LOG(WARNING) << "Ground truth label number class=" << j << " is " << num_gt[j];
  }
  if (true_pos.find(j) == true_pos.end()) {
    LOG(WARNING) << "Missing true_pos for label: " << j;
    continue;
  }
  if (false_pos.find(j) == false_pos.end()) {
    LOG(WARNING) << "Missing false_pos for label: " << j;
    continue;
  }
  string ap_version = "11point";
  vector<float> prec, rec;
  // num_gt[j] is the ground-truth count for each class
  ComputeAP(true_pos[j], num_gt[j], false_pos[j], ap_version,
            &prec, &rec, &(APs[j]));
  mAP += APs[j];
  printf(" AP for class #%i = %f\n", j, APs[j]);
}
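The "11point" ap_version used here is the classic VOC07 protocol: precision is sampled at the 11 recall thresholds 0.0, 0.1, ..., 1.0 (taking the best precision at any recall at or above each threshold) and averaged. A standalone sketch of that computation (a hypothetical ElevenPointAP helper, not the repo's ComputeAP signature):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// VOC07-style 11-point interpolated average precision: for each recall
// threshold r in {0.0, 0.1, ..., 1.0}, take the maximum precision seen
// at any recall >= r, then average the 11 sampled values.
// prec and rec are the parallel precision/recall curves.
float ElevenPointAP(const std::vector<float>& prec,
                    const std::vector<float>& rec) {
  float ap = 0.f;
  for (int t = 0; t <= 10; ++t) {
    const float r = t / 10.f;
    float p_max = 0.f;
    for (size_t i = 0; i < rec.size(); ++i) {
      if (rec[i] >= r) p_max = std::max(p_max, prec[i]);
    }
    ap += p_max / 11.f;
  }
  return ap;
}
```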

@guiyang882

@AlexofNTHU Thank you for your code.
I get the following result:

I1201 07:38:45.410483 12370 test_detection.cpp:253] Running Iteration 1650
I1201 07:38:45.410543 12370 test_detection.cpp:256] Total time: 67962.7 ms.
I1201 07:38:45.430915 12370 test_detection.cpp:283] AP for class #0 = 0.639587
I1201 07:38:45.431941 12370 test_detection.cpp:283] AP for class #1 = 0.672639
I1201 07:38:45.435679 12370 test_detection.cpp:283] AP for class #2 = 0.491131
I1201 07:38:45.438500 12370 test_detection.cpp:283] AP for class #3 = 0.394281
I1201 07:38:45.440626 12370 test_detection.cpp:283] AP for class #4 = 0.259159
I1201 07:38:45.441391 12370 test_detection.cpp:283] AP for class #5 = 0.609456
I1201 07:38:45.448118 12370 test_detection.cpp:283] AP for class #6 = 0.634853
I1201 07:38:45.449467 12370 test_detection.cpp:283] AP for class #7 = 0.720433
I1201 07:38:45.458763 12370 test_detection.cpp:283] AP for class #8 = 0.316017
I1201 07:38:45.459609 12370 test_detection.cpp:283] AP for class #9 = 0.523982
I1201 07:38:45.460531 12370 test_detection.cpp:283] AP for class #10 = 0.49354
I1201 07:38:45.461695 12370 test_detection.cpp:283] AP for class #11 = 0.603251
I1201 07:38:45.462893 12370 test_detection.cpp:283] AP for class #12 = 0.732474
I1201 07:38:45.463590 12370 test_detection.cpp:283] AP for class #13 = 0.62081
I1201 07:38:45.492758 12370 test_detection.cpp:283] AP for class #14 = 0.563406
I1201 07:38:45.498705 12370 test_detection.cpp:283] AP for class #15 = 0.267277
I1201 07:38:45.500614 12370 test_detection.cpp:283] AP for class #16 = 0.529345
I1201 07:38:45.502053 12370 test_detection.cpp:283] AP for class #17 = 0.540388
I1201 07:38:45.503828 12370 test_detection.cpp:283] AP for class #18 = 0.702365
I1201 07:38:45.507661 12370 test_detection.cpp:283] AP for class #19 = 0.550069
I1201 07:38:45.507673 12370 test_detection.cpp:287]     Test net output #1: eval_det = 0.543223
I1201 07:38:45.507697 12370 test_detection.cpp:290] Loss: 1.20107
I1201 07:38:45.507704 12370 test_detection.cpp:291] Model: ./models/gnet_yolo_iter_256000.caffemodel

I think you are right, data augmentation is very important. I will port the model to the Caffe SSD branch.
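For illustration, one of the simplest augmentations SSD-style training relies on is a horizontal flip, which has to mirror the normalized box coordinates along with the image. A generic sketch (not code from this repo or from the SSD branch):

```cpp
#include <cassert>

// Normalized bounding box with coordinates in [0, 1].
struct Box { float xmin, ymin, xmax, ymax; };

// Mirror a box for a horizontal image flip: each x becomes 1 - x,
// and the two x edges swap roles so xmin <= xmax still holds.
Box FlipHorizontal(const Box& b) {
  return Box{1.f - b.xmax, b.ymin, 1.f - b.xmin, b.ymax};
}
```

The SSD data layer additionally applies random crops and photometric distortion; the flip alone already roughly doubles the effective training set.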

@AlexofNTHU

@liuguiyangnwpu
I have finally achieved 0.58. You could try tuning some hyperparameters.
However, there are already YOLOv2-Caffe and YOLOv2-PyTorch.
We should move on! ^^

@guiyang882

@AlexofNTHU 👍🤝


wct1996 commented Mar 18, 2019

@liuguiyangnwpu
I have finally achieved 0.58. You could try tuning some hyperparameters.
However, there are already YOLOv2-Caffe and YOLOv2-PyTorch.
We should move on! ^^

@AlexofNTHU
Increasing the training iterations under the original parameters does not improve the mAP (45%). Would you share the hyperparameters you changed?
