difference of training process of mmdetection3d and openpcdet. #1968
Unanswered
AndyYuan96
asked this question in Q&A
Replies: 1 comment 1 reply
-
Sorry for the late reply. Could you please give us some more detailed information so that we can look into the cause?
-
Hi, recently I ran into a problem using mmdetection3d on my own dataset. I have a model trained with OpenPCDet on this dataset, and I want to reproduce it with mmdetection3d. I made sure the model, augmentation (without GT-aug), data, and loss are all the same, but I could not reproduce the performance. So I studied the optimizer code in OpenPCDet, and I wrote a new optimizer in mmdetection plus new lr and momentum hooks that generate the same lr and momentum as OpenPCDet's optimizer (borrowed from fast.ai) during training. Comparing the TensorBoard logs, the lr and momentum match OpenPCDet's at every iteration, but I still cannot reproduce the performance.
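For reference, the lr/momentum rule I tried to match can be sketched like this. This is only a minimal sketch of a fast.ai-style one-cycle schedule, assuming cosine annealing in both phases; `lr_max`, `moms`, `div_factor`, and `pct_start` are illustrative values, not my actual config:

```python
import math

def cos_anneal(start, end, pct):
    # Cosine interpolation from `start` to `end` as pct goes 0 -> 1.
    return end + (start - end) / 2.0 * (math.cos(math.pi * pct) + 1.0)

def one_cycle(step, total_steps, lr_max=0.003, moms=(0.95, 0.85),
              div_factor=10.0, pct_start=0.4):
    # Phase 1 (first pct_start of training): lr rises from lr_max/div_factor
    # to lr_max while momentum falls from moms[0] to moms[1].
    # Phase 2: lr anneals back down while momentum rises back up.
    lr_low = lr_max / div_factor
    split = pct_start * total_steps
    if step < split:
        pct = step / split
        lr = cos_anneal(lr_low, lr_max, pct)
        mom = cos_anneal(moms[0], moms[1], pct)
    else:
        pct = (step - split) / (total_steps - split)
        lr = cos_anneal(lr_max, lr_low, pct)
        mom = cos_anneal(moms[1], moms[0], pct)
    return lr, mom
```

With a schedule like this plugged into a hook, the lr/momentum written to TensorBoard at every iteration matches what OpenPCDet logs.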
I also noticed something in the TensorBoard logs: the fluctuation of OpenPCDet's loss curve is at least twice as large as mmdetection3d's (the loss is recorded every iteration). As for grad_norm, OpenPCDet's grad_norm curve does not rise after about 2/3 of the epochs, while mmdetection3d's grad_norm does. What's more, I load the initial weights generated by OpenPCDet rather than mmdetection3d's default initialization.
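To be clear, by grad_norm I mean the global L2 norm over all parameter gradients, i.e. the value `torch.nn.utils.clip_grad_norm_` returns before clipping. A minimal sketch over plain Python lists of gradient values:

```python
import math

def global_grad_norm(grads):
    # Global L2 norm over all parameter gradients:
    # sqrt(sum over params of ||g||_2^2).
    # `grads` is a list of flat lists of gradient values, one per parameter.
    return math.sqrt(sum(sum(x * x for x in g) for g in grads))
```

When comparing the two curves it is worth checking that both codebases log this same quantity (and at the same point, before or after clipping).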
Also, my mmdetection3d environment uses PyTorch 1.6, while the OpenPCDet one uses 1.4.
So I don't understand why, with the same optimizer and lr update rule, and with the model, loss, preprocessing, and augmentation all identical, the loss curves' fluctuation differs so much and the grad-norm curves also differ. OpenPCDet's training loop is very simple, while mmdetection3d's is more complex. Can you give me some advice? Thanks.
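One thing I am still checking myself is whether gradient clipping is configured the same in both frameworks, since clipping with a different `max_norm` would change both the logged grad_norm trajectory and the effective updates. A minimal sketch of global-norm clipping (the `max_norm` value below is illustrative, not from either config):

```python
import math

def clip_by_global_norm(grads, max_norm):
    # Rescale all gradients by max_norm / total_norm when the global L2
    # norm exceeds max_norm (the behavior of clip_grad_norm_ in PyTorch).
    # `grads` is a list of flat lists of gradient values, one per parameter.
    total = math.sqrt(sum(x * x for g in grads for x in g))
    if total > max_norm:
        scale = max_norm / total
        grads = [[x * scale for x in g] for g in grads]
    return grads, total
```

If one framework clips at a smaller threshold than the other, the run with the tighter threshold would show a flatter grad_norm curve, which looks like the difference I am seeing.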