Hi there!
Environment: Ubuntu 18.04, MXNet-cu92 1.3.1, GluonCV 0.4 (master)
I'm trying to fine-tune a GluonCV pretrained ResNet50_v1d classification model, but something strange is going on.
When I fine-tuned it with 256x256 images resized from a dataset of 512x512 images, everything was fine. However, when I tried to fine-tune it with 384x384 images, the accuracy just wouldn't improve; it kept fluctuating up and down.
At first, I thought it had something to do with the mxnet imread and mxnet ResizeAug, so I rewrote my own dataset class with cv2 imread and cv2 resize. Now the accuracy is going up, but at a much lower rate, nowhere near the rate with 256x256 images: it usually takes about 3 extra epochs to raise the accuracy by 0.1% (the total training is 50 epochs).
BTW, below is the cv2 resize call I tried:

image = cv2.resize(image, (self.size, self.size), interpolation=cv2.INTER_AREA)
I checked that both the mxnet and cv2 versions use the area-based (resampling using pixel area relation) interpolation strategy for resizing.
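For context, here is a simplified sketch of my rewritten dataset class (ImageListDataset and items are placeholder names, not my exact code):

import cv2
from mxnet import nd
from mxnet.gluon.data import Dataset

class ImageListDataset(Dataset):
    def __init__(self, items, size):
        self.items = items  # list of (image_path, label) pairs
        self.size = size

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        image = cv2.imread(path)  # BGR, HWC, uint8
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # match mx.image.imread, which returns RGB
        image = cv2.resize(image, (self.size, self.size), interpolation=cv2.INTER_AREA)
        return nd.array(image), label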
I wonder if it may have something to do with batch_size, since I halved the batch_size when using the larger images.
Isn't a larger image supposed to give a better result?
Thank you!
> Isn't a larger image supposed to give a better result?

No. The receptive field is fixed, so for image classification the performance may drop if the input is too big. Note that global pooling is applied before the last FullyConnected layer.
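You can see the effect of the global pooling with a quick shape check (pretrained=False here just avoids downloading weights):

import mxnet as mx
from gluoncv.model_zoo import get_model

net = get_model('ResNet50_v1d', pretrained=False)
net.initialize()
for size in (256, 384):
    x = mx.nd.random.uniform(shape=(1, 3, size, size))
    print(size, net(x).shape)  # (1, 1000) both times: global pooling removes the spatial dims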
@JWarlock what batch size are you using? If the batch size per GPU is less than 16, it can cause problems because the batch norm statistics become inaccurate. You also need to adjust your learning rate accordingly.
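For example, with the common linear scaling heuristic (the base values below are illustrative, not GluonCV defaults):

import mxnet as mx
from gluoncv.model_zoo import get_model

net = get_model('ResNet50_v1d', pretrained=True)

# Scale the learning rate in proportion to the batch size.
base_lr, base_batch_size = 0.1, 256
batch_size = 64  # e.g. the halved batch size
lr = base_lr * batch_size / base_batch_size
trainer = mx.gluon.Trainer(net.collect_params(), 'sgd',
                           {'learning_rate': lr, 'momentum': 0.9, 'wd': 1e-4})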