
Weird behaviour in Stanford Cars #67

Open
ShihaoShao-GH opened this issue Aug 29, 2022 · 1 comment

Comments

@ShihaoShao-GH

Hi, thanks for your amazing work!

Here I ran into something really strange. I downloaded tresnet_l_stanford_card_96.41.pth and tried to validate it on the Stanford Cars dataset. It reached 99.69 on the validation split and 96.80 on the train split. Most likely the training and validation splits were swapped.

Can you please check if the order is correct?
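One quick way to test the swapped-splits suspicion is to compare each data loader's sample count against the official Stanford Cars split sizes (8,144 train / 8,041 test). A minimal sketch; the helper name and the size-based check are my own, not part of this repo:

```python
# Official Stanford Cars split sizes (Krause et al.): 8,144 train / 8,041 test.
OFFICIAL_SIZES = {"train": 8144, "test": 8041}

def guess_split(num_samples):
    """Return the split name whose official size matches, or None.

    If len(val_dataset) comes back as 8144, the "validation" loader
    is almost certainly serving the train split, and vice versa.
    """
    for name, size in OFFICIAL_SIZES.items():
        if num_samples == size:
            return name
    return None
```

For example, `guess_split(len(val_dataset))` returning `"train"` would confirm that the two splits were loaded in the wrong order.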

@Sondosmohamed1

Hi, I have a question: did you reload the pretrained model, and did it work? For me, inference with 'tresnet_l_stanford_card_96.41.pth' raises an error, but 'tresnet_m_COCO_224_84_2.pth' works.
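A common cause of a load error with one checkpoint but not another is that the state dict is saved under a wrapper key (e.g. 'model' or 'state_dict') or with the 'module.' prefix that torch.nn.DataParallel adds, so the keys no longer match the model. A hedged sketch of unwrapping before loading; the helper is an assumption of mine, not this repo's API:

```python
def unwrap_state_dict(ckpt):
    """Normalize common checkpoint layouts into a plain state_dict.

    Handles checkpoints saved as {'model': ...} or {'state_dict': ...}
    and strips the 'module.' prefix left by torch.nn.DataParallel.
    """
    # Unwrap one level of a wrapper dict, if present.
    for key in ("model", "state_dict"):
        if isinstance(ckpt, dict) and isinstance(ckpt.get(key), dict):
            ckpt = ckpt[key]
    # Strip the DataParallel prefix from every key.
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in ckpt.items()}

# Usage sketch (assumes torch and a matching model are available):
#   ckpt = torch.load("tresnet_l_stanford_card_96.41.pth", map_location="cpu")
#   model.load_state_dict(unwrap_state_dict(ckpt), strict=False)
```

Passing `strict=False` to `load_state_dict` will also surface which keys are missing or unexpected instead of failing outright, which helps pin down why one checkpoint loads and the other does not.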
