About reproducing results #1
Comments
I noticed that when generating the dataset, the alt (altitude) image lost accuracy because it was saved as a PNG, so I changed the relevant code.
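For reference, a minimal sketch of how the altitude channel could be saved without the 8-bit quantization loss, assuming it is a float array; the file names, array shape, and the numpy/OpenCV calls here are illustrative, not the repo's actual code:

```python
# Sketch (assumption, not the repo's code): avoid quantizing a float altitude
# map into an 8-bit PNG when generating the BEV dataset.
import numpy as np
import cv2

# Placeholder altitude map in metres (illustrative shape and values).
alt = np.random.rand(500, 500).astype(np.float32) * 120.0

# Option 1: keep full float precision with a .npy file.
np.save("birmingham_block_2_alt.npy", alt)

# Option 2: rescale to 16-bit PNG (much finer quantization than 8-bit);
# store alt_min/alt_max separately to undo the scaling when loading.
alt_min, alt_max = float(alt.min()), float(alt.max())
alt_u16 = np.round((alt - alt_min) / (alt_max - alt_min) * 65535).astype(np.uint16)
cv2.imwrite("birmingham_block_2_alt.png", alt_u16)
```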
Hello, I get the same results too. Have you managed to reproduce them? Also, I could not find OCRNet-HRNet in the code.
Excuse me, I ran into this situation; what should I do? python train.py --use-balanced-weights --batch-size 8 --base-size 500 --crop-size 500 --loss-type focal --epochs 200 --eval-interval 1
Excuse me, I ran into this situation; what should I do? python /media/hubu/Data1/202131116023006_bky/SensatUrban-BEV-Seg3D-main/preprocess/point_EDA_31.py outputs: loading file /media/hubu/Data1/202131116023006_bky/SensatUrban-BEV-Seg3D-main/dataloaders/datasets/data_release/test/birmingham_block_2.ply [00:00<?, ?it/s]
Thank you so much for your great work.
When I train following the README process, the model never reaches the same curve as img/single_scale_training.png.
My experimental configuration is:
2 x 2080 Ti, batch size = 4
The mIoU only reaches 41% on the validation set.
In addition, could you also provide pre-trained weights?
Thank you very much.