
Why does the gelan here perform far better than the gelan in ultralytics? #618

Open
Godk02 opened this issue Dec 9, 2024 · 2 comments

Comments


Godk02 commented Dec 9, 2024

Is it because yolov9 uses different data augmentation?

Godk02 changed the title from "Why are gelan's results far better than the results in ultralytics" to "Why does the gelan here perform far better than the gelan in ultralytics" Dec 9, 2024

mpj1234 commented Dec 19, 2024

It also depends on whether you trained the re-parameterized auxiliary-training model or the trimmed-down one. As I recall, the one in ultralytics is just an ordinary trimmed-down version.


Godk02 commented Dec 22, 2024

> It also depends on whether you trained the re-parameterized auxiliary-training model or the trimmed-down one. As I recall, the one in ultralytics is just an ordinary trimmed-down version.

I found that I had been training with the command below all along, so by default it was using DP (DataParallel) multi-GPU training:
python train.py --device 4,5,6,7 --sync-bn --batch 16 --data data/xxx.yaml --img 640 --cfg models/detect/xxx.yaml --name xxx --hyp hyp.scratch-high.yaml --min-items 0 --epochs 150 --close-mosaic 15

Later I switched to DDP and the results got noticeably worse. Why is that?
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --device 0,1,2,3 --sync-bn --batch 8 --data data/xxx.yaml --img 640 --cfg models/detect/xxx.yaml --name xxx --hyp hyp.scratch-high.yaml --min-items 0 --epochs 150 --close-mosaic 15
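
One thing that may explain part of the gap: the two commands do not necessarily train with the same effective batch size. Assuming train.py here follows the YOLOv5-style convention (this codebase appears to be derived from it), --batch is the total batch size: in DP mode it is split across the GPUs listed in --device, while in DDP mode it is divided by the number of launched processes. Under that assumption (worth verifying against this repo's train.py), a minimal sketch of the arithmetic for the two commands above:

```python
# Back-of-the-envelope comparison of the two launch commands above.
# Assumption (not confirmed in this thread): --batch is the *total* batch
# size, split evenly across GPUs in DP mode and across processes in DDP
# mode, as in YOLOv5-style train.py scripts.

def describe(mode: str, total_batch: int, num_gpus: int) -> None:
    # Images seen per optimizer step (all GPUs combined) and per GPU.
    per_gpu = total_batch / num_gpus
    print(f"{mode}: {total_batch} images per optimizer step, "
          f"{per_gpu:g} per GPU across {num_gpus} GPUs")

# DP command:  --device 4,5,6,7 --batch 16  (one process, 4 GPUs)
describe("DP ", total_batch=16, num_gpus=4)

# DDP command: --nproc_per_node 4 --batch 8  (4 processes, 1 GPU each)
describe("DDP", total_batch=8, num_gpus=4)

# If the intent was to match the DP run, the DDP command would need
# --batch 16 (or more); batch-dependent settings such as LR warmup and
# gradient accumulation toward the nominal batch may also shift results.
```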
