Replies: 1 comment
@Oleg-1978 hello! Training YOLOv8 on multiple GPUs should indeed speed up the process. If you're not seeing the expected decrease in training time, there might be an issue with how the distributed training is set up. For YOLOv8, you can specify multiple GPUs directly in the training command without needing to launch `torch.distributed.run` yourself. Make sure that you pass the GPU indices via the `device` argument, e.g. `device=0,1` on the CLI or `device=[0, 1]` in Python.
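As a minimal sketch (the `coco128.yaml` dataset config and `yolov8n.pt` weights are placeholder examples, not specifics from your setup):

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 model (weights file is a placeholder example)
model = YOLO("yolov8n.pt")

# Passing a list of GPU indices to `device` enables multi-GPU (DDP) training;
# Ultralytics sets up torch.distributed internally, so no manual launcher is needed
model.train(data="coco128.yaml", epochs=100, device=[0, 1])
```

The CLI equivalent would be:

```bash
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 device=0,1
```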
If you continue to face issues, please ensure that your system's CUDA and cuDNN are properly configured and that PyTorch can access all GPUs; you can check the accessible GPUs with `torch.cuda.device_count()` (see the snippet below). If the problem persists, consider opening an issue on the repo with detailed information about your setup and the exact behavior you're observing. The community and the Ultralytics team can then help troubleshoot the issue more effectively. Remember to check the documentation at https://docs.ultralytics.com for any updates or detailed instructions on multi-GPU training. 😊
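A quick sanity check using standard PyTorch calls:

```python
import torch

# Confirm CUDA is available and count the GPUs PyTorch can see
print(torch.cuda.is_available())   # expect True
print(torch.cuda.device_count())   # expect 2 for a dual-GPU setup
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```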
Hi.
I know how to train YOLOv5 on multiple GPUs (for example: `python -m torch.distributed.run --nproc_per_node 2 ... --device 0,1`), but with YOLOv8 I have a problem:
What must I do to run parallel training on multiple GPUs with YOLOv8?