Replies: 2 comments 3 replies
From the description, I assume you are facing a bias (underfitting) problem: the model's performance on the training set itself is not satisfactory.
2,000 epochs should be more than enough to see a difference between models trained on the 2,000-sample and 25,000-sample datasets. If the difference is negligible, that can be a signal to switch to a more complex model architecture (one with more capacity; for instance, try yolov8l.pt instead of yolov8m.pt, and so on). Model performance cannot improve endlessly as the number of epochs increases: sooner or later the model will learn all the relevant information the dataset offers, to the extent its architecture capacity and training hyperparameters allow. Here are the brief Tips for Best Training Results for YOLOv5; I believe they are useful for v8 too. Hope this helps you analyze your problem!
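As a minimal sketch of trying a larger architecture with the Ultralytics Python API: the snippet below assumes the `ultralytics` package is installed and uses a hypothetical dataset config file `plates.yaml` (your dataset paths plus the 45 character classes). The `lr0` and `lrf` values shown are the Ultralytics defaults, which are a reasonable starting point before tuning; treat the epoch/patience numbers as illustrative, not prescriptive.

```python
# Sketch only: assumes the `ultralytics` package and a hypothetical
# `plates.yaml` dataset config describing the license-plate dataset.
from ultralytics import YOLO

# Swap "yolov8m.pt" for "yolov8l.pt" to test whether a higher-capacity
# model benefits from the larger 25,000-image dataset.
model = YOLO("yolov8l.pt")

model.train(
    data="plates.yaml",  # hypothetical dataset config (paths + 45 classes)
    epochs=300,          # start well below 2,000 and rely on early stopping
    patience=50,         # stop if validation metrics plateau for 50 epochs
    imgsz=640,
    lr0=0.01,            # initial learning rate (Ultralytics default)
    lrf=0.01,            # final LR fraction: final LR = lr0 * lrf
)
```

If both model sizes plateau at the same accuracy, the bottleneck is more likely label quality or the data itself than the learning-rate schedule.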
How can I optimize the training parameters for improved performance on the YOLOv8m model with a dataset of 25,000 photos, 45 classes, and approximately 250,000 annotations? The dataset focuses on characters of car license plates, but despite its size, the accuracy achieved during training is similar to that of a smaller dataset of 2,000 images. Specifically, what are the recommended values for parameters such as the initial learning rate (lr0), final learning rate (lrf), and any other relevant parameters that can enhance the model's performance?