YOLOv8 Model Size #3501
Replies: 1 comment
Hi!

I trained two models from scratch on a custom dataset using the YOLOv7-seg and YOLOv8-seg architectures. YOLOv8 was a clear improvement over YOLOv7 in terms of performance.

I also noticed that the trained YOLOv8 model (6.7 MB) was over ten times smaller than the YOLOv7 one (75 MB). Can anyone provide insight into why this is the case? Was some quantization happening in the background during YOLOv8 training? I tried quantizing this YOLOv8 model to OpenVINO FP16 and the resulting model size was 6.5 MB, so I'm not sure any quantization took place at all.

P.S. With a model size of 6.7 MB, quantization probably isn't even needed, but I'm just trying to understand what's going on :)
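For reference, a minimal sketch of the export call in question, assuming the standard ultralytics Python API (the checkpoint path is a placeholder for the trained weights):

```python
from ultralytics import YOLO

# Load the trained YOLOv8-seg checkpoint (path is a placeholder)
model = YOLO("runs/segment/train/weights/best.pt")

# Export to OpenVINO IR with FP16 weights
model.export(format="openvino", half=True)
```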
@jeffypie369 hello! Great to hear about your positive experience with YOLOv8!

The significant reduction in model size from YOLOv7 to YOLOv8 without explicit quantization is primarily due to architectural optimizations in YOLOv8. These refinements are designed to maintain or improve performance while reducing the model's footprint: more effective layers and operations, and an overall more compact design, can yield a smaller model without sacrificing accuracy.

Quantization is not applied by default during training or exporting in YOLOv8. If you're seeing a model size that's already quite small, it's likely due to these architectural improvements rather than any background quantization process.

For further details on the architectural differences and design choices in YOLOv8, you can refer to our documentation. 😊 Keep exploring and happy detecting!
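One quick sanity check is to compare parameter counts directly, since checkpoint size scales roughly as parameter count times bytes per weight (4 for FP32, 2 for FP16). A minimal sketch with PyTorch (the checkpoint paths are placeholders, and the dict layout assumed here matches typical YOLO checkpoints):

```python
import torch

def checkpoint_params(path: str) -> int:
    """Count the parameters stored in a YOLO .pt checkpoint."""
    ckpt = torch.load(path, map_location="cpu")
    # YOLO checkpoints typically wrap the model under a "model" key
    model = ckpt["model"] if isinstance(ckpt, dict) else ckpt
    return sum(p.numel() for p in model.parameters())

for path in ("yolov7-seg.pt", "yolov8-seg.pt"):  # placeholder paths
    n = checkpoint_params(path)
    print(f"{path}: {n / 1e6:.1f}M params "
          f"≈ {n * 2 / 1e6:.1f} MB at FP16, {n * 4 / 1e6:.1f} MB at FP32")
```

If the parameter counts differ by roughly the same factor as the file sizes, the gap is explained by the architecture alone rather than by any hidden quantization.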