Replies: 3 comments 3 replies
-
👋 Hello @quitmeyer! Thank you for sharing your fascinating conservation project with Ultralytics 🚀. It sounds like you're doing amazing work with rainforest insects! For initial questions about model size and parameter selection, we recommend exploring the Ultralytics Docs for insights and tips on model training. Given your varied insect sizes, you might also want to experiment with different imgsz values and model scales.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us identify any potential bugs. For questions about training adjustments or GPU management, please share more details about your setup along with your training logs. Ensuring the correct GPU is used may require configuring your environment so that the dedicated GPU is fully utilized. Feel free to dive into our community spaces for more support:
Upgrade: To ensure optimal compatibility, update to the latest version with pip install -U ultralytics
Environments: YOLO can be run in various environments, preconfigured with necessary dependencies:
Status: If this badge is green, all Ultralytics CI tests are passing, indicating stable functionality across various platforms.
This is an automated response. An Ultralytics engineer will assist you soon as well. 😊
-
@quitmeyer For high-resolution images and a single category, consider using a larger model like YOLO11m-obb for better accuracy. Increase
-
I think it's interesting!
-
Hi! I am running a project for a conservation survey of rainforest insects.
We trained a yolo11-obb model to find insects in flat images. The images are super big (9000x6000 px), and the insects range from about 4000 px wide down to 60 px wide (we've got small bugs and big bugs). There's only one category for YOLO to look for, "creature"; it basically works as a "detector" pass that finds insects, crops them out, and sends each cropped image to pyBioCLIP for ID.
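Since the crop-out step described above is a natural place for bugs when detections touch the image edges, here's a hedged sketch of that detector pass. The weight and image paths are placeholders, and clamp_box / run_detect_and_crop are names made up for illustration; it uses the axis-aligned .xyxy view that Ultralytics OBB results expose, which is the simplest thing to crop with.

```python
# Hedged sketch of the detect -> crop -> hand off to pyBioCLIP pipeline.
# Paths and function names are illustrative assumptions, not the actual code.

def clamp_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to the image bounds as ints,
    so crops at the edge of a 9000x6000 scan don't go out of range."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return (max(x1, 0), max(y1, 0), min(x2, width), min(y2, height))

def run_detect_and_crop(weights="best.pt", image_path="tray_scan.jpg"):
    """Run the trained OBB model on one big image and return the crops."""
    from PIL import Image        # imported lazily so the helper above
    from ultralytics import YOLO  # stays testable without these installed

    model = YOLO(weights)
    img = Image.open(image_path)
    results = model(img)

    # OBB results also expose axis-aligned boxes via .xyxy, handy for cropping.
    return [
        img.crop(clamp_box(box, img.width, img.height))
        for box in results[0].obb.xyxy.tolist()
    ]  # each crop can then be passed to pyBioCLIP for ID
```

The edge clamping matters because a detection partially off the tray edge would otherwise produce negative or out-of-range crop coordinates.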
This has all been working quite well, but my training parameters are basically just random. I took the example from the Ultralytics site and tweaked it until it worked, and yay, I got a model that can detect bugs! I'm about to start training a new model, though, with about 4x as much data (1000 images before, now around 4000), and I figured, "hey, maybe I should actually have reasons for how I train this thing!"
My current code is just this.
I picked yolo11s-obb out of a hat. Maybe it would make more sense to choose an m or l model?
Maybe I should bump the imgsz up more?
Any other suggestions?
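Since the original training script didn't survive in the post, here's a hedged sketch of what a typical Ultralytics OBB training call looks like. Every file name and hyperparameter value below is an assumption to experiment with, not something taken from the thread.

```python
# Hedged sketch of an Ultralytics OBB training call; all values are
# assumptions to tune, not recommendations from the thread.

train_args = dict(
    data="insects.yaml",  # assumed dataset config: one class, "creature"
    epochs=100,
    imgsz=1280,           # above the 640 default; may help 60 px insects
    batch=8,              # a small batch suits a 4 GB RTX 3050
    device=0,             # pin training to the dedicated CUDA GPU
)

def train(model_name="yolo11m-obb.pt"):
    """Train from pretrained OBB weights with the args above."""
    from ultralytics import YOLO  # imported lazily so this sketch runs without it
    model = YOLO(model_name)
    return model.train(**train_args)

# train()  # uncomment once the dataset yaml and GPU are set up
```

Raising imgsz and dropping batch trade off against each other in GPU memory, so they usually have to be tuned together on a 4 GB card.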
Here's all my computer's stats:
Ultralytics 8.3.4 🚀 Python-3.11.9 torch-2.4.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB)
Setup complete ✅ (12 CPUs, 15.7 GB RAM, 308.8/1860.0 GB disk)
OS Windows-10-10.0.22631-SP0
Environment Windows
Python 3.11.9
Install pip
RAM 15.68 GB
CPU 12th Gen Intel Core(TM) i5-12450H
CUDA 11.8
numpy ✅ 1.26.4>=1.23.0
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.4.1+cu118>=1.8.0
torchvision ✅ 0.19.1+cu118>=0.9.0
tqdm ✅ 4.66.5>=4.64.0
psutil ✅ 6.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.8>=2.0.0
torch ✅ 2.4.1+cu118!=2.4.0,>=1.8.0; sys_platform == "win32"
I had to put my batch size down to 12 to stop getting out-of-memory errors. Also, my laptop says it has 2 GPUs? (which I'm a bit confused about). One GPU (which seems to be a weaker built-in one?) seems to be active all the time, while the big RTX doesn't seem to show much activity, BUT GPU 1 has all its memory used vs. GPU 0.
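The two-GPU confusion is common on hybrid laptops: Windows typically lists the integrated Intel GPU as GPU 0 and the NVIDIA card as GPU 1, and Task Manager's default "3D" graph often doesn't show CUDA compute, so the RTX can look idle even while its memory fills up during training (which is a good sign it is actually being used). A small sketch for confirming what CUDA sees; the helper name and the RTX/GTX name check are assumptions made for illustration.

```python
# Hedged sketch for checking which GPU CUDA actually uses. torch.cuda only
# enumerates NVIDIA GPUs, so the integrated Intel GPU is invisible to PyTorch.

def pick_discrete_gpu(device_names):
    """Return the index of the first name that looks like a discrete
    NVIDIA GeForce card, else 0. Pure helper; naming check is an assumption."""
    for i, name in enumerate(device_names):
        if "RTX" in name or "GTX" in name:
            return i
    return 0

def show_cuda_devices():
    import torch  # imported lazily so the sketch runs without torch installed
    if not torch.cuda.is_available():
        print("CUDA not available")
        return None
    names = [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]
    for i, name in enumerate(names):
        print(f"cuda:{i} -> {name}")
    return pick_discrete_gpu(names)

# show_cuda_devices()  # uncomment to check; then train with device=<that index>
```

On a single-NVIDIA-GPU laptop this will report exactly one CUDA device (index 0), regardless of how Task Manager numbers the GPUs.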