datasets/segment/ #8528
Replies: 21 comments 58 replies
-
That's great, but how do we then check the auto-annotated labels and correct them?
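One way to review auto-annotated labels is to draw them back onto the images and eyeball the result before training. A minimal sketch (dataset paths are placeholders) that de-normalizes each polygon and writes an overlaid copy to a review folder:

```python
from pathlib import Path

import cv2
import numpy as np

images_dir = Path("datasets/my_data/images/train")   # hypothetical paths
labels_dir = Path("datasets/my_data/labels/train")
review_dir = Path("review")
review_dir.mkdir(exist_ok=True)

for img_path in images_dir.glob("*.jpg"):
    label_path = labels_dir / f"{img_path.stem}.txt"
    if not label_path.exists():
        continue
    img = cv2.imread(str(img_path))
    h, w = img.shape[:2]
    for line in label_path.read_text().splitlines():
        values = line.split()
        cls, coords = values[0], list(map(float, values[1:]))
        # de-normalize the polygon back to pixel coordinates
        pts = (np.array(coords, dtype=np.float32).reshape(-1, 2) * [w, h]).astype(np.int32)
        cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
        cv2.putText(img, cls, tuple(pts[0]), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imwrite(str(review_dir / img_path.name), img)
```

Bad polygons can then be corrected in whatever annotation tool was used, or by editing the .txt files directly.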
-
Hello, should I train the model with negative samples? If yes, how should they be labeled?
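For reference: negative (background) samples are typically included as images that contain no objects, meaning either no label file at all or an empty .txt. A minimal sketch, with placeholder paths and a hypothetical file-naming scheme:

```python
from pathlib import Path

images_dir = Path("datasets/my_dataset/images/train")  # hypothetical paths
labels_dir = Path("datasets/my_dataset/labels/train")

for img in images_dir.glob("negative_*.jpg"):   # hypothetical naming for background images
    (labels_dir / f"{img.stem}.txt").touch()    # empty label file = no objects in this image
```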
-
The file paths for custom data are still the same as for detection, right? I mean, I must create two folders, one for images and the other for labels, and inside the images and labels folders I continue creating train and test subfolders for both.
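For reference, a small sketch (the dataset root and split names are placeholders) that lays out that structure: the same images/ and labels/ pairing used for detection, with one .txt label file per image:

```python
from pathlib import Path

root = Path("datasets/my_dataset")  # hypothetical dataset root
for sub in ("images/train", "images/val", "labels/train", "labels/val"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Resulting layout:
# datasets/my_dataset/
# ├── images/
# │   ├── train/   img1.jpg ...
# │   └── val/
# └── labels/
#     ├── train/   img1.txt ...
#     └── val/
```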
-
Can I segment people in depth images acquired by LiDAR?
-
Hello, I see the segmentation format for YOLOv8 given as, say,
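For context, each line of a YOLO segmentation label file is a class index followed by the polygon vertices, all normalized to 0–1 by image width and height. A tiny sketch with made-up coordinates:

```python
# One label line: "<class-index> x1 y1 x2 y2 ... xn yn", coordinates normalized to 0-1
polygon = [(0.10, 0.20), (0.55, 0.25), (0.30, 0.80)]  # hypothetical triangle
class_index = 0
line = " ".join([str(class_index)] + [f"{v:.6f}" for xy in polygon for v in xy])
print(line)  # 0 0.100000 0.200000 0.550000 0.250000 0.300000 0.800000
```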
-
I have a follow-up question about the dataset format. My dataset is organized as shown below. Inside some of my annotation folders (say, for example, folder_1/1.txt) I have several labels for different objects that belong to the same class. Here's an example:
199 0.131608 0.000000 1.000000 0.869366 1.000000 1.000000 0.000000 0.000000 0.131608 0.000000
The problem is that the training code is considering those unique annotations as duplicates and removing them. I get the following warning:
train: WARNING dataset/
What should I do in this case, and is my data structured in folders and formatted in the text files correctly? Thanks!
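One way to narrow this down: the duplicate-labels warning is keyed on rows that are literally identical within a label file; several objects of the same class are fine as long as their polygon coordinates differ. A small diagnostic sketch (the path is a placeholder) that reports which rows repeat:

```python
from collections import Counter
from pathlib import Path

for txt in Path("dataset/annotations/folder_1").glob("*.txt"):  # hypothetical folder
    counts = Counter(txt.read_text().splitlines())
    dupes = {row: n for row, n in counts.items() if n > 1 and row.strip()}
    if dupes:
        print(txt, "->", dupes)
```

If rows really do repeat, the annotation export likely duplicated objects; if they don't, the warning is about something else in that file.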
-
Does this segmentation code also output a bounding box for target detection? If I have 20 classes, 10 of which are object detection and 10 of which are semantic segmentation, can I do it with this code? What should I do? Thank you.
-
Hello! I'm working on a project with chest X-ray images belonging to 3 classes: Pneumonia, COVID-19, and Healthy. I wish to perform instance segmentation on my dataset so that I can identify the lungs using bounding boxes and highlight the pixels WITHIN the lungs which belong to a particular class, for added precision. However, the YOLO dataset guide for segmentation tasks specifies that it is very similar (near identical) to the detection task. I believe the formatting looks something like this:
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8-seg # dataset root dir
# Classes (80 COCO classes)
names:
  ...
  77: teddy bear
I have adapted this to my own dataset. The result is that the bounding boxes are pretty accurate, but the segmentation mask highlights all the pixels within the box rather than just the lungs (my intended motive for added precision). Is there a way we can include binary segmentation mask images, in which the lungs are highlighted in white and the background in black, to guide the segmentation process and make it more accurate? I am using the yolov8n-seg.pt model for the task at hand. Thanks!
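If binary lung masks are already available, one option is to convert them into polygon labels so the model is trained on the lung outlines rather than full boxes. A hedged sketch using OpenCV contours; the directory names and the class index are assumptions for illustration:

```python
from pathlib import Path

import cv2

masks_dir = Path("datasets/lungs/masks/train")    # hypothetical: binary masks, lungs in white
labels_dir = Path("datasets/lungs/labels/train")  # hypothetical output folder
labels_dir.mkdir(parents=True, exist_ok=True)
class_index = 0                                   # assumed single "lung" class

for mask_path in masks_dir.glob("*.png"):
    mask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)
    h, w = mask.shape
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lines = []
    for contour in contours:
        if len(contour) < 3:
            continue
        # normalize contour points to 0-1 and flatten to "cls x1 y1 x2 y2 ..."
        coords = [f"{x / w:.6f} {y / h:.6f}" for [[x, y]] in contour]
        lines.append(f"{class_index} " + " ".join(coords))
    (labels_dir / f"{mask_path.stem}.txt").write_text("\n".join(lines))
```

Training on labels produced this way should make the predicted masks follow the lung boundaries instead of filling the whole box.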
-
Hi, is it possible to train the segmentation model on objects which consist of more than one piece? If yes, how should I construct a dataset for such a model? For my task, I need to show the model that two separate pieces are actually parts of a whole. Thanks in advance.
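In the YOLO segmentation format each instance is a single line with a single polygon, so a multi-part object needs its pieces merged into one polygon per instance. If the annotations are in COCO format (one annotation carrying several segmentation parts), the Ultralytics converter performs that merge when use_segments=True. A minimal sketch with a placeholder path:

```python
from ultralytics.data.converter import convert_coco

# Merges multi-part COCO segmentations into one polygon per instance
convert_coco(labels_dir="path/to/coco/annotations", use_segments=True)
```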
-
Hi, I have a dataset which I annotated using CVAT, where I drew rectangular bounding boxes to use for object detection. I then exported it in COCO 1.0 format and converted it to the YOLOv8 format. I have already trained my detection model and nothing went wrong. But now I want to train for segmentation, and when I train it says the format is not right for segmentation. What format should I export? The COCO 1.0 format is as follows: (when converting to the YOLOv8 format I have already specified use_segments=True)
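A likely cause: rectangle-only CVAT annotations export to COCO with a bbox but no polygon in the "segmentation" field, and without polygons the converter can only produce detection labels even with use_segments=True. A hedged diagnostic sketch (the JSON file name is a placeholder) to check whether the export actually contains polygon data:

```python
import json

with open("annotations/instances_default.json") as f:  # hypothetical CVAT export name
    coco = json.load(f)

with_polygons = sum(1 for a in coco["annotations"] if a.get("segmentation"))
print(f"{with_polygons} of {len(coco['annotations'])} annotations have segmentation polygons")
```

If that count is zero, the objects would need to be re-annotated with polygons or masks before a segmentation model can be trained on them.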
-
I have a dataset used for food detection. I annotated my images using the brush tool for certain parts of the images for segmentation, and exported it in COCO format. Previously, I used code to convert the COCO format to YOLOv8 format, including writing a YAML file for it. Now, when I use it to convert the masks, the conversion itself is OK; the problem only appears when I want to train. It says the format is not compatible: I want to use yolov8n-seg.pt to train, and it says it cannot train a segment model on a detect dataset. When I checked the annotations exported from COCO, it looks like they don't match the correct format for segmentation. How do I ensure it is in the right format? Because the CVAT docs say COCO also supports segmentation.
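A quick way to check the converted labels themselves: detection rows have exactly 5 values (class plus a box), while segmentation rows have the class followed by at least 6 polygon coordinates. Files that only ever contain 5-value rows will trigger the "cannot train segment model on a detect dataset" error. A small sanity-check sketch with a placeholder path:

```python
from pathlib import Path

for txt in Path("datasets/food/labels/train").glob("*.txt"):  # hypothetical path
    for i, line in enumerate(txt.read_text().splitlines(), start=1):
        if not line.strip():
            continue
        n = len(line.split())
        if n == 5:
            print(f"{txt}:{i} looks like a detection (box) label")
        elif n < 7 or n % 2 == 0:
            print(f"{txt}:{i} has an unexpected number of values ({n})")
```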
-
Hi, quick question. I read in the documentation that you can specify a list of paths to the images in the data.yaml file, but I've tried this myself without success. I'd like to be able to use a common folder for all the images in my dataset, and to specify whether each image will be used in the train or in the sample folder using the data.yaml file. So I'd imagine having this folder organisation:
And a data.yaml file that looks like this:
Is it possible to organise data in this way with YOLOv8?
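For what it's worth, a common pattern here is option 2 from the docs: keep all images in one folder and point train/val at .txt files that list the image paths (labels are still looked up by replacing "images" with "labels" in each image path, so keep a labels/ folder next to images/). A hedged sketch with placeholder folder names and an illustrative 80/20 split:

```python
import random
from pathlib import Path

root = Path("datasets/my_dataset")                 # hypothetical dataset root
images = sorted((root / "images").glob("*.jpg"))
random.seed(0)
random.shuffle(images)
split = int(0.8 * len(images))

# Image paths inside the .txt files are relative to the dataset root
(root / "train.txt").write_text("\n".join(f"./images/{p.name}" for p in images[:split]))
(root / "val.txt").write_text("\n".join(f"./images/{p.name}" for p in images[split:]))

(root / "data.yaml").write_text(
    "path: datasets/my_dataset\n"
    "train: train.txt\n"
    "val: val.txt\n"
    "names:\n"
    "  0: my_class\n"   # hypothetical class name
)
```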
-
Hello, good day. I used CVAT to segment and label my dataset. I exported it in the segmentation mask format, which gave me images of the segmented output. How do I convert those to .txt label files?
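Newer Ultralytics releases include a converter for exactly this case: it reads mask images whose pixel values encode class IDs (0 = background) and writes YOLO-seg .txt files. It is worth checking that your installed version exposes it, and note that CVAT's mask export may be color-coded per class, so the masks might first need remapping to integer class IDs. The paths and class count below are placeholders:

```python
from ultralytics.data.converter import convert_segment_masks_to_yolo_seg

convert_segment_masks_to_yolo_seg(
    masks_dir="path/to/masks",    # the mask images exported from CVAT
    output_dir="path/to/labels",  # where the .txt files will be written
    classes=2,                    # assumed number of classes encoded in the masks
)
```

If the function is not available in your version, an OpenCV-contour conversion like the sketch earlier in this thread works as well.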
-
Hello, I am using:
from ultralytics.data.converter import convert_coco
convert_coco(labels_dir="groupC/annotations", use_segments=True)
Can you tell me — this is an example of the original annotation file exported from CVAT in COCO format:
{
-
Hello, I have a dataset for instance segmentation, but the number of images is too small. Is there any method for data augmentation?
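Ultralytics applies augmentation during training via hyperparameters on train(), which is often the first thing to lean on when the dataset is small. A hedged sketch; the data path and the values are illustrative, not tuned recommendations:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
model.train(
    data="data.yaml",   # hypothetical dataset config
    epochs=100,
    imgsz=640,
    degrees=10.0,       # random rotation
    translate=0.1,      # random translation
    scale=0.5,          # random scaling
    fliplr=0.5,         # horizontal flip probability
    mosaic=1.0,         # mosaic augmentation
    copy_paste=0.1,     # segment copy-paste (segmentation tasks)
)
```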
-
I'm new to YOLO. I would like to convert my datasets for training a YOLOv8 model. My datasets are mainly in ALTO XML and PAGE XML formats (image/xml); the XML files contain polygons. Any idea how to convert them to YOLO format? Thank you! Example:
EDIT: I think this should do it:
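In case it helps others, a hedged sketch of one way to pull polygons out of PAGE XML and write YOLO-seg labels. PAGE element names and namespaces vary between exporters, so treat the tags, attributes, paths, and the single class index here as assumptions to adapt:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

xml_path = Path("page/0001.xml")       # hypothetical PAGE XML file
out_path = Path("labels/0001.txt")     # hypothetical YOLO label output
class_index = 0                        # e.g. a single "text line" class

root = ET.parse(xml_path).getroot()
ns = {"pc": root.tag.split("}")[0].strip("{")}  # reuse the document's own namespace

page = root.find("pc:Page", ns)
w, h = int(page.get("imageWidth")), int(page.get("imageHeight"))

lines = []
for coords in root.iterfind(".//pc:TextLine/pc:Coords", ns):
    # PAGE stores polygons as 'x1,y1 x2,y2 ...' in pixel coordinates
    pts = [tuple(map(int, p.split(","))) for p in coords.get("points").split()]
    flat = " ".join(f"{x / w:.6f} {y / h:.6f}" for x, y in pts)
    lines.append(f"{class_index} {flat}")

out_path.write_text("\n".join(lines))
```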
-
Thank you @glenn-jocher, I converted my data with
-
@glenn-jocher, thank you for your help! I have trained the model already and it works pretty well; right now I'm trying to figure out how to predict with it, I mean how to get back polygons so I can forward them to a line-recognition HTR (like PyLaia) for text recognition. I need to use the segmentation so I can cut the lines of text from the segmented image. My script:
My
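For getting polygons back from a prediction, the Results object exposes them directly: results[0].masks.xy holds one pixel-space polygon per detected instance (and .xyn the normalized version). A minimal sketch with placeholder weight and image paths:

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # hypothetical weights path
results = model("page.jpg")                         # hypothetical input image

for polygon in results[0].masks.xy:   # pixel coordinates, one (N, 2) array per instance
    print(polygon.shape, polygon[:3]) # each row is an (x, y) vertex
```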
-
Yes, you are right, the solution is here https://docs.ultralytics.com/guides/isolating-segmentation-objects/#what-options-are-available-for-saving-the-isolated-objects-after-segmentation. Here is my code that does that (for anyone interested):
The only problem is that the lines on the page come back in random order, not top-down. Is there a way to have them in order?
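Results come back ordered by confidence, so one simple fix is to sort the instances by the top edge (y1) of their bounding boxes before cropping. A small self-contained sketch; the weight and image paths are placeholders:

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # hypothetical weights path
result = model("page.jpg")[0]                       # hypothetical input image

tops = result.boxes.xyxy[:, 1].tolist()             # y1 (top edge) of each box
order = sorted(range(len(tops)), key=lambda i: tops[i])
polygons_top_down = [result.masks.xy[i] for i in order]

for rank, i in enumerate(order):
    print(f"line {rank}: top edge at y ≈ {tops[i]:.0f}")
```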
-
Finally, I ended up with the following code; any opinions?
-
Hi YOLO team,
-
datasets/segment/
Learn how Ultralytics YOLO supports various dataset formats for instance segmentation. This guide includes information on data conversions, auto-annotations, and dataset usage.
https://docs.ultralytics.com/datasets/segment/