I am facing a similar issue. What is weird in my case is that if I set cfg.TEST.EVAL_PERIOD to a small number (say 10), the stuff metrics are computed, whereas if I set it to 100 it reports that my number of stuff categories is 0.
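For context, this is roughly how my periodic evaluation is wired up (a minimal sketch of my own setup, not necessarily the same as above: the Trainer subclass, the COCOPanopticEvaluator choice and the "my_dataset_val_separated" name are all mine):

from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOPanopticEvaluator

class Trainer(DefaultTrainer):
    # DefaultTrainer doesn't build an evaluator on its own; this is what runs
    # every cfg.TEST.EVAL_PERIOD iterations during training.
    @classmethod
    def build_evaluator(cls, cfg, dataset_name):
        return COCOPanopticEvaluator(dataset_name, output_dir=cfg.OUTPUT_DIR)

cfg = get_cfg()
# ... rest of the config as in the original post ...
cfg.DATASETS.TEST = ("my_dataset_val_separated",)  # hypothetical dataset name
cfg.TEST.EVAL_PERIOD = 10  # with 10 the stuff metrics show up; with 100 I get "0 stuff categories"
trainer = Trainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()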
@idol-94 @fsemerar would you be willing to share your .json & .png files and your code? I'm trying to evaluate my model performance but haven't been able to do so. I suspect there could be an issue with my dataset I'm not seeing. Thank you!
Hi! I’m training a panoptic segmentation FPN model from detectron2 on a custom dataset that follows the COCO format. After reading other issues like #1691, I managed to register, train and evaluate the model, but there are still some things I don’t think I’m understanding, partly in the theory and partly because of unexpected behaviors during evaluation.
In my case, I’m trying to get accurate predictions of circular elements embedded in another element of variable size and shape (something like a Gruyère cheese, where the holes are the things in the dataset, and the cheese plus the surrounding area outside it are the stuff). So I have 2 stuff categories and 3 different thing categories (different types of holes inside the “cheese”). I registered my dataset with register_coco_panoptic_separated, following the tips in issue #1691 and creating the corresponding files (panoptic and semantic masks, plus the panoptic and instances JSONs).

After registration, thing_dataset_id_to_contiguous_id is automatically loaded into the dataset metadata, but stuff_dataset_id_to_contiguous_id, which is necessary for evaluation, is not. I therefore tried setting the latter manually with different options, and ended up using the same ids as in the semantic segmentation masks (i.e., 0 for the thing categories, and 1 and 2 for the “cheese” and surrounding-area stuff categories respectively). I don’t even know if this is correct (because of the whole “contiguous ids” concept), as there is not much explanation about it in the detectron2 documentation of dataset metadata.

With this configuration, panoptic evaluation ran, but a problem appeared. I tried both COCOEvaluator and COCOPanopticEvaluator, as I’m not really sure whether the latter works for custom datasets. The main problem with the results from the panoptic evaluator is that the panoptic metrics (PQ, SQ and RQ) are always 0.000% for the stuff categories, even if I change the training parameters or use the whole dataset instead of a subset. Since the visualized predictions for the stuff categories are actually accurate, I guess the problem lies somewhere in the evaluation data, in my manual stuff_dataset_id_to_contiguous_id configuration, or in the evaluator itself. If it is the evaluator, or in general, is there any suggestion on how to evaluate panoptic segmentation on custom datasets? Is it even possible?
I’m probably misunderstanding something, as I’m relatively new to this, but I can’t find much help or examples out there on this topic.
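For reference, the builtin COCO metadata registered by detectron2 gives a template of how these fields are expected to look, as far as I can tell; this is just a minimal inspection snippet (it assumes the builtin datasets are registered at import time, and it doesn't need the COCO images on disk):

from detectron2.data import MetadataCatalog

# Builtin "separated" COCO metadata, for comparison with a custom dataset:
# stuff_classes starts with a merged "things" class, and
# stuff_dataset_id_to_contiguous_id maps the original stuff ids to 1..N
# (0 being reserved for "things" in the semantic masks).
coco_meta = MetadataCatalog.get("coco_2017_val_panoptic_separated")
print(coco_meta.stuff_classes[:3])
print(list(coco_meta.stuff_dataset_id_to_contiguous_id.items())[:3])
print(list(coco_meta.thing_dataset_id_to_contiguous_id.items())[:3])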
Thanks a lot in advance. Here’s a snippet of my code:
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_panoptic_separated

# my dataset categories start at id 100 as I wasn't sure if there were problems starting from 1
stuffs = ["things", "cheese", "out of cheese"]  # I added a "things" category here as seen in an example from issue #1691
stuffs_ids = [100, 101]
stuff_dataset_id_to_contiguous_id = {102: 0,  # the three thing categories all map to the "things" stuff class
                                     103: 0,
                                     104: 0,
                                     100: 1,  # cheese
                                     101: 2}  # out of cheese
things = ["first thing", "second thing", "third thing"]
things_ids = [102, 103, 104]
thing_dataset_id_to_contiguous_id = {102: 0, 103: 1, 104: 2}
# register both train and test datasets like this (the dataset gets registered under dataset_name + "_separated"):
register_coco_panoptic_separated(dataset_name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json)
dicts = DatasetCatalog.get("dataset_name_separated")
MetadataCatalog.get(dataset_name + "_separated").set(thing_classes=things, stuff_classes=stuffs,
                                                     stuff_dataset_id_to_contiguous_id=stuff_dataset_id_to_contiguous_id)
metadata = MetadataCatalog.get("dataset_name_separated")
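# optional sanity check (an illustrative addition, not part of the original snippet):
# confirm both id -> contiguous-id mappings ended up on the "_separated" metadata the evaluators read
print(metadata.thing_dataset_id_to_contiguous_id)
print(metadata.get("stuff_dataset_id_to_contiguous_id", None))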
#set config
cfg = get_cfg()
cfg.MODEL.DEVICE = 'cpu'
cfg.OUTPUT_DIR =
cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
cfg.DATASETS.TRAIN = ("dataset_name_separated",)  # note the trailing comma: DATASETS.TRAIN expects a tuple
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 1
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 1125
cfg.SOLVER.STEPS = []
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 3  # = len(stuffs): "things", "cheese", "out of cheese"
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3  # = number of thing categories
#train model
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
#set model predictor for inference
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.01
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.01
cfg.TEST.DETECTIONS_PER_IMAGE = 500
cfg.DATASETS.TEST=("dataset_val_name_separated", )
predictor = DefaultPredictor(cfg)
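And this is roughly how I run the evaluation itself afterwards (a minimal sketch: the evaluator, loader and dataset name follow the config above, and COCOEvaluator can be dropped in the same way for the box/mask metrics; output_dir is just where the evaluator writes its intermediate files):

from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOPanopticEvaluator, inference_on_dataset

# run panoptic evaluation on the registered "separated" validation set
evaluator = COCOPanopticEvaluator("dataset_val_name_separated", output_dir=cfg.OUTPUT_DIR)
val_loader = build_detection_test_loader(cfg, "dataset_val_name_separated")
eval_results = inference_on_dataset(predictor.model, val_loader, evaluator)
print(eval_results)  # PQ / SQ / RQ, reported for All, Things and Stuff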
eval_results example: