
Getting average precision class values from detectron2 model evaluation #5101

Open
25benjaminli opened this issue Oct 3, 2023 · 8 comments
Labels
enhancement Improvements or good new features

Comments

@25benjaminli

25benjaminli commented Oct 3, 2023

🚀 Feature


When using the inference_on_dataset function from detectron2.evaluation, it only reports the overall AP50 value and does not report per-class AP50 values, even though it does report per-class AP values. It would also be helpful if a similar feature were added for per-class precision and recall. If there is already a way to do this, please let me know.

I spent a long time trying to write my own AP, precision, and recall evaluators, which turned out to be slow, difficult to use, and possibly not entirely accurate. I hope others will not have to go through the trouble of building this from scratch.

If this were implemented, the per-class precision, recall, and AP values would be included as keys in the ordered dictionary returned by inference_on_dataset.

I am currently testing with fastrcnn, but I do not know whether the model type makes a difference for the inference_on_dataset function. To be clear, my dataset is in COCO format.
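
For reference, this is roughly how I am running the evaluation (a minimal sketch assuming a DefaultTrainer-style setup; the dataset name and output directory are placeholders):

from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

# "my_val_dataset" is a placeholder for a registered COCO-format dataset;
# cfg and trainer are assumed to come from a DefaultTrainer-style setup.
evaluator = COCOEvaluator("my_val_dataset", output_dir="./output")
val_loader = build_detection_test_loader(cfg, "my_val_dataset")
results = inference_on_dataset(trainer.model, val_loader, evaluator)

# results is an OrderedDict; results["bbox"] currently holds the overall
# metrics (AP, AP50, AP75, ...) and per-class "AP-<name>" entries, but no
# per-class "AP50-<name>" keys.
print(results["bbox"])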

@25benjaminli 25benjaminli added the enhancement Improvements or good new features label Oct 3, 2023
@olivia632

I would like to contribute to this.

@miyumiyumiyuki

Please let me know if there is a script to calculate the per-class AP50 values.

@carandraug

The values returned by inference_on_dataset are whatever evaluator.evaluate() returns, so this is very specific to the evaluator you're using. If you're using COCOEvaluator, you can patch it to return per-class "AP50" values this way:

diff --git a/detectron2/evaluation/coco_evaluation.py b/detectron2/evaluation/coco_evaluation.py
index fe8142c..5b6db0a 100644
--- a/detectron2/evaluation/coco_evaluation.py
+++ b/detectron2/evaluation/coco_evaluation.py
@@ -362,12 +362,13 @@ class COCOEvaluator(DatasetEvaluator):
         precisions = coco_eval.eval["precision"]
         # precision has dims (iou, recall, cls, area range, max dets)
         assert len(class_names) == precisions.shape[2]
+        assert coco_eval.params.iouThrs[0] == 0.5
 
         results_per_category = []
         for idx, name in enumerate(class_names):
             # area range index 0: all area ranges
             # max dets index -1: typically 100 per image
-            precision = precisions[:, :, idx, 0, -1]
+            precision = precisions[0, :, idx, 0, -1]
             precision = precision[precision > -1]
             ap = np.mean(precision) if precision.size else float("nan")
             results_per_category.append(("{}".format(name), float(ap * 100)))
@@ -380,12 +381,12 @@ class COCOEvaluator(DatasetEvaluator):
             results_2d,
             tablefmt="pipe",
             floatfmt=".3f",
-            headers=["category", "AP"] * (N_COLS // 2),
+            headers=["category", "AP50"] * (N_COLS // 2),
             numalign="left",
         )
-        self._logger.info("Per-category {} AP: \n".format(iou_type) + table)
+        self._logger.info("Per-category {} AP50: \n".format(iou_type) + table)
 
-        results.update({"AP-" + name: ap for name, ap in results_per_category})
+        results.update({"AP50-" + name: ap for name, ap in results_per_category})
         return results
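
If you'd rather not patch detectron2 at all, roughly the same per-class AP50 numbers can be computed with pycocotools directly from the coco_instances_results.json file that COCOEvaluator writes when output_dir is set. A sketch, where both JSON paths are placeholders for your own dataset and output directory:

import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: COCO-format ground truth and the detections written by
# COCOEvaluator when output_dir is set.
coco_gt = COCO("datasets/my_dataset/annotations/instances_val.json")
coco_dt = coco_gt.loadRes("output/coco_instances_results.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# precision has dims (iou, recall, cls, area range, max dets)
precisions = coco_eval.eval["precision"]
iou50 = int(np.argmin(np.abs(coco_eval.params.iouThrs - 0.5)))
for idx, cat_id in enumerate(coco_eval.params.catIds):
    name = coco_gt.loadCats(cat_id)[0]["name"]
    # area range index 0: all area ranges; max dets index -1: typically 100
    p = precisions[iou50, :, idx, 0, -1]
    p = p[p > -1]
    ap50 = float(np.mean(p)) * 100 if p.size else float("nan")
    print("AP50-{}: {:.3f}".format(name, ap50))

The advantage of this route is that it only uses the pycocotools API, so you don't have to touch the installed detectron2 package.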

@miyumiyumiyuki

I was able to get the per-class AP50 data. Thank you very much.

@bohui-lv

Hello, thanks for your clear code, but for me it doesn't seem to work. I think this may be because I installed the pre-built detectron2 directly. If I installed the pre-built detectron2 directly, do I have to rebuild it after making changes?


@carandraug

Hello, thanks for your clear code, but for me it doesn't seem to work.

In what way does it not work?

[...] If I installed the pre-built detectron2 directly, do I have to rebuild it after making changes?

No. But add a print statement somewhere to make sure the file you changed is the one actually being executed.
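
For example, something like this prints which copy of the module Python is actually importing (just a sanity check, not detectron2-specific):

import detectron2.evaluation.coco_evaluation as coco_evaluation
# If this path points into an installed copy (e.g. site-packages) rather than
# the source tree you edited, your changes are not the code being executed.
print(coco_evaluation.__file__)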

@bohui-lv

In what way does it not work?
I added the AP50 changes, but the output didn't change.

No. But add a print statement somewhere to make sure the file you changed is the one actually being executed.
Yes, I rebuilt detectron2 after that, and then it worked.
Thanks again

@cneupane


Thank you very much. This worked great.
