Conversation
We already discussed a plan one week ago: the first thing you had to do was add a test, yet I don't see any test.

@gucifer It is missing tests, from unit tests to DDP.
__all__ = ["MeanAveragePrecision"]
def _iou(y: torch.Tensor, y_pred: torch.Tensor, crowd: List) -> torch.Tensor:
Is this function tested?

Could we compare it against the pycocotools IoU function in a test?
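Such a test would need a trusted reference. Below is a minimal pure-Python sketch of pairwise box IoU (a hypothetical helper, not the PR's `_iou`) that mirrors pycocotools' convention for crowd ground truths, where the union is replaced by the prediction's own area:

```python
def pairwise_iou(gt, pred, iscrowd=None):
    """Pairwise IoU between ground-truth boxes `gt` and predicted boxes `pred`,
    each box in COCO [x, y, w, h] format. Returns a len(gt) x len(pred) matrix.
    For crowd ground truths the union is the prediction's own area, mirroring
    pycocotools. Reference sketch only, not the PR's actual implementation."""
    iscrowd = iscrowd or [0] * len(gt)
    ious = [[0.0] * len(pred) for _ in gt]
    for i, g in enumerate(gt):
        for j, p in enumerate(pred):
            # intersection width/height; clamp at 0 for non-overlapping boxes
            iw = max(0.0, min(g[0] + g[2], p[0] + p[2]) - max(g[0], p[0]))
            ih = max(0.0, min(g[1] + g[3], p[1] + p[3]) - max(g[1], p[1]))
            inter = iw * ih
            union = p[2] * p[3] if iscrowd[i] else g[2] * g[3] + p[2] * p[3] - inter
            ious[i][j] = inter / union if union > 0 else 0.0
    return ious
```

A test could then assert that the PR's `_iou` output matches this (or `pycocotools.mask.iou`) element-wise on random boxes.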
y_ignore = y_img["ignore"][y_ind] if "ignore" in y_img else torch.zeros(len(y_ind))
y_area = y_img["area"][y_ind] if "area" in y_img else (y_img["bbox"][:, 2] * y_img["bbox"][:, 3])[y_ind]
ious = _iou(y_bbox, sorted_y_pred_bbox, crowd).to(self._device)
I don't see why y_pred's boxes should be sorted by score. Moreover, that requirement is not mentioned in the `_iou` method.

The `_iou` method is generic: for `a` ground-truth boxes and `b` predicted boxes it returns an `a x b` matrix containing all the pairwise IoUs. We pass it a sorted list because in the `eval_image` function we also sort y_preds by confidence for greedy matching.

Maybe the sorting should be done in the method instead of here?
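The greedy matching discussed above can be sketched as follows (a hypothetical helper, not the PR's code): predictions are visited in descending-score order, and each claims the best still-unmatched ground truth whose IoU clears the threshold.

```python
def greedy_match(ious, iou_thr=0.5):
    """Greedily match predictions (columns of `ious`, assumed already sorted
    by descending confidence) to ground truths (rows). Each ground truth can
    be matched at most once; returns, per prediction, the matched GT index
    or -1. Sketch of the matching scheme, not the PR's implementation."""
    n_gt = len(ious)
    n_pred = len(ious[0]) if n_gt else 0
    gt_matched = [False] * n_gt
    matches = [-1] * n_pred
    for j in range(n_pred):  # highest-confidence prediction goes first
        best, best_iou = -1, iou_thr
        for i in range(n_gt):
            if not gt_matched[i] and ious[i][j] >= best_iou:
                best, best_iou = i, ious[i][j]
        if best >= 0:
            gt_matched[best] = True
            matches[j] = best
    return matches
```

This illustrates why the column order matters: moving the sort inside `_iou` would hide that coupling, while keeping it in `eval_image` keeps `_iou` a pure pairwise-IoU function.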
I'm moving this to draft instead of a PR. A lot of work remains. It could be worth discussing, implementing, and testing on our side to create a new clean PR. We will see after GSoC.
@reinit__is_reduced
def update(self, outputs: Tuple[Dict, Dict]) -> None:
for output in outputs:
`outputs` is annotated as a Tuple but appears to be iterated over as a list of pairs.
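A standalone sketch of the annotation mismatch (hypothetical, outside the Metric class): `Tuple[Dict, Dict]` describes a single (prediction, target) pair, while the loop treats `outputs` as a batch of such pairs, so `Sequence[Tuple[Dict, Dict]]` would match the actual usage.

```python
from typing import Dict, Sequence, Tuple

# Hypothetical free function mirroring the update() loop; the annotation
# Sequence[Tuple[Dict, Dict]] matches how `outputs` is actually consumed.
def update(outputs: Sequence[Tuple[Dict, Dict]]) -> None:
    for y_pred_img, y_img in outputs:
        if y_img["image_id"] != y_pred_img["image_id"]:
            raise ValueError("Ground Truth and Predictions should be for the same image.")
```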
y_pred_img, y_img = output
if y_img["image_id"] != y_pred_img["image_id"]:
    raise ValueError("Ground Truth and Predictions should be for the same image.")
if y_img["image_id"] in self.image_ids:
y_id, y_area, y_ignore, y_crowd = y
y_pred_id, y_pred_area, y_pred_score = y_pred
if len(y_id) == 0 and len(y_pred_id) == 0:
Closing as done in #2901.
Fixes #520
Description:
Created a COCO-style implementation of the mAP metric. Both metrics are calculated at once since they require similar calculations and have similar use cases. The overall implementation is complicated, so any advice on simplifying it is appreciated.
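For reviewers unfamiliar with the COCO convention the PR follows, here is a simplified sketch (not the PR's code) of how average precision is computed at a single IoU threshold from sorted true-positive flags, using COCO's 101-point recall interpolation:

```python
import numpy as np

def coco_ap(tp, n_gt):
    """Average precision at one IoU threshold, COCO style. `tp` flags each
    prediction (already sorted by descending score) as a true positive;
    `n_gt` is the number of ground truths. Simplified sketch of the
    pycocotools accumulate step, not the PR's actual implementation."""
    tp = np.asarray(tp, dtype=float)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / np.arange(1, len(tp) + 1)
    # precision envelope: make precision monotonically non-increasing
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # sample the envelope at 101 evenly spaced recall thresholds
    rec_thrs = np.linspace(0.0, 1.0, 101)
    idx = np.searchsorted(recall, rec_thrs, side="left")
    interp = np.where(idx < len(precision),
                      precision[np.minimum(idx, len(precision) - 1)], 0.0)
    return float(interp.mean())
```

mAP then averages this quantity over IoU thresholds (0.50:0.05:0.95) and over classes, which is why the AP and AR computations share most of their intermediate work.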
Checklist: