
unexpected performance on ssd_resnet_50_fpn_coco  #78

Description

@captainst

Hi there,
I am using tf_trt_models on a Jetson Nano with JetPack 4.2.3 and TensorFlow 1.14.0.

In detection.py, there is an entry in the "MODEL" dict for ssd_resnet_50_fpn_coco. Following the example in detection.ipynb, it seems to convert to a TensorRT model successfully, but the benchmark gives an average runtime of 0.57 seconds.
That is weird, since ssd_inception_v2_coco gives 0.087 seconds on the Jetson Nano, i.e. almost 7 times faster than ssd_resnet_50_fpn_coco. According to the model zoo page (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md), these two models should not differ this much in inference time (ssd_inception_v2_coco @ 42 ms vs ssd_resnet_50_fpn_coco @ 76 ms).
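For reference, this is roughly what I am running (a minimal sketch of the steps in detection.ipynb; the 'data' download directory, the random 300x300 dummy input, and the 50-run timing loop are my own choices, not taken from the notebook):

```python
import time

import numpy as np
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

from tf_trt_models.detection import download_detection_model, build_detection_graph

# download the checkpoint and build a frozen detection graph
config_path, checkpoint_path = download_detection_model('ssd_resnet_50_fpn_coco', 'data')
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path,
    score_threshold=0.3,
    batch_size=1
)

# convert with TF-TRT (FP16, same settings as the notebook)
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)

# benchmark: one warm-up run, then average over 50 runs on a dummy image
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
with tf.Session(config=tf_config) as sess:
    tf.import_graph_def(trt_graph, name='')
    tf_input = sess.graph.get_tensor_by_name(input_names[0] + ':0')
    tf_outputs = [sess.graph.get_tensor_by_name(name + ':0') for name in output_names]

    image = np.random.randint(0, 256, size=(1, 300, 300, 3), dtype=np.uint8)
    sess.run(tf_outputs, feed_dict={tf_input: image})  # warm-up

    t0 = time.time()
    for _ in range(50):
        sess.run(tf_outputs, feed_dict={tf_input: image})
    print('Average runtime: %.3f seconds' % ((time.time() - t0) / 50.0))
```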

Another issue concerns the Faster R-CNN models. I figured out that inside the function build_detection_graph, the line
config.model.faster_rcnn.second_stage_post_processing.score_threshold = score_threshold
should be changed to
config.model.faster_rcnn.second_stage_post_processing.batch_non_max_suppression.score_threshold = score_threshold
in order to convert a Faster R-CNN model. However, the benchmark then gives an average runtime of more than 1 second for faster_rcnn_inception_v2_coco, which is really slow.
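In other words, my local patch to build_detection_graph looks roughly like this (a sketch; I am paraphrasing the surrounding if/elif structure in tf_trt_models/detection.py from memory):

```python
# inside build_detection_graph(), where score_threshold is written into the pipeline config
if score_threshold is not None:
    if config.model.HasField('ssd'):
        config.model.ssd.post_processing.batch_non_max_suppression.score_threshold = score_threshold
    elif config.model.HasField('faster_rcnn'):
        # the original line fails because second_stage_post_processing is a
        # PostProcessing message with no plain score_threshold field:
        # config.model.faster_rcnn.second_stage_post_processing.score_threshold = score_threshold
        config.model.faster_rcnn.second_stage_post_processing.batch_non_max_suppression.score_threshold = score_threshold
```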

Has anybody encountered similar problems?

Many thanks!
