diff --git a/faq/index.html b/faq/index.html
index 890e238..726ae4c 100644
--- a/faq/index.html
+++ b/faq/index.html
@@ -72,7 +72,7 @@

Computer Vision Model Leaderboard

Frequently Asked Questions


What is mAP?

Mean Average Precision (mAP) is a metric used to evaluate object detection models. It is the mean of the average precision (the area under the precision-recall curve) computed for each class, typically averaged across several IoU thresholds.
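As a minimal sketch of the computation, assuming AP_c(t) denotes the average precision for class c at IoU threshold t, N the number of classes, and T the set of thresholds (for example, 0.50 to 0.95 in COCO-style evaluation):

$$\mathrm{mAP} = \frac{1}{|T|} \sum_{t \in T} \frac{1}{N} \sum_{c=1}^{N} \mathrm{AP}_c(t)$$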

@@ -95,7 +95,7 @@

What is F1 Score?

Precision measures how many of the detected objects are correct. If the model put a box on some cars in an image but classified 20% of them as bicycles, precision is 80%, regardless of how many cars were found. Recall, by contrast, measures how many of the actual objects were detected. What if you want high recall and high precision? The F1 Score combines the two into a single metric: a high F1 score tells you the model produced both high precision and high recall in its results.

Here is the formula for F1 Score, where P is precision and R is recall:
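$$F_1 = \frac{2 \cdot P \cdot R}{P + R}$$

For example, with a precision of 0.8 (as in the car example above) and a recall of 0.5, F1 = 2 * 0.8 * 0.5 / (0.8 + 0.5) ≈ 0.62.

As a minimal sketch, this is how the metric could be computed in Python; the function name and the example recall value are illustrative assumptions, not taken from the leaderboard code:

    def f1_score(precision: float, recall: float) -> float:
        # Harmonic mean of precision and recall; defined as 0 when both are 0.
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    print(f1_score(0.8, 0.5))  # ~0.615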

diff --git a/methodology/index.html b/methodology/index.html
index 94b7ae1..72984ea 100644
--- a/methodology/index.html
+++ b/methodology/index.html
@@ -87,7 +87,7 @@

Methodology

The Roboflow 100 benchmark was designed to measure model performance across domains. If you are interested in learning more about domain-specific model benchmarking, refer to the Roboflow 100 website.

diff --git a/static/common.css b/static/common.css
index 4b47172..a6c09e4 100644
--- a/static/common.css
+++ b/static/common.css
@@ -2,4 +2,4 @@ a.no-decoration {
 text-decoration: none;
 color: inherit;
 background-color: transparent;
-}
\ No newline at end of file
+}