While testing metrics for PySpark evaluation, I've noticed that ranking metrics like NDCG seem to use binary relevance only, while the Python evaluation has a parameter to choose between binary, exponential, or raw relevance. The snippet below shows that behavior (it only collects which items are relevant, without accessing their relevance scores):

recommenders/recommenders/evaluation/spark_evaluation.py, lines 292 to 295 in c2ea583:

```python
.agg(expr("collect_list(" + self.col_item + ") as ground_truth"))
.select(self.col_user, "ground_truth")
```

Is it possible to use exponential or raw relevance in the Spark evaluation currently, or am I wrong in this analysis?
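For reference, here is a minimal sketch of what graded-relevance NDCG@k could look like in PySpark. This is my own illustration, not the library's API: the `df_hit`, `df_true`, `rank`, and `relevance` names are assumptions, and the gain uses the exponential form `2^rel - 1` (swapping in the raw rating would give "raw" scoring).

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

k = 10

# Assumed input: df_hit holds one row per (user, recommended item) hit,
# with the item's position in the top-k list ("rank") and its true
# rating ("relevance"). Gain per hit: (2^rel - 1) / log2(rank + 1).
dcg = (
    df_hit
    .withColumn(
        "gain",
        (F.pow(F.lit(2.0), F.col("relevance")) - 1.0)
        / F.log2(F.col("rank") + 1.0),
    )
    .groupBy("userID")
    .agg(F.sum("gain").alias("dcg"))
)

# Ideal DCG: rank each user's true items by relevance (descending) and
# apply the same discounted gain to the top k.
w = Window.partitionBy("userID").orderBy(F.col("relevance").desc())
idcg = (
    df_true
    .withColumn("ideal_rank", F.row_number().over(w))
    .filter(F.col("ideal_rank") <= k)
    .withColumn(
        "gain",
        (F.pow(F.lit(2.0), F.col("relevance")) - 1.0)
        / F.log2(F.col("ideal_rank") + 1.0),
    )
    .groupBy("userID")
    .agg(F.sum("gain").alias("idcg"))
)

# NDCG averaged over users.
ndcg = (
    dcg.join(idcg, on="userID")
    .withColumn("ndcg", F.col("dcg") / F.col("idcg"))
    .agg(F.mean("ndcg"))
    .first()[0]
)
```

The point of the sketch is that graded relevance requires carrying the rating column through the `groupBy`, whereas `collect_list` on the item column alone (as in the quoted snippet) discards it, leaving only binary relevance.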
lgabs changed the title from "[ASK] Is custom relevance used in RankingMetric class for pyspark evaluation?" to "[ASK] Is binary relevance the only option in RankingMetric class for pyspark evaluation?" on Apr 18, 2024.