feat: changing context_relevancy to context_precision (#157)
For now this is not a breaking change, but `context_relevancy`
will be deprecated in 0.1.0.
jjmachan authored Sep 26, 2023
1 parent 7a12846 commit ed479d4
Showing 14 changed files with 191 additions and 186 deletions.
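
In user code, the change amounts to importing the metric under its new name. A minimal sketch of the before/after (the old name keeps working until the 0.1.0 deprecation noted above):

```python
# Before this commit (to be deprecated in 0.1.0):
from ragas.metrics import context_relevancy

# After this commit:
from ragas.metrics import context_precision
```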
4 changes: 2 additions & 2 deletions README.md
@@ -79,7 +79,7 @@ os.environ["OPENAI_API_KEY"] = "your-openai-key"
dataset: Dataset

results = evaluate(dataset)
-# {'ragas_score': 0.860, 'context_relevancy': 0.817,
+# {'ragas_score': 0.860, 'context_precision': 0.817,
# 'faithfulness': 0.892, 'answer_relevancy': 0.874}
```

@@ -93,7 +93,7 @@ Ragas measures your pipeline's performance against different dimensions:

1. **Faithfulness**: measures the information consistency of the generated answer against the given context. Any claims made in the answer that cannot be deduced from the context are penalized. It is calculated from `answer` and `retrieved context`.

-2. **Context Relevancy**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.
+2. **Context Precision**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.

3. **Context Recall**: measures the recall of the retrieved context using the annotated answer as ground truth. The annotated answer is taken as a proxy for the ground-truth context. It is calculated from `ground truth` and `retrieved context`.

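To make the metric inputs listed above concrete, here is a hedged end-to-end sketch of running `evaluate` with the renamed metric. The column names (`question`, `contexts`, `answer`, `ground_truths`) and the single toy row are illustrative assumptions, not taken from this diff:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision, faithfulness, context_recall

# Assumed column layout for the evaluation dataset; adjust to your ragas version.
dataset = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "contexts": [["Paris is the capital and largest city of France."]],
    "answer": ["Paris."],
    "ground_truths": [["Paris is the capital of France."]],
})

results = evaluate(dataset, metrics=[context_precision, faithfulness, context_recall])
print(results)  # e.g. {'ragas_score': ..., 'context_precision': ..., ...}
```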
10 changes: 5 additions & 5 deletions docs/guides/quickstart-azure-openai.ipynb
@@ -115,7 +115,7 @@
"\n",
"Ragas provides you with a few metrics to evaluate the different aspects of your RAG systems namely\n",
"\n",
"1. metrics to evaluate retrieval: offers `context_relevancy` and `context_recall` which give you the measure of the performance of your retrieval system. \n",
"1. metrics to evaluate retrieval: offers `context_precision` and `context_recall` which give you the measure of the performance of your retrieval system. \n",
"2. metrics to evaluate generation: offers `faithfulness` which measures hallucinations and `answer_relevancy` which measures how to-the-point the answers are to the question.\n",
"\n",
"The harmonic mean of these 4 aspects gives you the **ragas score** which is a single measure of the performance of your QA system across all the important aspects.\n",
@@ -126,7 +126,7 @@
"\n",
"1. **Faithfulness**: measures the information consistency of the generated answer against the given context. If any claims are made in the answer that cannot be deduced from context is penalized. It is calculated from `answer` and `retrieved context`.\n",
"\n",
"2. **Context Relevancy**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.\n",
"2. **Context Precision**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.\n",
"\n",
"3. **Context Recall**: measures the recall of the retrieved context using annotated answer as ground truth. Annotated answer is taken as proxy for ground truth context. It is calculated from `ground truth` and `retrieved context`.\n",
"\n",
@@ -183,7 +183,7 @@
"outputs": [],
"source": [
"from ragas.metrics import (\n",
" context_relevancy,\n",
" context_precision,\n",
" answer_relevancy,\n",
" faithfulness,\n",
" context_recall,\n",
@@ -193,9 +193,9 @@
"# list of metrics we're going to use\n",
"metrics = [\n",
" faithfulness,\n",
" answer_relevancy\n",
" answer_relevancy,\n",
" context_recall,\n",
" context_relevancy,\n",
" context_precision,\n",
" harmfulness,\n",
"]"
]
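
The notebook text above describes the ragas score as the harmonic mean of the individual aspect scores. As a quick sanity-check sketch using the three metric values from the README example, the harmonic mean reproduces the reported 0.860:

```python
from statistics import harmonic_mean

# Metric values from the README example above:
# context_precision, faithfulness, answer_relevancy
scores = [0.817, 0.892, 0.874]

print(round(harmonic_mean(scores), 3))  # 0.86
```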
