Context Recall (#96)
## What
Context recall estimation using annotated answers as ground truth

## Why
Context recall was a highly requested feature, as retrieval failure is one of the
main pain points where pipeline errors occur in RAG systems

## How
Introduced a simple paradigm similar to faithfulness
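A rough sketch of that paradigm (an illustration only: the actual ragas implementation prompts an LLM to make the attribution judgment, while this sketch substitutes a trivial substring check):

```python
# Illustrative sketch of context-recall scoring. The real ragas metric
# asks an LLM whether each ground-truth sentence is attributable to the
# retrieved context; here a naive substring match stands in for that call.

def sentences(text: str) -> list[str]:
    # Naive sentence splitter, for illustration only.
    return [s.strip() for s in text.split(".") if s.strip()]

def attributed(sentence: str, contexts: list[str]) -> bool:
    # Stand-in for the LLM judgment "can this sentence be
    # attributed to the retrieved context?"
    return any(sentence.lower() in c.lower() for c in contexts)

def context_recall(ground_truth: str, contexts: list[str]) -> float:
    # Fraction of ground-truth sentences supported by the context.
    sents = sentences(ground_truth)
    if not sents:
        return 0.0
    hits = sum(attributed(s, contexts) for s in sents)
    return hits / len(sents)

contexts = ["The Duke of York later became King James II of England"]
gt = ("The city was named after the Duke of York. "
      "The Duke of York later became King James II of England.")
print(context_recall(gt, contexts))  # 0.5: only the second sentence is supported
```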

---------

Co-authored-by: jjmachan <[email protected]>
shahules786 and jjmachan authored Aug 24, 2023
1 parent ec2a34b commit 5cf4975
Showing 11 changed files with 803 additions and 463 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -91,9 +91,11 @@ Ragas measures your pipeline's performance against different dimensions

2. **Context Relevancy**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized.

3. **Answer Relevancy**: refers to the degree to which a response directly addresses and is appropriate for a given question or context. This does not take the factuality of the answer into consideration but rather penalizes the presence of redundant information or incomplete answers given a question.
3. **Context Recall**: measures the recall of the retrieved context, using the annotated answer as ground truth. The annotated answer is taken as a proxy for the ground-truth context.

4. **Aspect Critiques**: Designed to judge the submission against defined aspects like harmlessness, correctness, etc. You can also define your own aspect and validate the submission against your desired aspect. The output of aspect critiques is always binary.
4. **Answer Relevancy**: refers to the degree to which a response directly addresses and is appropriate for a given question or context. This does not take the factuality of the answer into consideration but rather penalizes the presence of redundant information or incomplete answers given a question.

5. **Aspect Critiques**: Designed to judge the submission against defined aspects like harmlessness, correctness, etc. You can also define your own aspect and validate the submission against your desired aspect. The output of aspect critiques is always binary.

The final `ragas_score` is the harmonic mean of individual metric scores.
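As a quick reference, the harmonic mean of n scores is n divided by the sum of their reciprocals, so a single weak metric pulls the combined score down sharply. A minimal sketch of that aggregation (assuming, for illustration, that any zero score should yield a zero overall score; ragas' own implementation may handle edge cases differently):

```python
def harmonic_mean(scores: list[float]) -> float:
    # Harmonic mean of the individual metric scores.
    # Returns 0.0 outright if any score is 0, since 1/0 is undefined.
    if any(s == 0 for s in scores):
        return 0.0
    return len(scores) / sum(1 / s for s in scores)

print(harmonic_mean([1.0, 0.5, 0.75]))  # ≈ 0.692
```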

188 changes: 159 additions & 29 deletions docs/integrations/langchain.ipynb
@@ -25,6 +25,17 @@
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8333f65e",
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "markdown",
"id": "842e32dc",
@@ -35,7 +46,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "4aa9a986",
"metadata": {},
"outputs": [],
@@ -51,23 +62,23 @@
"\n",
"llm = ChatOpenAI()\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm, retriever=index.vectorstore.as_retriever(), return_source_documents=True\n",
" llm, retriever=index.vectorstore.as_retriever(), return_source_documents=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "b0ebdf8d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'New York City was named in honor of the Duke of York, who would become King James II of England. King Charles II appointed the Duke as proprietor of the former territory of New Netherland, including the city of New Amsterdam, when England seized it from Dutch control.'"
"'New York City got its name in 1664 when it was renamed after the Duke of York, who later became King James II of England. The city was originally called New Amsterdam by Dutch colonists and was renamed New York when it came under British control.'"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -90,7 +101,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "e67ce0e0",
"metadata": {},
"outputs": [],
@@ -103,7 +114,16 @@
" \"What is the significance of the Statue of Liberty in New York City?\",\n",
"]\n",
"\n",
"queries = [{\"query\": q} for q in eval_questions]"
"eval_answers = [\n",
" \"8,804,000\", # incorrect answer\n",
" \"Queens\", # incorrect answer\n",
" \"New York City's economic significance is vast, as it serves as the global financial capital, housing Wall Street and major financial institutions. Its diverse economy spans technology, media, healthcare, education, and more, making it resilient to economic fluctuations. NYC is a hub for international business, attracting global companies, and boasts a large, skilled labor force. Its real estate market, tourism, cultural industries, and educational institutions further fuel its economic prowess. The city's transportation network and global influence amplify its impact on the world stage, solidifying its status as a vital economic player and cultural epicenter.\",\n",
" \"New York City got its name when it came under British control in 1664. King Charles II of England granted the lands to his brother, the Duke of York, who named the city New York in his own honor.\",\n",
" 'The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom.',\n",
"]\n",
"\n",
"examples = [{\"query\": q, \"ground_truths\": [eval_answers[i]]} \n",
" for i, q in enumerate(eval_questions)]"
]
},
{
@@ -126,18 +146,63 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 10,
"id": "8f89d719",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = qa_chain({\"query\": eval_questions[4]})\n",
"result[\"result\"]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "81fa9c47",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The borough of Brooklyn (Kings County) has the highest population in New York City.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result = qa_chain(examples[1])\n",
"result[\"result\"]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1d9266d4",
"metadata": {},
"outputs": [],
"source": [
"from ragas.langchain.evalchain import RagasEvaluatorChain\n",
"from ragas.metrics import faithfulness, answer_relevancy, context_relevancy\n",
"from ragas.metrics import faithfulness, answer_relevancy, context_relevancy, context_recall\n",
"\n",
"# create evaluation chains\n",
"faithfulness_chain = RagasEvaluatorChain(metric=faithfulness)\n",
"answer_rel_chain = RagasEvaluatorChain(metric=answer_relevancy)\n",
"context_rel_chain = RagasEvaluatorChain(metric=context_relevancy)"
"context_rel_chain = RagasEvaluatorChain(metric=context_relevancy)\n",
"context_recall_chain = RagasEvaluatorChain(metric=context_recall)"
]
},
{
@@ -152,17 +217,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 17,
"id": "5ede32cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1.0"
"0.5"
]
},
"execution_count": 6,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -172,6 +237,28 @@
"eval_result[\"faithfulness_score\"]"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "94b5544e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eval_result = context_recall_chain(result)\n",
"eval_result[\"context_recall_score\"]"
]
},
{
"cell_type": "markdown",
"id": "f11295b5",
@@ -184,7 +271,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 24,
"id": "1ce7bff1",
"metadata": {},
"outputs": [
@@ -199,31 +286,73 @@
"name": "stderr",
"output_type": "stream",
"text": [
"100%|█████████████████████████████████████████████████████████████| 1/1 [00:38<00:00, 38.77s/it]\n"
"100%|█████████████████████████████████████████████████████████████| 1/1 [00:57<00:00, 57.41s/it]\n"
]
},
{
"data": {
"text/plain": [
"[{'faithfulness_score': 1.0},\n",
" {'faithfulness_score': 0.5},\n",
" {'faithfulness_score': 0.75},\n",
" {'faithfulness_score': 1.0},\n",
" {'faithfulness_score': 1.0},\n",
" {'faithfulness_score': 1.0}]"
]
},
"execution_count": 7,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# run the queries as a batch for efficiency\n",
"predictions = qa_chain.batch(queries)\n",
"predictions = qa_chain.batch(examples)\n",
"\n",
"# evaluate\n",
"print(\"evaluating...\")\n",
"r = faithfulness_chain.evaluate(queries, predictions)\n",
"r = faithfulness_chain.evaluate(examples, predictions)\n",
"r"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "55299f14",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"evaluating...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|█████████████████████████████████████████████████████████████| 1/1 [00:54<00:00, 54.21s/it]\n"
]
},
{
"data": {
"text/plain": [
"[{'context_recall_score': 0.9333333333333333},\n",
" {'context_recall_score': 0.0},\n",
" {'context_recall_score': 1.0},\n",
" {'context_recall_score': 1.0},\n",
" {'context_recall_score': 1.0}]"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# evaluate context recall\n",
"print(\"evaluating...\")\n",
"r = context_recall_chain.evaluate(examples, predictions)\n",
"r"
]
},
@@ -244,15 +373,15 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 48,
"id": "e75144c5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"using existing dataset: NYC test\n"
"Created a new dataset: NYC test\n"
]
}
],
@@ -274,9 +403,10 @@
" dataset = client.create_dataset(\n",
" dataset_name=dataset_name, description=\"NYC test dataset\"\n",
" )\n",
" for q in eval_questions:\n",
" for e in examples:\n",
" client.create_example(\n",
" inputs={\"query\": q},\n",
" inputs={\"query\": e[\"query\"]},\n",
" outputs={\"ground_truths\": e[\"ground_truths\"]},\n",
" dataset_id=dataset.id,\n",
" )\n",
"\n",
@@ -297,7 +427,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 27,
"id": "3a6decc6",
"metadata": {},
"outputs": [],
@@ -322,27 +452,27 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 49,
"id": "25f7992f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"View the evaluation results for project '2023-08-22-19-28-17-RetrievalQA' at:\n",
"https://smith.langchain.com/projects/p/2133d672-b69a-4091-bc96-a4e39d150db5?eval=true\n"
"View the evaluation results for project '2023-08-24-03-36-45-RetrievalQA' at:\n",
"https://smith.langchain.com/projects/p/9fb78371-150e-49cc-a927-b1247fdb9e8d?eval=true\n"
]
}
],
"source": [
"from langchain.smith import RunEvalConfig, run_on_dataset\n",
"\n",
"evaluation_config = RunEvalConfig(\n",
" custom_evaluators=[faithfulness_chain, answer_rel_chain, context_rel_chain],\n",
" custom_evaluators=[faithfulness_chain, answer_rel_chain, context_rel_chain, context_recall_chain],\n",
" prediction_key=\"result\",\n",
")\n",
"\n",
" \n",
"result = run_on_dataset(\n",
" client,\n",
" dataset_name,\n",
16 changes: 16 additions & 0 deletions docs/metrics.md
@@ -30,6 +30,22 @@ dataset: Dataset
results = context_rel.score(dataset)
```

### `Context Recall`
measures the recall of the retrieved context, using the annotated answer as ground truth. The annotated answer is taken as a proxy for the ground-truth context.

```python
from ragas.metrics.context_recall import ContextRecall
context_recall = ContextRecall()
# Dataset({
# features: ['contexts','ground_truths'],
# num_rows: 25
# })
dataset: Dataset

results = context_recall.score(dataset)
```


### `AnswerRelevancy`

This measures how relevant the generated answer is to the prompt. If the generated answer is incomplete or contains redundant information, the score will be low. This is quantified by estimating the chance of an LLM generating the given question from the generated answer. Values range in (0, 1); higher is better.
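That mechanism can be sketched roughly as follows. This is an illustration only, not the ragas implementation: a real pipeline would use an LLM to generate candidate questions from the answer and an embedding model to vectorize them; the toy vectors below stand in for both.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def answer_relevancy(question_vec: list[float],
                     generated_question_vecs: list[list[float]]) -> float:
    # Mean cosine similarity between the original question and the
    # questions an LLM generated back from the answer: the closer the
    # regenerated questions are to the original, the more relevant
    # the answer was.
    sims = [cosine(question_vec, g) for g in generated_question_vecs]
    return sum(sims) / len(sims)

# Toy embeddings standing in for a real embedding model.
q = [1.0, 0.0]
gen = [[1.0, 0.0], [0.0, 1.0]]  # one on-topic, one off-topic regeneration
print(answer_relevancy(q, gen))  # 0.5
```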
Expand Down