Commit

docs: minor corrections (#747)
fixes: #665 #746
shahules786 authored Mar 12, 2024
1 parent 746d723 commit 01b2889
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/howtos/applications/data_preparation.md
@@ -10,7 +10,7 @@ This tutorial assumes that you have the 4 required data points from your RAG pipeline:
 1. Question: A set of questions.
 2. Contexts: Retrieved contexts corresponding to each question. This is a `list[list]` since each question can retrieve multiple text chunks.
 3. Answer: Generated answer corresponding to each question.
-4. Ground truths: Ground truths corresponding to each question. This is also a `list[list]` since each question may have multiple ground truths.
+4. Ground truths: Ground truths corresponding to each question. This is a `str` which corresponds to the expected answer for each question.


## Example dataset
@@ -24,7 +24,7 @@ data_samples = {
     'answer': ['The first superbowl was held on January 15, 1967', 'The most super bowls have been won by The New England Patriots'],
     'contexts' : [['The Super Bowl....season since 1966,','replacing the NFL...in February.'],
     ['The Green Bay Packers...Green Bay, Wisconsin.','The Packers compete...Football Conference']],
-    'ground_truth': [['The first superbowl was held on January 15, 1967'], ['The New England Patriots have won the Super Bowl a record six times']]
+    'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times']
}
dataset = Dataset.from_dict(data_samples)
```
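The corrected schema above (flat `list[str]` for `ground_truth`, nested `list[list[str]]` for `contexts`) can be sanity-checked with a short, self-contained sketch. The `validate_samples` helper below is hypothetical, written only to illustrate the fixed shapes; it is not part of ragas:

```python
def validate_samples(data: dict) -> bool:
    """Check the shapes the corrected docs describe:
    - every column has one entry per question,
    - 'contexts' entries are lists (multiple chunks per question),
    - 'ground_truth' entries are plain strings, not nested lists."""
    n = len(data["question"])
    same_length = all(len(data[k]) == n for k in ("answer", "contexts", "ground_truth"))
    nested_contexts = all(isinstance(c, list) for c in data["contexts"])
    flat_truths = all(isinstance(g, str) for g in data["ground_truth"])
    return same_length and nested_contexts and flat_truths


data_samples = {
    "question": ["When was the first super bowl?", "Who won the most super bowls?"],
    "answer": ["The first superbowl was held on January 15, 1967",
               "The most super bowls have been won by The New England Patriots"],
    "contexts": [["The Super Bowl....season since 1966,", "replacing the NFL...in February."],
                 ["The Green Bay Packers...Green Bay, Wisconsin.",
                  "The Packers compete...Football Conference"]],
    "ground_truth": ["The first superbowl was held on January 15, 1967",
                     "The New England Patriots have won the Super Bowl a record six times"],
}

print(validate_samples(data_samples))  # → True
```

The pre-fix format, with `ground_truth` as a list of lists, would fail this check, which is exactly the confusion the commit resolves.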
2 changes: 1 addition & 1 deletion docs/howtos/customisations/bring-your-own-llm-or-embs.md
@@ -3,7 +3,7 @@
 Ragas uses LLMs and Embeddings for both evaluation and test set generation. By default, the LLM and Embedding models of choice are OpenAI models.

 - [Evaluations](#evaluations)
-- [Testset Generation](#testset-generation)
+- [Testset Generation](#test-set-generation)


 :::{note}
