docs: Refactor Code for Syntax Highlighting and URL Updates (#1634)
suekou authored Nov 7, 2024
1 parent 02e7a46 commit effa4ab
Showing 3 changed files with 17 additions and 6 deletions.
4 changes: 2 additions & 2 deletions docs/extra/components/choose_evaluator_llm.md
@@ -31,7 +31,7 @@
pip install langchain-aws
```

-then you have to set your AWS credentials and configurations
+Then you have to set your AWS credentials and configurations

```python
config = {
@@ -43,7 +43,7 @@
}
```

-define you LLMs and wrap them in `LangchainLLMWrapper` so that it can be used with ragas.
+Define your LLMs and wrap them in `LangchainLLMWrapper` so that it can be used with ragas.

```python
from langchain_aws import ChatBedrockConverse
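# The hunk is truncated here; what follows is a minimal sketch of the setup
# this page documents, not part of the commit. The config values are
# illustrative assumptions, and `evaluator_llm` is a hypothetical name.
from ragas.llms import LangchainLLMWrapper

config = {
    "region_name": "us-east-1",  # assumed example region
    "llm": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed example model id
    "temperature": 0.4,
}

# Wrap the Bedrock chat model so ragas can drive it as an evaluator LLM.
evaluator_llm = LangchainLLMWrapper(
    ChatBedrockConverse(  # imported in the diff context above
        region_name=config["region_name"],
        model=config["llm"],
        temperature=config["temperature"],
    )
)
```
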
6 changes: 4 additions & 2 deletions docs/howtos/integrations/_llamaindex.md
@@ -11,11 +11,13 @@ You will need an testset to evaluate your `QueryEngine` against. You can either
Let's see how that works with Llamaindex

# load the documents
+```python
from llama_index.core import SimpleDirectoryReader

documents = SimpleDirectoryReader("./nyc_wikipedia").load_data()
+```

-Now lets init the `TestsetGenerator` object with the corresponding generator and critic llms
+Now lets init the `TestsetGenerator` object with the corresponding generator and critic llms


```python
@@ -171,7 +173,7 @@ Now that we have a `QueryEngine` for the `VectorStoreIndex` we can use the llama
In order to run an evaluation with Ragas and LlamaIndex you need 3 things

1. LlamaIndex `QueryEngine`: what we will be evaluating
-2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://github.com/explodinggradients/ragas/blob/main/docs/metrics.md)
+2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/)
3. Questions: A list of questions that ragas will test the `QueryEngine` against.

first lets generate the questions. Ideally you should use that you see in production so that the distribution of question with which we evaluate matches the distribution of questions seen in production. This ensures that the scores reflect the performance seen in production but to start off we'll be using a few example question.
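
The hunks above cut away the actual calls, so here is a rough sketch of the generate-and-evaluate flow they wrap, assuming the ragas 0.1-era LlamaIndex integration. `generator_llm`, `critic_llm`, `embeddings`, and `query_engine` are presumed to be defined elsewhere in the document, and exact class paths and signatures vary across ragas versions:

```python
# A sketch, not verbatim from the file: build a testset from the loaded
# documents, then score the QueryEngine on a few questions.
from ragas.testset.generator import TestsetGenerator
from ragas.integrations.llama_index import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# Assumed to exist already: generator_llm, critic_llm, embeddings, query_engine.
generator = TestsetGenerator.from_llama_index(
    generator_llm=generator_llm,
    critic_llm=critic_llm,
    embeddings=embeddings,
)
testset = generator.generate_with_llamaindex_docs(documents, test_size=5)

# A hand-written dataset works too; "question"/"ground_truth" is the shape
# the integration's evaluate() accepted around this era.
ds_dict = {
    "question": ["Which borough of New York City is Central Park in?"],
    "ground_truth": ["Central Park is in Manhattan."],
}

result = evaluate(
    query_engine=query_engine,
    metrics=[faithfulness, answer_relevancy],
    dataset=ds_dict,
)
print(result)
```
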
13 changes: 11 additions & 2 deletions docs/howtos/integrations/llamaindex.ipynb
@@ -29,7 +29,16 @@
"id": "096e5af0",
"metadata": {},
"source": [
"# load the documents\n",
"# load the documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "396085d5",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core import SimpleDirectoryReader\n",
"\n",
"documents = SimpleDirectoryReader(\"./nyc_wikipedia\").load_data()"
@@ -298,7 +307,7 @@
"In order to run an evaluation with Ragas and LlamaIndex you need 3 things\n",
"\n",
"1. LlamaIndex `QueryEngine`: what we will be evaluating\n",
"2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://github.com/explodinggradients/ragas/blob/main/docs/metrics.md)\n",
"2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/)\n",
"3. Questions: A list of questions that ragas will test the `QueryEngine` against. "
]
},
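
For step 2 in the list above, a typical metrics selection looks like the following sketch (not part of the commit); the imports come from `ragas.metrics`, though which metrics you pick is up to you:

```python
# One plausible metrics list for evaluating a QueryEngine with ragas.
from ragas.metrics import (
    answer_relevancy,   # is the answer on-topic for the question?
    context_precision,  # is the retrieved context focused on relevant chunks?
    context_recall,     # does the retrieved context cover the ground truth?
    faithfulness,       # is the answer grounded in the retrieved context?
)

metrics = [faithfulness, answer_relevancy, context_precision, context_recall]
```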
