fix: some fixes and readme (#26)
* added info about data

* fix some errors

* fix emojis

* fix blue

* char

* quickstart

* download spacy if not found

* finish quickstart

* fix linting issues

* update badges

---------

Co-authored-by: Jithin James <[email protected]>
jjmachan and Jithin James authored May 14, 2023
1 parent fb17f9d commit ddc5d76
Showing 5 changed files with 223 additions and 675 deletions.
README.md: 27 changes (15 additions, 12 deletions)
@@ -7,36 +7,37 @@
</p>

<p align="center">
<a href="https://github.com/beir-cellar/beir/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/beir-cellar/beir.svg">
<a href="https://github.com/explodinggradients/ragas/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/explodinggradients/ragas.svg">
</a>
<a href="https://www.python.org/">
<img alt="Build" src="https://img.shields.io/badge/Made%20with-Python-1f425f.svg?color=purple">
</a>
<a href="https://github.com/beir-cellar/beir/blob/master/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/beir-cellar/beir.svg?color=green">
<a href="https://github.com/explodinggradients/ragas/blob/master/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/explodinggradients/ragas.svg?color=green">
</a>
<a href="https://colab.research.google.com/drive/1HfutiEhHMJLXiWGT8pcipxT5L2TpYEdt?usp=sharing">
<img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
<a href="https://github.com/beir-cellar/beir/">
<a href="https://github.com/explodinggradients/ragas/">
<img alt="Downloads" src="https://badges.frapsoft.com/os/v1/open-source.svg?v=103">
</a>
</p>

<h4 align="center">
<p>
<a href="#beers-installation">Installation</a> |
<a href="#beers-quick-example">Quick Example</a> |
<a href="https://huggingface.co/BeIR">Hugging Face</a>
<a href="#Installation">Installation</a> |
<a href="#quickstart">Quick Example</a> |
<a href="#metrics">Metrics List</a> |
<a href="https://huggingface.co/explodinggradients">Hugging Face</a>
<p>
</h4>

ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. RAG denotes a class of LLM applications that use external data to augment the LLM's context. Existing tools and frameworks help you build these pipelines, but evaluating them and quantifying pipeline performance can be hard. This is where ragas (RAG Assessment) comes in.

ragas provides you with tools based on the latest research for evaluating LLM-generated text, giving you insights about your RAG pipeline. ragas can be integrated with your CI/CD pipeline to provide continuous checks and ensure performance.

-## Installation 🛡
+## 🛡 Installation

```bash
pip install ragas
@@ -47,7 +48,7 @@ git clone https://github.com/explodinggradients/ragas && cd ragas
pip install -e .
```

-## Quickstart 🔥
+## 🔥 Quickstart

This is a small example program you can run to see ragas in action!
```python
@@ -74,11 +75,13 @@ e = Evaluation(
results = e.eval(ds["ground_truth"], ds["generated_text"])
print(results)
```
-If you want a more in-depth explanation of core components, check out our quick-start notebook
+If you want a more in-depth explanation of core components, check out our [quick-start notebook](./examples/quickstart.ipynb)
## 🧰 Metrics

### ✏️ Character based

Character-based metrics focus on analyzing text at the character level.

- **Levenshtein distance** is the number of single-character edits (insertions, deletions, and substitutions) required to change your generated text into the ground-truth text.
- **Levenshtein ratio** is obtained by dividing the Levenshtein distance by the sum of the number of characters in the generated text and the ground truth. Metrics of this type are best suited to short, precise texts. A sketch of both metrics follows below.
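
For intuition, here is a minimal, self-contained sketch of both metrics using the standard dynamic-programming recurrence (illustrative only; the function names are ours, and ragas's actual implementation may differ or rely on a library such as `rapidfuzz`):

```python
def levenshtein_distance(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions needed
    # to turn string `a` into string `b`.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]


def levenshtein_ratio(a: str, b: str) -> float:
    # Normalize the distance by the combined character count of both
    # texts, as described above; 0.0 means the strings are identical.
    total = len(a) + len(b)
    return levenshtein_distance(a, b) / total if total else 0.0


print(levenshtein_distance("kitten", "sitting"))  # 3
print(levenshtein_ratio("kitten", "sitting"))     # 0.2307...
```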

@@ -92,7 +95,7 @@ N-gram based metrics, as the name indicates, use n-grams to compare the generated answer with the ground truth

- **BLEU** (BiLingual Evaluation Understudy)

It measures precision by comparing clipped n-grams in the generated text to the ground-truth text. These matches do not consider the ordering of words.
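
As a rough sketch of the "clipped" precision idea for a single n-gram order (full BLEU additionally combines several n-gram orders and applies a brevity penalty; this is an illustration, not ragas's implementation):

```python
from collections import Counter

def clipped_ngram_precision(generated: str, reference: str, n: int = 1) -> float:
    # Count each n-gram's occurrences, then "clip" the credit for a
    # generated n-gram at its count in the reference, so repeating a
    # matching word over and over cannot inflate precision.
    def ngrams(text: str) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    gen, ref = ngrams(generated), ngrams(reference)
    matched = sum(min(count, ref[gram]) for gram, count in gen.items())
    total = sum(gen.values())
    return matched / total if total else 0.0

# "the" is credited only once despite appearing three times, hence 2/4 = 0.5
print(clipped_ngram_precision("the the the cat", "the cat sat", n=1))
```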

### 🪄 Model Based

