Merge branch 'main' into distributed
alex-dixon committed Nov 22, 2024
2 parents b2b416a + 36ca5ee commit 7bc69f8
Showing 12 changed files with 454 additions and 54 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -91,7 +91,7 @@ To install `ell` and `ell studio`, you can use pip. Follow these steps:
2. Run the following command to install the `ell-ai` package from PyPI:

```bash
pip install ell-ai
pip install ell-ai[all]
```

3. Verify the installation by checking the version of `ell`:
8 changes: 5 additions & 3 deletions docs/src/core_concepts/configuration.rst
@@ -5,16 +5,18 @@ Configuration
ell provides various configuration options to customize its behavior.

.. autofunction:: ell.init
:no-index:

This ``init`` function is a convenience function that sets up the configuration for ell. It is a thin wrapper around the ``Config`` class, which is a Pydantic model.

You can modify the global configuration using the ``ell.config`` object which is an instance of ``Config``:
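
For example, a common pattern is to call ``ell.init`` once at startup. The following is a minimal sketch; the keyword arguments shown (``store``, ``autocommit``, ``verbose``) are assumptions drawn from the ``Config`` fields documented below:

.. code-block:: python

    import ell

    # Sketch of one-time setup: persist invocations to a local store and
    # enable automatic versioning. Adjust to the actual Config fields.
    ell.init(store="./logdir", autocommit=True, verbose=True)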

.. autopydantic_model:: ell.Config
:members:
:exclude-members: default_client, registry, store
:exclude-members: default_client, registry, store, providers
:model-show-json: false
:model-show-validator-members: false
:model-show-config-summary: false
:model-show-field-summary: false
:model-show-validator-summary: false
:model-show-field-summary: true
:model-show-validator-summary: false
:no-index:
210 changes: 210 additions & 0 deletions docs/src/core_concepts/evaluation.rst.partial
@@ -0,0 +1,210 @@
Evaluations
===========

Evaluations in ``ell`` provide a powerful framework for assessing and analyzing Language Model Programs (LMPs). This guide covers the core concepts and features of the evaluation system.

Basic Usage
-----------

Here's a simple example of creating and running an evaluation:

.. code-block:: python

    import ell
    from ell import Evaluation

    @ell.simple(model="gpt-4")
    def my_lmp(input_text: str):
        return f"Process this: {input_text}"

    # Define a metric function
    def accuracy_metric(datapoint, output):
        return float(datapoint["expected_output"].lower() in output.lower())

    # Create an evaluation
    eval = Evaluation(
        name="basic_evaluation",
        n_evals=10,
        metrics={"accuracy": accuracy_metric}
    )

    # Run the evaluation
    results = eval.run(my_lmp, n_workers=10)

Core Components
---------------

Evaluation Configuration
~~~~~~~~~~~~~~~~~~~~~~~~

The ``Evaluation`` class accepts several key parameters:

- ``name``: A unique identifier for the evaluation
- ``n_evals``: Number of evaluations to run
- ``metrics``: Dictionary of metric functions
- ``dataset``: Optional dataset for evaluation
- ``samples_per_datapoint``: Number of samples per dataset point (default: 1)
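
A minimal sketch combining these parameters; the dataset and metric function here are purely illustrative:

.. code-block:: python

    from ell import Evaluation

    def exact_match(datapoint, output):
        # 1.0 when the normalized output equals the expected answer
        return float(output.strip().lower() == datapoint["expected_output"].strip().lower())

    eval = Evaluation(
        name="qa_exact_match",
        dataset=[
            {"input": {"question": "What is 2 + 2?"}, "expected_output": "4"},
        ],
        samples_per_datapoint=2,
        metrics={"exact_match": exact_match},
    )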

Metrics
~~~~~~~

Metrics are functions that assess the performance of your LMP. They can be:

1. Simple scalar metrics:

.. code-block:: python

    def length_metric(_, output):
        return len(output)

2. Structured metrics:

.. code-block:: python

    def detailed_metric(datapoint, output):
        return {
            "length": len(output),
            "contains_keyword": datapoint["keyword"] in output,
            "response_time": datapoint["response_time"]
        }

3. Multiple metrics:

.. code-block:: python

    metrics = {
        "accuracy": accuracy_metric,
        "length": length_metric,
        "detailed": detailed_metric
    }

Dataset Handling
~~~~~~~~~~~~~~~~

Evaluations can use custom datasets:

.. code-block:: python

    dataset = [
        {
            "input": {"question": "What is the capital of France?"},
            "expected_output": "Paris"
        },
        {
            "input": {"question": "What is the capital of Italy?"},
            "expected_output": "Rome"
        }
    ]

    eval = Evaluation(
        name="geography_quiz",
        dataset=dataset,
        metrics={"accuracy": accuracy_metric}
    )

Parallel Execution
~~~~~~~~~~~~~~~~~~

Evaluations support parallel execution for improved performance:

.. code-block:: python

    # Run with 10 parallel workers
    results = eval.run(my_lmp, n_workers=10, verbose=True)

Results and Analysis
--------------------

Result Structure
~~~~~~~~~~~~~~~~

Evaluation results include:

- Metric summaries (mean, std, min, max)
- Individual run details
- Execution metadata
- Error tracking

Accessing Results
~~~~~~~~~~~~~~~~~

.. code-block:: python

    # Get mean accuracy
    mean_accuracy = results.metrics["accuracy"].mean()

    # Get standard deviation
    std_accuracy = results.metrics["accuracy"].std()

    # Access individual runs
    for run in results.runs:
        print(f"Run ID: {run.id}")
        print(f"Success: {run.success}")
        print(f"Duration: {run.end_time - run.start_time}")

Advanced Features
-----------------

Evaluation Types
~~~~~~~~~~~~~~~~

``ell`` supports different types of evaluations:

- ``METRIC``: Numerical performance metrics
- ``ANNOTATION``: Human or model annotations
- ``CRITERION``: Pass/fail criteria
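
For illustration, a numeric metric and a pass/fail criterion can both be written as ordinary metric functions; the helpers below are hypothetical, not part of the API:

.. code-block:: python

    # METRIC-style: a numeric score between 0 and 1 (illustrative heuristic)
    def fluency_score(datapoint, output):
        return min(len(output.split()) / 100.0, 1.0)

    # CRITERION-style: pass/fail expressed as 0.0 or 1.0
    def contains_answer(datapoint, output):
        return float(datapoint["expected_output"].lower() in output.lower())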

Version Control
~~~~~~~~~~~~~~~

Evaluations support versioning:

- Version numbers
- Commit messages
- History tracking
- Multiple runs per version

Error Handling
~~~~~~~~~~~~~~

The evaluation system provides robust error handling and reporting (a sketch of inspecting failed runs follows this list):

- Automatic error capture
- Failed run management
- Success status tracking
- Detailed error messages
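
A minimal sketch of inspecting failed runs after an evaluation, reusing the ``results.runs`` fields shown in *Accessing Results* (treat the exact attribute names as assumptions):

.. code-block:: python

    results = eval.run(my_lmp, n_workers=4)

    # Collect runs that did not complete successfully
    failed_runs = [run for run in results.runs if not run.success]
    for run in failed_runs:
        print(f"Run {run.id} failed")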

ell studio Integration
----------------------

The evaluation system integrates with ell studio, providing:

- Visual evaluation management
- Result visualization
- Run comparisons
- Filtering and search
- Metric summaries
- Version control interface

Best Practices
--------------

1. **Metric Design**

   - Keep metrics focused and specific
   - Use appropriate return types
   - Handle edge cases (see the sketch after this list)

2. **Dataset Management**

   - Use representative data
   - Include edge cases
   - Maintain dataset versioning

3. **Performance Optimization**

   - Use appropriate worker counts
   - Monitor resource usage
   - Cache results when possible

4. **Version Control**

   - Use meaningful commit messages
   - Track major changes
   - Maintain evaluation history
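
For example, a metric that guards against edge cases (empty outputs, missing fields) might look like this sketch; the field names are illustrative:

.. code-block:: python

    def safe_accuracy(datapoint, output):
        # Return 0.0 instead of raising on empty output or missing fields
        if not output or "expected_output" not in datapoint:
            return 0.0
        return float(datapoint["expected_output"].lower() in output.lower())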
8 changes: 4 additions & 4 deletions examples/evals/summaries.py
@@ -9,7 +9,7 @@
import ell.lmp.function


dataset: List[ell.evaluation.Datapoint] = [
dataset = [
{
"input": { # I really don't like this. Forcing "input" without typing feels disgusting.
"text": "The Industrial Revolution was a period of major industrialization and innovation that took place during the late 1700s and early 1800s. It began in Great Britain and quickly spread throughout Western Europe and North America. This revolution saw a shift from an economy based on agriculture and handicrafts to one dominated by industry and machine manufacturing. Key technological advancements included the steam engine, which revolutionized transportation and manufacturing processes. The textile industry, in particular, saw significant changes with the invention of spinning jennies, water frames, and power looms. These innovations led to increased productivity and the rise of factories. The Industrial Revolution also brought about significant social changes, including urbanization, as people moved from rural areas to cities for factory work. While it led to economic growth and improved living standards for some, it also resulted in poor working conditions, child labor, and environmental pollution. The effects of this period continue to shape our modern world."
@@ -126,7 +126,7 @@ def length_criterion(_, output):
eval_list = ell.evaluation.Evaluation(
name="test_list",
dataset=dataset,
criteria=[score_criterion, length_criterion],
metrics=[score_criterion, length_criterion],
)

# Example using a dictionary of criteria (as before)
@@ -139,8 +139,8 @@ def length_criterion(_, output):
# Run evaluation with list-based criteria
print("EVAL WITH GPT-4o (list-based criteria)")
results = eval_list.run(summarizer, n_workers=4, verbose=False).results
print("Mean critic score:", results.metrics["score"].mean())
print("Mean length of completions:", results.metrics["length"].mean())
print("Mean critic score:", results.metrics["score_criterion"].mean())
print("Mean length of completions:", results.metrics["length_criterion"].mean())

# Run evaluation with dict-based criteria
print("EVAL WITH GPT-4o (dict-based criteria)")