
Flexible evaluation #405

Open
XianzheMa opened this issue Apr 28, 2024 · 1 comment
@XianzheMa
Collaborator

Currently, evaluation is only performed during pipeline execution. However, we might later want to evaluate the models produced by a pipeline on more datasets, and we shouldn't need to re-execute the entire pipeline, as training takes a long time.

After a pipeline has been executed, its models are stored in the model store. We should be able to use the modyn client to directly launch an evaluation request against a specific model (the request includes the model id) and a specific evaluation dataset (the request includes a description of the evaluation dataset).
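Such a request could look roughly like the following. This is a minimal stdlib-only sketch; the field names, the `build_evaluation_request` helper, and the dataset description keys are all assumptions for illustration, not Modyn's actual API.

```python
import json

# Hypothetical sketch: the structure and field names below are
# assumptions, not Modyn's actual evaluation request format.
def build_evaluation_request(model_id: int, dataset_id: str,
                             dataset_description: dict) -> str:
    """Build a JSON evaluation request referencing a stored model."""
    request = {
        "model_id": model_id,          # id of the trained model in the model store
        "dataset": {
            "dataset_id": dataset_id,  # which evaluation dataset to use
            **dataset_description,     # e.g. batch size, metrics to compute
        },
    }
    return json.dumps(request)

# Example: ask for an evaluation of model 42 on a held-out dataset.
payload = build_evaluation_request(
    42, "mnist_holdout", {"batch_size": 64, "metrics": ["Accuracy"]}
)
```

The client would then send this payload to the evaluator component, so no training step is re-executed.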

@XianzheMa XianzheMa added the enhancement New feature or request label Apr 28, 2024
@XianzheMa XianzheMa self-assigned this Apr 28, 2024
@robinholzi
Collaborator

We now have most of the points ready; just the client integration is missing. I'd say it isn't needed, as one can simply create a pydantic model and pass it through the CLI entrypoint instead.
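The suggestion above could look something like this; sketched here with stdlib dataclasses instead of pydantic to stay dependency-free. All class and field names (`EvalRequest`, `EvalDatasetConfig`, the commented-out entrypoint call) are assumptions, not the project's actual code.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical request model (in Modyn this would be a pydantic model);
# every name here is an assumption for illustration.
@dataclass
class EvalDatasetConfig:
    dataset_id: str
    batch_size: int = 64
    metrics: list[str] = field(default_factory=lambda: ["Accuracy"])

@dataclass
class EvalRequest:
    model_id: int
    dataset: EvalDatasetConfig

# Construct the request object directly and hand it to the CLI
# entrypoint, skipping a dedicated client integration.
request = EvalRequest(model_id=42, dataset=EvalDatasetConfig("mnist_holdout"))
# run_evaluation(request)  # hypothetical entrypoint call
```

The point of the design choice: the model doubles as documentation and validation of the request shape, so a separate client wrapper adds little.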
