Add new models nvidia, gte, linq #1436
Conversation
Great changes! Can you submit results?
What do you mean, results on one arbitrary task from English MTEB?
On some tasks from the leaderboard, to make sure that the implementation matches.
@Samoed, I computed scores on the same tasks as in your previous pull request adding models (#1319):

- Classification
- Clustering
- PairClassification
- Reranking
- Retrieval
- STS
- Summarization
I see a big difference in the results for the gte-Qwen models. Perhaps this is related to the prompts: gte uses the prompt only for the query, but in MTEB the prompt is used for both the query and the document. Of the models I'm adding, Linq-Embed-Mistral has the smallest difference in scores, so I think it can be merged without any changes. For the other models, I think it is worth checking what the results look like when the prompt is used only for the query. @Samoed, what do you think?
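For context, a comparison run of that kind can be scripted with the public mteb API. The sketch below is only illustrative; the model name and the English-only task selection are assumptions, not the exact configuration used for the scores above:

```python
# Minimal sketch: recompute scores per task type with mteb so they can be
# compared against the public leaderboard. The model name and task filters
# are illustrative assumptions.
import mteb

model = mteb.get_model("Linq-AI-Research/Linq-Embed-Mistral")

tasks = mteb.get_tasks(
    task_types=[
        "Classification",
        "Clustering",
        "PairClassification",
        "Reranking",
        "Retrieval",
        "STS",
        "Summarization",
    ],
    languages=["eng"],
)

evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/linq-embed-mistral")
```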
Great! The results on classification for NV-Embed show a significant gap as well. I think a wrapper can be created for these models that applies the prompt only to the query.
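A wrapper along those lines might look roughly like the sketch below. The class name, method signatures, and prompt format are assumptions for illustration only; the actual mteb wrapper interface and the models' official prompt templates may differ:

```python
# Hypothetical sketch of a wrapper that applies the instruction prompt to
# queries only and encodes documents without it. Names and signatures are
# assumptions, not the wrapper eventually added to mteb.
from __future__ import annotations

import numpy as np
from sentence_transformers import SentenceTransformer


class QueryOnlyInstructionWrapper:
    def __init__(self, model_name: str, instruction: str) -> None:
        self.model = SentenceTransformer(model_name, trust_remote_code=True)
        self.instruction = instruction

    def encode_queries(self, queries: list[str], **kwargs) -> np.ndarray:
        # Prepend the task instruction to queries only.
        prompted = [f"Instruct: {self.instruction}\nQuery: {q}" for q in queries]
        return self.model.encode(prompted, **kwargs)

    def encode_corpus(self, corpus: list[str], **kwargs) -> np.ndarray:
        # Documents are encoded as-is, with no instruction prepended.
        return self.model.encode(corpus, **kwargs)
```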
Hey guys! What's the status on this PR? Getting metadata objects merged for most of these models would be a great leap forward for the leaderboard :D
@x-tabdeveloping Hello! Sorry for the delay; I'll look into it in the next few days to see why there's such a big inconsistency in the results.
It sounds like the implementations work, so it might be ideal to merge these (due to #1515) and then move the inconsistencies (which we want to resolve) to an issue?
@KennethEnevoldsen I think that would be reasonable. What do you think, @Samoed?
I think that's fine, but it should display a warning if someone tries to run these models.
@AlexeyVatolin Can you add a warning then? Also, please file an issue on this so that people know it is something to be fixed.
Added a warning for the gte and NV-Embed models regarding instructions being used for both queries and documents. Also raised #1600 for investigating the inconsistencies per the discussion above. @Samoed or @KennethEnevoldsen, would you mind taking a look and seeing if we can merge this?
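For reference, a warning of this kind can be emitted with the standard `logging` module when one of these models is loaded. The message and placement below are an assumed sketch, not the exact code added in this PR:

```python
# Assumed sketch of the load-time warning described above; the actual message
# and its location in the mteb model files may differ.
import logging

logger = logging.getLogger(__name__)

logger.warning(
    "The reference implementations of the gte-Qwen and NV-Embed models apply "
    "the instruction prompt only to queries, while mteb currently applies it "
    "to both queries and documents. Results may differ from the official "
    "leaderboard; see issue #1600."
)
```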
As far as metadata goes, the rest of it seems quite legit to me.
LGTM
Included commits:

- feat: add new arctic v2.0 models (#1574)
- 1.24.0 (automatically generated by python-semantic-release)
- fix: Add namaa MrTydi reranking dataset (#1573)
- 1.24.1
- fix: Eval langs not correctly passed to monolingual tasks (#1587)
- 1.24.2
- feat: Add ColBERT (#1563)
- 1.25.0
- doc: colbert add score_function & doc section (#1592)
- feat: add support for scoring function (#1594)
- Add new models nvidia, gte, linq (#1436), including a warning for the gte-Qwen and nvidia models re: instruction used for docs as well
- Leaderboard: Refined plots (#1601)
- fix: Leaderboard refinements (#1603)
- 1.25.1
- feat: Use similarity scores if available (#1602)
- Add NanoBEIR Datasets (#1588)
- Update tasks table
- feat: Evaluate missing languages (#1584)
- Add IBM Granite Embedding Models (#1613)
- fix: disable co2_tracker for API models (#1614)
- 1.25.2
- fix: set `use_instructions` to True in models using prompts (#1616)
- 1.25.3
- Updates to RetrievalEvaluator.py, imports, metadata, tests, the retrieval output path, and the similarity function
Added two gte models, plus the nvidia and linq models, to the model registry.

Checklist

- Run tests locally to make sure nothing is broken, using `make test`.
- Run the formatter to format the code, using `make lint`.

Adding a model checklist

- The models can be loaded with `mteb.get_model(model_name, revision)` and `mteb.get_model_meta(model_name, revision)`.
For testing, I took the examples from the Hugging Face model cards and compared the results with those from the mteb model registry. For all new models, the text distance scores match to at least three significant figures.
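As a rough illustration of that check (the model name and sentences below are placeholders, and depending on the mteb version `encode` may require extra arguments such as a task name):

```python
# Rough illustration of loading a registered model and comparing a similarity
# score against the examples on its Hugging Face model card. The model name
# and sentences are placeholders.
import mteb
import numpy as np

model_name = "Linq-AI-Research/Linq-Embed-Mistral"

meta = mteb.get_model_meta(model_name)   # metadata registered in mteb
model = mteb.get_model(model_name)       # loads the underlying model
print(meta.name, meta.revision)

sentences = [
    "How do I bake sourdough bread?",
    "Steps for preparing sourdough at home.",
]
embeddings = model.encode(sentences)

# Cosine similarity between the two embeddings, to compare with the model card.
sim = np.dot(embeddings[0], embeddings[1]) / (
    np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1])
)
print(round(float(sim), 3))
```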