
Add new models nvidia, gte, linq #1436

Merged: 3 commits into embeddings-benchmark:main on Dec 16, 2024

Conversation

@AlexeyVatolin (Contributor)

Added gte, nvidia, and linq models to the model registry.

Checklist

  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding a model checklist

  • I have filled out the ModelMeta object to the extent possible
  • I have ensured that my model can be loaded using
    • mteb.get_model(model_name, revision) and
    • mteb.get_model_meta(model_name, revision)
  • I have tested that the implementation works on a representative set of tasks.

For testing, I took the examples from the Hugging Face model card and compared the results with the results from the mteb model registry. For all new models, the text distance scores match to at least 3 significant figures.
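
A minimal sketch of that kind of spot-check, assuming the model is loaded through the registry (the model name, example texts, and the `task_name` keyword are illustrative; the exact `encode` signature can vary between mteb versions):

```python
import numpy as np
import mteb

# Illustrative: load one of the newly registered models via the registry.
model = mteb.get_model("Linq-AI-Research/Linq-Embed-Mistral")

# Placeholder texts; the actual check used the example pairs from each
# model's Hugging Face card.
texts = [
    "how much protein should a female eat",
    "The CDC recommends an average of 46 grams of protein per day for women.",
]

# task_name selects the prompt; depending on the mteb version this keyword
# may be required, optional, or named differently.
embeddings = model.encode(texts, task_name="STSBenchmark")

# Cosine similarity between query and passage, to compare against the
# number reported on the model card.
a, b = embeddings[0], embeddings[1]
print(round(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))), 3))
```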

@Samoed (Collaborator) left a comment


Great changes! Can you submit results?

@AlexeyVatolin (Contributor, Author)

> Great changes! Can you submit results?

What do you mean, results on one arbitrary task from English MTEB?

@Samoed (Collaborator)

Samoed commented Nov 11, 2024

On some tasks from the leaderboard, to make sure that the implementation matches.
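
For instance, something along these lines with the standard mteb entry point (the model and task choices here are just illustrative):

```python
import mteb

# Illustrative: evaluate one of the newly added models on a couple of
# leaderboard tasks and compare against the published leaderboard scores.
model = mteb.get_model("Alibaba-NLP/gte-Qwen2-1.5B-instruct")
tasks = mteb.get_tasks(tasks=["SciFact", "STSBenchmark"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```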

@AlexeyVatolin (Contributor, Author)

@Samoed, I computed scores on the same tasks as in your previous pull request that added models (#1319).

Classification

| Model | Source | AmazonCounterfactualClassification | EmotionClassification | ToxicConversationsClassification |
|---|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 84.43 | 51.82 | 71.29 |
| Linq-Embed-Mistral | Pull request | 84.94 | 56.45 | 71.82 |
| NV-Embed-v1 | Leaderboard | 95.12 | 91.7 | 92.6 |
| NV-Embed-v1 | Pull request | 71.03 | 79.26 | 78.96 |
| NV-Embed-v2 | Leaderboard | 94.28 | 93.38 | 92.74 |
| NV-Embed-v2 | Pull request | 79.28 | 64.79 | 76.3 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 83.16 | 54.53 | 78.75 |
| gte-Qwen1.5-7B-instruct | Pull request | 81.78 | 54.91 | 77.25 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 83.99 | 61.37 | 82.66 |
| gte-Qwen2-1.5B-instruct | Pull request | 82.42 | 65.66 | 84.54 |

Clustering

| Model | Source | ArxivClusteringS2S | RedditClustering |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 47.3 | 61.52 |
| Linq-Embed-Mistral | Pull request | 47.61 | 60.94 |
| NV-Embed-v1 | Leaderboard | 49.59 | 63.2 |
| NV-Embed-v1 | Pull request | 48.31 | 52.29 |
| NV-Embed-v2 | Leaderboard | 51.26 | 71.1 |
| NV-Embed-v2 | Pull request | 46.98 | 55.58 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 51.45 | 73.37 |
| gte-Qwen1.5-7B-instruct | Pull request | 53.57 | 80.12 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 45.01 | 55.82 |
| gte-Qwen2-1.5B-instruct | Pull request | 44.61 | 51.36 |

PairClassification

| Model | Source | SprintDuplicateQuestions | TwitterSemEval2015 |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 96.11 | 81.52 |
| Linq-Embed-Mistral | Pull request | 94.66 | 77.09 |
| NV-Embed-v1 | Leaderboard | 95.94 | 79 |
| NV-Embed-v1 | Pull request | 95.93 | 71.6 |
| NV-Embed-v2 | Leaderboard | 97.02 | 81.11 |
| NV-Embed-v2 | Pull request | 96.99 | 73.33 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 96.07 | 79.36 |
| gte-Qwen1.5-7B-instruct | Pull request | 20.53 | 37.15 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 95.32 | 79.64 |
| gte-Qwen2-1.5B-instruct | Pull request | 29.5 | 42.26 |

Reranking

| Model | Source | SciDocsRR | AskUbuntuDupQuestions |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 86.4 | 66.82 |
| Linq-Embed-Mistral | Pull request | 84.52 | 62.36 |
| NV-Embed-v1 | Leaderboard | 87.26 | 67.5 |
| NV-Embed-v1 | Pull request | 86.29 | 65.27 |
| NV-Embed-v2 | Leaderboard | 87.59 | 67.46 |
| NV-Embed-v2 | Pull request | 85.45 | 64.94 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 87.89 | 66 |
| gte-Qwen1.5-7B-instruct | Pull request | 57.67 | 45.19 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 86.52 | 64.55 |
| gte-Qwen2-1.5B-instruct | Pull request | 67.05 | 48.91 |

Retrieval

| Model | Source | SCIDOCS | SciFact |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 21.93 | 78.32 |
| Linq-Embed-Mistral | Pull request | 22.08 | 78.32 |
| NV-Embed-v1 | Leaderboard | 20.19 | 78.43 |
| NV-Embed-v1 | Pull request | 20.07 | 78.13 |
| NV-Embed-v2 | Leaderboard | 21.9 | 80.13 |
| NV-Embed-v2 | Pull request | 21.67 | 80.11 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 27.69 | 75.31 |
| gte-Qwen1.5-7B-instruct | Pull request | 26.34 | 75.8 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 24.98 | 78.44 |
| gte-Qwen2-1.5B-instruct | Pull request | 23.41 | 77.47 |

STS

| Model | Source | STS16 | STSBenchmark |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 87.37 | 88.81 |
| Linq-Embed-Mistral | Pull request | 87.25 | 88.66 |
| NV-Embed-v1 | Leaderboard | 84.77 | 86.14 |
| NV-Embed-v1 | Pull request | 78.2 | 80.25 |
| NV-Embed-v2 | Leaderboard | 86.77 | 88.41 |
| NV-Embed-v2 | Pull request | 82.79 | 83.56 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 86.39 | 87.35 |
| gte-Qwen1.5-7B-instruct | Pull request | 85.98 | 86.86 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 85.45 | 86.38 |
| gte-Qwen2-1.5B-instruct | Pull request | 84.71 | 84.71 |

Summarization

| Model | Source | SummEval |
|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 30.98 |
| Linq-Embed-Mistral | Pull request | 30.39 |
| NV-Embed-v1 | Leaderboard | 31.2 |
| NV-Embed-v1 | Pull request | 29.37 |
| NV-Embed-v2 | Leaderboard | 30.7 |
| NV-Embed-v2 | Pull request | 30.42 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 31.46 |
| gte-Qwen1.5-7B-instruct | Pull request | 31.22 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 31.17 |
| gte-Qwen2-1.5B-instruct | Pull request | 30.5 |

I see a big difference in the results for the gte-Qwen models. Perhaps this is related to the prompts: gte uses the prompt only for the query, but in MTEB the prompt is used for both the query and the document. Of the models I'm adding, Linq-Embed-Mistral has the smallest difference in scores; I think it can be merged without any changes. For the other models, it is worth checking the results when the prompt is applied only to the query. @Samoed, what do you think?

@Samoed (Collaborator)

Samoed commented Nov 16, 2024

Great! The results on classification for NV-Embed show a significant gap as well. I think a wrapper can be created for gte-Qwen to add instructions only to the query. However, it's a bit strange that prompts seem to make performance worse on PairClassification.
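
As a sketch, such a wrapper could look roughly like this (the class name, prompt template, and the encode_queries/encode_corpus split are illustrative, not the actual mteb implementation):

```python
from sentence_transformers import SentenceTransformer


class QueryOnlyInstructionWrapper:
    """Illustrative wrapper: prepend the instruction to queries only,
    leaving documents un-prompted, as the gte-Qwen model cards describe."""

    def __init__(self, model_name: str, instruction: str):
        self.model = SentenceTransformer(model_name, trust_remote_code=True)
        self.instruction = instruction

    def encode_queries(self, queries: list[str], **kwargs):
        prompted = [f"Instruct: {self.instruction}\nQuery: {q}" for q in queries]
        return self.model.encode(prompted, **kwargs)

    def encode_corpus(self, corpus: list[str], **kwargs):
        # Documents get no instruction prefix.
        return self.model.encode(corpus, **kwargs)
```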

@x-tabdeveloping (Collaborator)

Hey guys! What's the status on this PR? Getting metadata objects merged for most of these models would be a great leap forward for the leaderboard :D

@AlexeyVatolin (Contributor, Author)

Hello @x-tabdeveloping! Sorry for the delay; I'll look into it in the next few days to see why there's such a big inconsistency in the results.

@KennethEnevoldsen (Contributor)

It sounds like the implementations work, so it might be ideal to merge these (due to #1515) and then move the inconsistencies (which we want to resolve) to an issue?

@x-tabdeveloping (Collaborator)

@KennethEnevoldsen I think that would be reasonable. What do you think @Samoed ?

@Samoed (Collaborator)

Samoed commented Dec 4, 2024

I think that’s fine, but it should display a warning if someone tries to run these models

@x-tabdeveloping mentioned this pull request on Dec 4, 2024.
@x-tabdeveloping (Collaborator)

@AlexeyVatolin Can you add a warning then? Also file an issue on this so that people know this is something to be fixed?

@isaac-chung (Collaborator)

Added a warning for gte and NV-embed models regarding instructions used in both query and docs. Also raised #1600 for investigating the inconsistencies per the discussion above.
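
For reference, the warning is conceptually along these lines (a simplified sketch; the actual wording and placement are in the model files):

```python
import logging

logger = logging.getLogger(__name__)


def warn_instruction_mismatch(model_name: str) -> None:
    # Illustrative helper: emit a warning when the gte-Qwen / NV-Embed
    # implementations are loaded, because mteb currently applies the
    # instruction to both queries and documents, while the original
    # models use it for queries only (see issue #1600).
    logger.warning(
        f"{model_name}: instructions are applied to both queries and documents, "
        "but the reference implementation uses them for queries only; "
        "scores may differ from the official leaderboard."
    )
```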

@Samoed or @KennethEnevoldsen, would you mind taking a look and seeing if we can merge this?

@x-tabdeveloping (Collaborator)

As far as metadata goes, the rest of it seems quite legit to me.

@Samoed (Collaborator) left a comment


LGTM

@isaac-chung isaac-chung merged commit 95d5ae5 into embeddings-benchmark:main Dec 16, 2024
10 checks passed
Samoed added a commit that referenced this pull request Dec 22, 2024
* feat: add new arctic v2.0 models (#1574)

* feat: add new arctic v2.0 models

* chore: make lint

* 1.24.0

Automatically generated by python-semantic-release

* fix: Add namaa MrTydi reranking dataset (#1573)

* Add dataset class and file requirements

* pass tests

* make lint changes

* adjust meta data and remove load_data

---------

Co-authored-by: Omar Elshehy <[email protected]>

* Update tasks table

* 1.24.1

Automatically generated by python-semantic-release

* fix: Eval langs not correctly passed to monolingual tasks (#1587)

* fix SouthAfricanLangClassification.py

* add check for langs

* lint

* 1.24.2

Automatically generated by python-semantic-release

* feat: Add ColBert (#1563)

* feat: add max_sim operator for IR tasks to support multi-vector models

* docs: add doc for Model2VecWrapper.__init__(...)

* feat: add ColBERTWrapper to models & add ColBERTv2

* fix: resolve issues

* fix: resolve issues

* Update README.md

Co-authored-by: Roman Solomatin <[email protected]>

* Update README.md

Co-authored-by: Isaac Chung <[email protected]>

* Update README.md

Co-authored-by: Isaac Chung <[email protected]>

* Update mteb/evaluation/evaluators/RetrievalEvaluator.py

Co-authored-by: Isaac Chung <[email protected]>

* Update README.md

Co-authored-by: Isaac Chung <[email protected]>

* README.md: rm subset

* doc: update example for Late Interaction

* get colbert running without errors

* fix: pass is_query to pylate

* fix: max_sim add pad_sequence

* feat: integrate Jinja templates for ColBERTv2 and add model prompt handling

* feat: add revision & prompt_name

* doc: pad_sequence

* rm TODO jina colbert v2

* doc: warning: higher resource usage for MaxSim

---------

Co-authored-by: sam021313 <[email protected]>
Co-authored-by: Roman Solomatin <[email protected]>
Co-authored-by: Isaac Chung <[email protected]>

* 1.25.0

Automatically generated by python-semantic-release

* doc: colbert add score_function & doc section (#1592)

* doc: colbert add score_function & doc section

* doc: Update README.md

Co-authored-by: Kenneth Enevoldsen <[email protected]>

* doc: Update README.md

Co-authored-by: Isaac Chung <[email protected]>

---------

Co-authored-by: sam021313 <[email protected]>
Co-authored-by: Kenneth Enevoldsen <[email protected]>
Co-authored-by: Isaac Chung <[email protected]>

* Feat: add support for scoring function (#1594)

* add support for scoring function

* lint

* move similarity to wrapper

* remove score function

* lint

* remove from InstructionRetrievalEvaluator

* Update mteb/evaluation/evaluators/RetrievalEvaluator.py

Co-authored-by: Kenneth Enevoldsen <[email protected]>

* remove score function from README.md

---------

Co-authored-by: Kenneth Enevoldsen <[email protected]>

* Add new models nvidia, gte, linq (#1436)

* Add new models nvidia, gte, linq
* add warning for gte-Qwen and nvidia models re: instruction used in docs as well
---------
Co-authored-by: isaac-chung <[email protected]>

* Leaderboard: Refined plots (#1601)

* Added embedding size guide to performance-size plot, removed shading on radar chart

* Changed plot names to something more descriptive

* Made plots failsafe

* fix: Leaderboard refinements (#1603)

* Added explanation of aggregate measures

* Added download button to result tables

* Task info gets sorted by task name

* Added custom, shareable links for each benchmark

* Moved explanation of aggregate metrics to the summary tab

* 1.25.1

Automatically generated by python-semantic-release

* Feat: Use similarity scores if available (#1602)

* Use similarity scores if available

* lint

* Add NanoBEIR Datasets (#1588)

* add NanoClimateFeverRetrieval task, still requires some debugging
* move task to correct place in init file
* add all Nano datasets and results
* format code
* Update mteb/tasks/Retrieval/eng/tempCodeRunnerFile.py
Co-authored-by: Roman Solomatin <[email protected]>
* pin revision to commit and add datasets to benchmark.py
* create new benchmark for NanoBEIR
* add revision when loading datasets
* lint
---------
Co-authored-by: Roman Solomatin <[email protected]>
Co-authored-by: isaac-chung <[email protected]>

* Update tasks table

* Feat: Evaluate missing languages (#1584)

* init
* fix tests
* update mock retrieval
* update tests
* use subsets instead of langs
* Apply suggestions from code review
Co-authored-by: Isaac Chung <[email protected]>
* fix tests
* add to readme
* rename subset in readme
---------
Co-authored-by: Isaac Chung <[email protected]>

* Add IBM Granite Embedding Models (#1613)

* add IBM granite embedding models
* lint formatting
* add adapted_from and superseded_by to ModelMeta

* fix: disable co2_tracker for API models (#1614)

* 1.25.2

Automatically generated by python-semantic-release

* fix: set `use_instructions` to True in models using prompts (#1616)

feat: set `use_instructions` to True in models using prompts

* 1.25.3

Automatically generated by python-semantic-release

* update RetrievalEvaluator.py

* update imports

* update imports and metadata

* fix tests

* fix tests

* fix output path for retrieval

* fix similarity function

---------

Co-authored-by: Daniel Buades Marcos <[email protected]>
Co-authored-by: github-actions <[email protected]>
Co-authored-by: Omar Elshehy <[email protected]>
Co-authored-by: Omar Elshehy <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sam <[email protected]>
Co-authored-by: sam021313 <[email protected]>
Co-authored-by: Isaac Chung <[email protected]>
Co-authored-by: Kenneth Enevoldsen <[email protected]>
Co-authored-by: Alexey Vatolin <[email protected]>
Co-authored-by: Márton Kardos <[email protected]>
Co-authored-by: KGupta10 <[email protected]>
Co-authored-by: Aashka Trivedi <[email protected]>