
Conversation

@narangvivek10 (Collaborator)

Fixes #41

@narangvivek10 narangvivek10 self-assigned this Dec 12, 2025
@narangvivek10 narangvivek10 added improvement Improves an existing functionality non-breaking Introduces a non-breaking change labels Dec 12, 2025
copy-pr-bot bot commented Dec 12, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@narangvivek10 (Collaborator, Author)

/ok to test e0f7b03

@narangvivek10 (Collaborator, Author)

/ok to test 61fc3b7

@narangvivek10 narangvivek10 marked this pull request as ready for review December 12, 2025 02:20
@narangvivek10 narangvivek10 requested review from a team as code owners December 12, 2025 02:20
The inline review comments below were attached to this diff context in the benchmarks package:

```java
package com.nvidia.cuvs.lucene.benchmarks;

import static com.nvidia.cuvs.lucene.benchmarks.Utils.cleanup;
```
Member

Rather than having yet another set of benchmarks in here, I'd rather we put these in cuvs-bench so that we have a single way of running and reproducing them. One of the cuVS engineers is currently working to make cuvs-bench more pluggable and generalized so that we can plug these in more easily. We want to avoid having many different benchmark codebases sitting around; it becomes confusing to users and introduces more ways for us to find (and have to explain) deltas between expectations and reality. It's altogether easier for us to have a single way to run them and a single source of truth.

Member

Of course, if these are meant to be treated more like microbenchmarks for profiling / perf tuning, then that's completely different. In that case, I'd put these in a directory called bench/ instead of benchmarks/ to match cuvs, raft and other repositories.

Collaborator Author

These are meant to be microbenchmarks. Renaming the directory to bench.
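For context on why a dedicated harness like JMH (the subject of #41) is worth setting up: hand-rolled timing loops such as the stdlib-only sketch below are easy to get wrong. The sketch is purely illustrative and not code from this PR; the class and workload names are hypothetical. It shows the concerns (warmup iterations, repeated measurement, keeping results "live" so the JIT cannot eliminate the measured work) that JMH automates.

```java
// Hand-rolled microbenchmark sketch (stdlib only) illustrating what a harness
// like JMH automates: warmup, repeated measurement, and result consumption.
public class NaiveMicrobench {
    // Hypothetical workload under test; a stand-in, not from the PR.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i * 31L;
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0; // consume results so the JIT can't drop the loop
        for (int i = 0; i < 5; i++) sink += workload(100_000); // warmup
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) { // measurement iterations; keep the best
            long t0 = System.nanoTime();
            sink += workload(100_000);
            long elapsed = System.nanoTime() - t0;
            if (elapsed < best) best = elapsed;
        }
        System.out.println("best ns: " + best + " (sink=" + sink + ")");
    }
}
```

JMH additionally handles forked JVMs, statistical aggregation across runs, and `Blackhole` consumption of results, which is why the harness is preferable to loops like this.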

@jameslamb jameslamb left a comment (Member)

Giving this a ci-codeowners approval; the update-version.sh changes look small and non-controversial.

@jameslamb jameslamb removed the request for review from bdice December 23, 2025 18:54
@narangvivek10 (Collaborator, Author)

/ok to test a87e7f7


Labels: improvement (Improves an existing functionality), non-breaking (Introduces a non-breaking change)

Projects: None yet

Development: Successfully merging this pull request may close these issues: Setup Java Microbenchmark Harness

3 participants