test: working on benchmarks with Finch as backend #772

Status: Open. Wants to merge 5 commits into base: main; showing changes from 2 commits.
**.github/workflows/codspeed.yml** (7 changes: 6 additions & 1 deletion)

```diff
@@ -25,4 +25,9 @@ jobs:
       - name: Run benchmarks
         uses: CodSpeedHQ/action@v3
         with:
-          run: pytest benchmarks/ --codspeed
+          run: SPARSE_BACKEND=Numba pytest benchmarks/ --codspeed
+
+      - name: Run benchmarks
+        uses: CodSpeedHQ/action@v3
+        with:
+          run: SPARSE_BACKEND=Finch pytest benchmarks/ --codspeed
```
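The workflow now runs the benchmark suite twice, selecting the backend through the `SPARSE_BACKEND` environment variable. As a hedged sketch only (the actual lookup inside the `sparse` package may differ, and the default backend name `"Numba"` is an assumption), an env-var backend switch typically looks like:

```python
import os

# Hypothetical re-creation of an env-var backend switch; not the real
# implementation inside the `sparse` package.
_KNOWN_BACKENDS = {"Numba", "Finch"}


def select_backend(default="Numba"):
    """Return the backend named by SPARSE_BACKEND, or the default."""
    backend = os.environ.get("SPARSE_BACKEND", default)
    if backend not in _KNOWN_BACKENDS:
        raise ValueError(f"unknown SPARSE_BACKEND: {backend!r}")
    return backend
```

With this pattern, `SPARSE_BACKEND=Finch pytest ...` flips the whole suite to the Finch backend without touching any test code.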
**benchmarks/conftest.py** (17 changes: 17 additions & 0 deletions)

```diff
@@ -1,3 +1,5 @@
+import sparse
+
 import pytest
 
 
@@ -6,6 +8,21 @@ def seed(scope="session"):
     return 42
 
 
+def get_backend_id(param):
+    backend = param
+    return f"{backend=}"
+
+
+@pytest.fixture(params=[sparse._BACKEND], autouse=True, ids=get_backend_id)
+def backend(request):
+    return request.param
+
+
+@pytest.fixture
+def min_size(scope="session"):
+    return 100
+
+
 @pytest.fixture
 def max_size(scope="session"):
     return 2**26
```
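The autouse `backend` fixture injects the active backend into every benchmark, and `ids=get_backend_id` labels each test id with it. The labeling relies on Python's f-string debug syntax, where `f"{backend=}"` renders the variable name followed by the value's `repr`. A quick check of the id produced for a plain-string parameter:

```python
def get_backend_id(param):
    # f-string "=" debug syntax renders as "backend=<repr of value>"
    backend = param
    return f"{backend=}"


print(get_backend_id("Numba"))  # backend='Numba'
```

In the real fixture the parameter is `sparse._BACKEND` rather than a string, so the rendered id shows that object's `repr` instead.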
**benchmarks/test_benchmark_coo.py** (14 changes: 12 additions & 2 deletions)

```diff
@@ -15,16 +15,22 @@ def format_id(format):
 
 
 @pytest.mark.parametrize("format", ["coo", "gcxs"])
-def test_matmul(benchmark, sides, format, seed, max_size, ids=format_id):
+def test_matmul(benchmark, sides, seed, format, backend, min_size, max_size, ids=format_id):
+    # if backend == sparse._BackendType.Finch:
+    #     pytest.skip()
 
     m, n, p = sides
 
-    if m * n >= max_size or n * p >= max_size:
+    if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
         pytest.skip()
 
     rng = np.random.default_rng(seed=seed)
     x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
     y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)
```

**Collaborator** suggested multiplying the size products by `DENSITY` in the minimum-size check:

```suggestion
    if m * n >= max_size or n * p >= max_size or m * n * DENSITY <= min_size or n * p * DENSITY <= min_size:
```

**Collaborator (author):** All the tests in `test_benchmark_coo.py` were meant to fail with Finch. Finch's `sparse.random` (used to build `x` and `y` on lines 25 and 26) is different, because it doesn't have the `format` argument that is used with Numba.

**Collaborator:** @mtsokol How hard would it be to add this to Finch?

**Collaborator:** You can probably use `sparse.asarray(..., format=<format>)`.

**Collaborator:** Do we just need a `format=nothing` default and then a reformatting line inside `finch.random`?

**Collaborator:** That'd suffice -- if the format isn't what the `format` arg demands, reformat.

**Collaborator:** Do we have a way of specifying format currently?

**Collaborator:**
> Do we have a way of specifying format currently?

`asarray` supports a `format` arg: https://github.com/willow-ahrens/finch-tensor/blob/25d5de0c6b0c75120a06c0b1c2ec1568216c71f8/src/finch/tensor.py#L647
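The fallback proposed in this thread (a `format=None` default plus one reformatting line) can be sketched with stand-ins for the finch-tensor calls. The `_random` and `_asarray` helpers below are assumptions standing in for the real backend functions; only the control flow is the point:

```python
# Stand-ins for the real finch-tensor functions (assumptions, not the
# actual API): `_random` builds a tensor in the backend's default
# format, `_asarray` reformats it.
def _random(shape, density, random_state=None):
    return {"shape": shape, "density": density, "format": "default"}


def _asarray(x, format):
    return {**x, "format": format}


def random(shape, density, format=None, random_state=None):
    # Build in the default format, then reformat on request, as
    # proposed in the review thread.
    x = _random(shape, density, random_state=random_state)
    if format is not None:
        x = _asarray(x, format=format)
    return x
```

With this shape, a `format` argument passed by the Numba-oriented benchmarks becomes a no-op reformat for Finch rather than a `TypeError`.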

```diff
+    if hasattr(sparse, "compiled"):
+        operator.matmul = sparse.compiled(operator.matmul)
+
+    x @ y  # Numba compilation
 
     @benchmark
```

**Collaborator** (on lines +28 to +29) suggested not rebinding `operator.matmul`:

```suggestion
    f = operator.matmul
    if hasattr(sparse, "compiled"):
        f = sparse.compiled(f)
```

**Collaborator** suggested, alternatively:

```suggestion
    def f(x, y):
        return x @ y

    if hasattr(sparse, "compiled"):
        f = sparse.compiled(f)
```

**Collaborator** suggested updating the warm-up call to match:

```suggestion
    f(x, y)  # Compilation
```
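Both suggestions above share one idea: wrap a local name instead of mutating the `operator` module, which would leak the compiled wrapper into every other test. A self-contained sketch of the pattern, where `dummy` and `Mat` are stand-ins invented here so the snippet is runnable:

```python
import operator
import types


def make_matmul(backend_module):
    # Wrap matmul with the backend's compiler when one is exposed,
    # without rebinding operator.matmul globally.
    f = operator.matmul
    if hasattr(backend_module, "compiled"):
        f = backend_module.compiled(f)
    return f


class Mat:
    """Tiny stand-in with a __matmul__ so the sketch is runnable."""

    def __init__(self, v):
        self.v = v

    def __matmul__(self, other):
        return Mat(self.v * other.v)


# Fake backend whose `compiled` just wraps the function unchanged.
dummy = types.SimpleNamespace(compiled=lambda g: lambda x, y: g(x, y))
```

When the backend exposes no `compiled` attribute, `make_matmul` simply returns `operator.matmul` untouched, so the benchmark body is identical for both backends.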
```diff
@@ -52,6 +58,10 @@ def elemwise_args(request, seed, max_size):
 @pytest.mark.parametrize("f", [operator.add, operator.mul])
 def test_elemwise(benchmark, f, elemwise_args):
     x, y = elemwise_args
+
+    if hasattr(sparse, "compiled"):
+        f = sparse.compiled(f)
 
     f(x, y)
 
     @benchmark
```