
feat: batch check updates #41

Merged

Conversation

@booniepepper (Contributor) commented Oct 30, 2023

Description

  • feat(client): run sync batch_check with ThreadPoolExecutor for parallelism
  • feat(client): use asyncio semaphore for streaming batch check
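
A minimal sketch of the two approaches, assuming a client with a `check(request)` method; names like `max_parallel_requests` are illustrative, not necessarily the SDK's actual internals:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_check_sync(client, requests, max_parallel_requests=10):
    # Sync variant: fan the checks out across a thread pool.
    # executor.map() preserves input order, so results[i] answers requests[i].
    with ThreadPoolExecutor(max_workers=max_parallel_requests) as executor:
        return list(executor.map(client.check, requests))
```

And the async variant, where a semaphore caps the number of in-flight checks while `asyncio.gather` schedules them all:

```python
import asyncio

async def batch_check_async(client, requests, max_parallel_requests=10):
    semaphore = asyncio.Semaphore(max_parallel_requests)

    async def bounded_check(request):
        # At most max_parallel_requests checks run concurrently.
        async with semaphore:
            return await client.check(request)

    return await asyncio.gather(*(bounded_check(r) for r in requests))
```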

References

Generated by openfga/sdk-generator#228

Review Checklist

  • I have clicked on "allow edits by maintainers".
  • I have added documentation for new/changed functionality in this PR or in a PR to openfga.dev [Provide a link to any relevant PRs in the references section above]
  • The correct base branch is being used, if not main
  • I have added tests to validate that the change in functionality is working as expected

@booniepepper booniepepper requested a review from a team as a code owner October 30, 2023 21:01
@booniepepper booniepepper force-pushed the feat/python-sync-parallel-batch-check branch from b304d82 to 0ddc84d on October 30, 2023 21:35
@booniepepper booniepepper changed the title from "feat/python sync parallel batch check" to "feat: sync parallel batch check" Oct 30, 2023
@booniepepper (Contributor, Author) commented:

TODO: Do some performance testing on this vs the async client

@booniepepper booniepepper force-pushed the feat/python-sync-parallel-batch-check branch from 0ddc84d to d27eb1b on November 1, 2023 21:12
@booniepepper booniepepper reopened this Nov 1, 2023
@booniepepper (Contributor, Author) commented Nov 2, 2023

https://github.com/booniepepper/openfga-python-batch-check-bench

"core" branch has tests against api.us1.fga.dev and the "local" branch expects fga server running on local host (i.e. from docker)

Here are some excerpts of test output using https://github.com/sharkdp/hyperfine:

Benchmark 1: async
  Time (mean ± σ):     505.1 ms ± 170.1 ms    [User: 359.1 ms, System: 99.6 ms]
  Range (min … max):   404.0 ms … 1967.3 ms    100 runs

Benchmark 2: async+semaphore
  Time (mean ± σ):     443.2 ms ±  19.0 ms    [User: 329.3 ms, System: 88.6 ms]
  Range (min … max):   403.7 ms … 493.1 ms    100 runs

Benchmark 3: sync+threadpool
  Time (mean ± σ):     431.1 ms ±  15.6 ms    [User: 319.9 ms, System: 87.8 ms]
  Range (min … max):   408.7 ms … 495.2 ms    100 runs
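
For reference, output in this shape comes from a hyperfine invocation along these lines (the script names are hypothetical; the real benchmark scripts are in the repo linked above):

```sh
hyperfine --runs 100 \
  'python bench_async.py' \
  'python bench_async_semaphore.py' \
  'python bench_sync_threadpool.py'
```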

So with my changes here I see roughly a 20% gain in OpenFgaClient.batch_check(...) performance

Edit: The benchmark was flawed at this time: it was using CheckRequest instead of ClientCheckRequest, which meant the underlying check() functions were never actually called.
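
For anyone who hits the same pitfall: batch_check(...) takes the client-level request type, not the raw API model. A minimal sketch of correct usage (the import path may differ between SDK versions):

```python
from openfga_sdk.client.models import ClientCheckRequest

requests = [
    # One ClientCheckRequest per (user, relation, object) tuple to check.
    ClientCheckRequest(
        user="user:anne",
        relation="viewer",
        object="document:roadmap",
    ),
]

# Sync client:  responses = client.batch_check(requests)
# Async client: responses = await client.batch_check(requests)
```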

@booniepepper booniepepper changed the title from "feat: sync parallel batch check" to "feat: batch check updates" Nov 2, 2023
@booniepepper booniepepper force-pushed the feat/python-sync-parallel-batch-check branch from fc3bd65 to 095c3dd on November 2, 2023 19:19
@rhamzeh rhamzeh added this pull request to the merge queue Nov 2, 2023
Merged via the queue into openfga:main with commit d8f2d42 Nov 2, 2023
4 checks passed
@booniepepper (Contributor, Author) commented:

For posterity, the benchmarks led to these results:

Benchmark: async (current)
  Time (mean ± σ):      1.096 s ±  0.051 s    [User: 0.752 s, System: 0.171 s]
  Range (min … max):    1.029 s …  1.482 s    1000 runs

Benchmark: async+semaphore+gather
  Time (mean ± σ):      1.022 s ±  0.044 s    [User: 0.743 s, System: 0.170 s]
  Range (min … max):    0.957 s …  1.277 s    1000 runs

Benchmark: sync+threadpool
  Time (mean ± σ):     962.3 ms ±  38.3 ms    [User: 785.2 ms, System: 208.9 ms]
  Range (min … max):   896.5 ms … 1087.3 ms    100 runs

Where "current" was from the commit prior to commits in this PR.
