feat(pubsub): support batch mode in WriteToPubSub transform #36027
base: master
Conversation
Add support for batch mode execution in WriteToPubSub transform, which previously only worked in streaming mode. Update documentation and add tests to verify batch mode functionality with and without attributes.
…reaming

- Remove the DirectRunner-specific override for WriteToPubSub, since the transform now works by default in both modes.
- Add a DataflowRunner-specific override framework with a placeholder for future streaming optimizations.
- Implement a buffering DoFn for efficient PubSub writes in both modes.
- Update tests to verify behavior without checking exact call arguments, since the data is protobuf-serialized.
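The buffering pattern described above can be sketched as a minimal, stdlib-only illustration (this is not Beam's actual implementation; the class name, the `publish_batch` callable, and the buffer limit are all hypothetical). It mirrors the DoFn lifecycle: accumulate messages in `process()`, flush when the buffer fills, and flush the remainder in `finish_bundle()` so no message is dropped when a bundle ends.

```python
class BufferingPublisher:
    """Sketch of a buffering writer following the Beam DoFn lifecycle.

    Hypothetical names: `publish_batch` stands in for whatever callable
    actually sends a list of messages to Pub/Sub.
    """

    MAX_BUFFER_SIZE = 100  # flush once this many messages accumulate

    def __init__(self, publish_batch):
        self._publish_batch = publish_batch
        self._buffer = []

    def process(self, message):
        # Called once per element; flush when the buffer is full.
        self._buffer.append(message)
        if len(self._buffer) >= self.MAX_BUFFER_SIZE:
            self._flush()

    def finish_bundle(self):
        # A bundle may end with a partially filled buffer; flush it so
        # every message is published before the bundle completes.
        self._flush()

    def _flush(self):
        if self._buffer:
            self._publish_batch(list(self._buffer))
            self._buffer.clear()
```

Because batch and streaming runners both drive the same bundle lifecycle, one buffering implementation can serve both execution modes.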
Codecov Report: ❌ Patch coverage is
Additional details and impacted files:

```
@@             Coverage Diff              @@
##             master   #36027      +/-  ##
============================================
+ Coverage     56.79%   56.80%    +0.01%
  Complexity     3385     3385
============================================
  Files          1220     1220
  Lines        185081   185147       +66
  Branches       3508     3508
============================================
+ Hits         105110   105167       +57
- Misses        76646    76655        +9
  Partials       3325     3325
```
Thanks! A few comments but I think this approach leaves the code in a more readable state.
```python
timer_start = time.time()
for future in futures:
  remaining = self.FLUSH_TIMEOUT_SECS - (time.time() - timer_start)
  future.result(remaining)
```
Why is there a flush timeout? Completing processing without waiting for all of the messages to be consumed by Pub/Sub could lead to data loss.
I added the timeout exception, which should trigger Dataflow to retry for batch jobs. The idea is to avoid the pipeline getting stuck while publishing the messages.
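The retry behavior discussed here can be illustrated with a small stdlib-only sketch (a hypothetical helper, not the actual PR code): all publish futures are awaited under one shared deadline, and `concurrent.futures.TimeoutError` propagates when the deadline expires, so a batch runner can fail the work item and retry it instead of hanging indefinitely.

```python
import time
from concurrent.futures import Future

FLUSH_TIMEOUT_SECS = 60.0  # hypothetical deadline for the whole flush


def flush_with_deadline(futures, timeout_secs=FLUSH_TIMEOUT_SECS):
    """Block until every publish future completes, sharing one deadline.

    Raises concurrent.futures.TimeoutError when the deadline expires,
    letting the runner fail and retry the bundle rather than get stuck.
    """
    timer_start = time.time()
    for future in futures:
        remaining = timeout_secs - (time.time() - timer_start)
        # Future.result raises TimeoutError if the result is not ready
        # within `remaining` seconds (a non-positive value means "now").
        future.result(remaining)
```

Note the deadline is shared across the whole batch: each successive `future.result()` call gets only the time left over from the earlier waits, so total flush time is bounded regardless of how many futures are outstanding.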
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment
Assigning reviewers: R: @jrmccluskey for label python. Note: If you would like to opt out of this review, comment. Available commands:
The PR bot will only process comments in the main thread (not review comments).
Fixes #35990