@liferoad liferoad commented Sep 1, 2025

Add support for batch mode execution in WriteToPubSub transform, which previously only worked in streaming mode. Update documentation and add tests to verify batch mode functionality with and without attributes.

Fixes #35990
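The tests mentioned above cover writes "with and without attributes". As an illustration of the two input shapes such a transform accepts, here is a minimal sketch using a hypothetical `FakePubsubMessage` stand-in — not Beam's actual `PubsubMessage` class or its real validation logic:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a Pub/Sub message, for illustration only.
@dataclass
class FakePubsubMessage:
    data: bytes
    attributes: dict = field(default_factory=dict)

def normalize(element, with_attributes):
    """Accept raw bytes when with_attributes=False, message objects
    when with_attributes=True, mirroring the two tested modes."""
    if with_attributes:
        if not isinstance(element, FakePubsubMessage):
            raise TypeError("expected a message object when with_attributes=True")
        return element
    if not isinstance(element, bytes):
        raise TypeError("expected bytes when with_attributes=False")
    return FakePubsubMessage(data=element)

print(normalize(b"hello", with_attributes=False).data)  # → b'hello'
msg = FakePubsubMessage(b"hi", {"k": "v"})
print(normalize(msg, with_attributes=True).attributes)  # → {'k': 'v'}
```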




Remove DirectRunner-specific override for WriteToPubSub since it now works by default for both modes. Add DataflowRunner-specific override framework with placeholder for future streaming optimizations. Implement buffering DoFn for efficient PubSub writes in both modes.

Update tests to verify behavior without checking exact call arguments, since the data is protobuf-serialized.
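The "buffering DoFn" idea described above can be sketched in plain Python: accumulate elements, flush when the buffer reaches a size limit, and flush again at bundle end so the behavior is the same in batch and streaming. `BufferingWriter` and `MAX_BUFFER` are illustrative names, not the PR's actual implementation:

```python
class BufferingWriter:
    MAX_BUFFER = 100  # illustrative threshold, not the real value

    def __init__(self, publish_fn):
        self._publish = publish_fn  # e.g. a Pub/Sub client publish call
        self._buffer = []

    def process(self, element):
        self._buffer.append(element)
        if len(self._buffer) >= self.MAX_BUFFER:
            self._flush()

    def finish_bundle(self):
        # Whatever remains in the buffer at bundle end is flushed before
        # the bundle commits, so nothing is lost in either mode.
        self._flush()

    def _flush(self):
        if self._buffer:
            self._publish(list(self._buffer))
            self._buffer.clear()

sent = []
w = BufferingWriter(sent.append)
for i in range(250):
    w.process(i)
w.finish_bundle()
print([len(batch) for batch in sent])  # → [100, 100, 50]
```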

codecov bot commented Sep 3, 2025

Codecov Report

❌ Patch coverage is 83.33333% with 11 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.80%. Comparing base (a4ad5cd) to head (3b7e445).
⚠️ Report is 13 commits behind head on master.

Files with missing lines                               | Patch % | Lines
...ache_beam/runners/dataflow/ptransform_overrides.py  | 65.21%  | 8 Missing ⚠️
sdks/python/apache_beam/io/gcp/pubsub.py               | 92.10%  | 3 Missing ⚠️
@@             Coverage Diff              @@
##             master   #36027      +/-   ##
============================================
+ Coverage     56.79%   56.80%   +0.01%     
  Complexity     3385     3385              
============================================
  Files          1220     1220              
  Lines        185081   185147      +66     
  Branches       3508     3508              
============================================
+ Hits         105110   105167      +57     
- Misses        76646    76655       +9     
  Partials       3325     3325              
Flag   | Coverage Δ
python | 81.00% <83.33%> (+<0.01%) ⬆️


@scwhittle scwhittle left a comment


Thanks! A few comments, but I think this approach leaves the code in a more readable state.

timer_start = time.time()
for future in futures:
  remaining = self.FLUSH_TIMEOUT_SECS - (time.time() - timer_start)
  future.result(remaining)

Why is there a flush timeout? Completing processing without waiting for all of the messages to be consumed by Pub/Sub could lead to data loss.


@liferoad liferoad Sep 3, 2025


I added the timeout exception, which should trigger Dataflow to retry batch jobs. The idea is to avoid the pipeline getting stuck while publishing messages.
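The shared-deadline pattern under discussion can be sketched with stdlib futures. `FLUSH_TIMEOUT_SECS` and `wait_all` are illustrative names (the PR's actual constant and structure may differ), and `time.monotonic` is used here for robustness against clock adjustments:

```python
import time
from concurrent.futures import ThreadPoolExecutor

FLUSH_TIMEOUT_SECS = 5.0  # illustrative; not necessarily the PR's value

def wait_all(futures, timeout=FLUSH_TIMEOUT_SECS):
    """Wait on every future against one shared deadline. If the deadline
    passes, future.result() raises TimeoutError, which surfaces from the
    DoFn so the runner can retry the bundle instead of silently dropping
    unacknowledged publishes."""
    start = time.monotonic()
    for f in futures:
        remaining = timeout - (time.monotonic() - start)
        # A non-positive timeout makes result() raise immediately if the
        # future is not already done.
        f.result(timeout=max(0.0, remaining))

with ThreadPoolExecutor() as ex:
    futs = [ex.submit(lambda x: x * 2, i) for i in range(3)]
    wait_all(futs)
    print([f.result() for f in futs])  # → [0, 2, 4]
```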

@liferoad liferoad marked this pull request as ready for review September 3, 2025 13:57
@liferoad liferoad requested a review from scwhittle September 3, 2025 14:01

github-actions bot commented Sep 3, 2025

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers


github-actions bot commented Sep 3, 2025

Assigning reviewers:

R: @jrmccluskey for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).


Successfully merging this pull request may close these issues.

[Bug]: WriteToPubSub Sink breaks in batch mode