Delegated delivery reupload #853
Conversation
One question on where the `needs_apple_delivery` column lives. But otherwise looks good to me.
@@ -0,0 +1,5 @@
class AddFetchCountToPodcastContainer < ActiveRecord::Migration[7.0]
  def change
    add_column :apple_podcast_containers, :source_fetch_count, :integer, default: 0, null: false
  end
end
+1 this seems useful to keep track of.
@@ -0,0 +1,29 @@
class AddNeedsAppleDeliveryToEpisode < ActiveRecord::Migration[7.0]
  def change
    add_column :episodes, :needs_apple_delivery, :boolean, default: true
Just curious - why track on the `Episode` vs the `Apple::Episode`? Or I guess that's not a table ... so the `Apple::PodcastContainer`? It seems like you've already got some logic in there to check if it needs re-delivery. Do you need to set this flag when that container doesn't exist yet or something?

(I'm thinking that querying for `Episode.where(needs_apple_delivery: true)` wouldn't exactly be accurate, since that would include episodes from shows that don't do delegated delivery. Whereas `Apple::PodcastContainer.where(needs_apple_delivery: true)` would be.)
Really good point, let's move this off the episode.
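For illustration, a minimal sketch of what that move might look like, assuming the flag ends up on `apple_podcast_containers` (hypothetical migration, not part of this PR):

```ruby
# Hypothetical sketch: track delivery state on the Apple podcast container
# instead of the feeder episode (not the actual migration from this PR).
class AddNeedsAppleDeliveryToPodcastContainer < ActiveRecord::Migration[7.0]
  def change
    add_column :apple_podcast_containers, :needs_apple_delivery, :boolean, default: true
  end
end
```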
👍 looks good to me! Always good to have tooling to force re-uploading.
apple_mark_for_reupload!
podcast&.publish!
Rails.logger.tagged("Episode#publish!") do
  apple_mark_for_reupload!
So this will cause media files to be reuploaded to Apple every time `episode.publish!` is called? (We call that method every time you save the episode via a controller, or any callback comes in.)

Will #858 eliminate the need to do that every time?

(Also wondering: when #858 lands, and we probe for changed media versions from the publish job ... do we need to `mark_for_reupload` at all here? Maybe it becomes a manual rails-console control to force reuploading then?)
The intent here was to capture the fluctuations in media, and yes, #858 should handle changes in media. But you're right, this needs to be removed to preserve the current/existing behavior.

Currently, if the episode is `synced_with_apple?` then we skip it completely in the publishing flow.
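For context, roughly the shape of that skip (illustrative only; the real guard lives in the publishing flow code, not here):

```ruby
# Illustrative sketch: episodes already synced with Apple are left out of
# the delivery work entirely when the publish job builds its list.
episodes_to_deliver = podcast.episodes.reject(&:synced_with_apple?)
```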
Yes, when #858 lands and we probe for changed media versions from the publish job, everything would be handled from the publishing flow. So this can be removed at that time.
# update the feeder episode to indicate that delivery is no longer needed
eps.each do |ep|
  Rails.logger.info("Marking episode as no longer needing delivery", {episode_id: ep.feeder_episode.id})
  ep.feeder_episode.apple_has_delivery!
Could this be a race condition?
- Media version A comes in, fires a publish job
- We get partway through the job, media version B comes in, fires another publish job
- We get here, mark delivered = true
I'm not sure that's actually a problem - probing for new media versions via #858 is the real fix for the why-did-the-user-swap-media-files-so-immediately conditions.
And this explicit control to force re-uploading is handy in any case.
So long as the next publish job runs, I think we'll be ok. Upon entering the next upload flow, #858 will probe the current version (as recorded as metadata on the podcast container) and, if it's changed, update the [media metadata](https://github.com/PRX/feeder.prx.org/blob/main/app/models/apple/podcast_container.rb#L285-L292), including the version.
It's worth thinking about beefing up the transactional guarantees around the publishing job and these state variables. The explicitness of an `apple_needs_delivery?` boolean is related to the underlying state machinery--there are several configurations that can prompt a re-delivery. Whatever those are, `apple_needs_delivery?` also means that we have to step through an entire media upload / delivery flow before we can mark it as false again. So we don't have to piece together the underlying state, just overwrite it, and examine it again.
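To make that "overwrite, then examine" pattern concrete, here's roughly how I picture the flag helpers (a sketch only, assuming the flag stays a plain boolean on the episode as in this revision; the method names come from this PR, the bodies are illustrative):

```ruby
# Illustrative sketch of the explicit-state helpers around the
# needs_apple_delivery column (bodies are assumptions, not the PR's code).
class Episode < ApplicationRecord
  # Force the next publish to step through the full Apple upload/delivery flow.
  def apple_mark_for_reupload!
    update!(needs_apple_delivery: true)
  end

  # Called once the upload/delivery flow has completed successfully.
  def apple_has_delivery!
    update!(needs_apple_delivery: false)
  end

  def apple_needs_delivery?
    needs_apple_delivery
  end
end
```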
closes #749

This adds a column to the episode to enable more explicit state management around episode + media delivery. This will allow the episode media to be reuploaded without disconnecting the container (as in #745).
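For reference, the manual rails-console control discussed above might look something like this (a sketch; the exact lookup and entry point are illustrative, not prescriptive):

```ruby
# Hypothetical console session: force one episode's media to be
# redelivered to Apple on the next publish.
episode = Episode.find(123)        # replace with the episode's id
episode.apple_mark_for_reupload!   # sets needs_apple_delivery back to true
episode.publish!                   # next publish job runs the Apple delivery flow
```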