Trying to capture the general architecture we talked about for stream recording:
- The Feeder database is the source of truth for which streams we're recording and when
- Feeder writes a "stream recording config" to some bulletproof global location whenever it changes
  - Could be as simple as a JSON file in `s3://prx-feed` or some other location
  - Or a DynamoDB global table
  - Indicates which streams to record, what redundancy level, what start/stop timestamps, what regions, etc. (see the config sketch after this list)
- Some "stream recording orchestrator" lambda lives in
feeder.ymlCFN stack- Triggered at some interval (10 minutes before every hour or something?) by CW cron event
- Wakes up, reads the config from (2), launches Oxbow
- Lives in both regions (us-east-1, us-west-2)
- Has its own "temp" S3 bucket to store stream recordings
- After Oxbow completes, it sends a callback to the Feeder SQS rails worker (see the callback sketch after this list)
  - Indicates what stream, what hour this is, where the file is, etc.
  - Uses the "temp" bucket in (4)
- After getting the callback, Feeder (rails) decides how to process the file
  - Probably messaging Porter to trim the file to just the hour
  - Probably generating a waveform
  - NOTE: this does assume we'll lose the temp full-recording file and won't be able to re-trim. Is that ok?
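
A minimal sketch of what the "stream recording config" write from Feeder might look like, assuming the JSON-file-in-S3 option. The bucket name, key, and field names here are placeholders to show the shape, not a settled schema:

```python
import json
import boto3

# Hypothetical config shape -- field names are placeholders, not a settled schema.
stream_recording_config = {
    "streams": [
        {
            "stream_id": "example-stream",          # which stream to record
            "url": "https://example.com/live.mp3",  # where to pull it from (assumption)
            "redundancy": 2,                         # how many concurrent recorders
            "regions": ["us-east-1", "us-west-2"],
            "start_at": "2024-01-01T00:00:00Z",
            "stop_at": "2024-01-01T06:00:00Z",
        }
    ]
}

# Write the config to the "bulletproof" global location; the bucket and key
# are stand-ins for whatever location we settle on.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="prx-feed",
    Key="stream-recording/config.json",
    Body=json.dumps(stream_recording_config).encode("utf-8"),
    ContentType="application/json",
)
```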
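A rough sketch of the orchestrator lambda, assuming it is triggered by a CloudWatch cron rule (e.g. `cron(50 * * * ? *)` fires at minute 50, i.e. 10 minutes before every hour), reads the JSON config from S3, and kicks off one Oxbow run per stream. How Oxbow is actually launched is not decided here, so that part is stubbed:

```python
import json
import os
import boto3

s3 = boto3.client("s3")

# Placeholders -- the real bucket/key would come from the stack's environment.
CONFIG_BUCKET = os.environ.get("CONFIG_BUCKET", "prx-feed")
CONFIG_KEY = os.environ.get("CONFIG_KEY", "stream-recording/config.json")


def launch_oxbow(stream, hour):
    """Stub: however we actually start an Oxbow recording (ECS task, Lambda,
    whatever) would go here. Left unimplemented on purpose."""
    print(f"would launch Oxbow for {stream['stream_id']} covering {hour}")


def handler(event, context):
    # Triggered by a CloudWatch cron event, e.g. cron(50 * * * ? *)
    # (minute 50 of every hour -> 10 minutes before the hour).
    obj = s3.get_object(Bucket=CONFIG_BUCKET, Key=CONFIG_KEY)
    config = json.loads(obj["Body"].read())

    for stream in config.get("streams", []):
        # TODO: skip streams whose start/stop window doesn't cover the
        # upcoming hour, honor redundancy/regions, etc.
        launch_oxbow(stream, hour="upcoming")
```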
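And a sketch of the post-Oxbow callback to the Feeder SQS worker. The queue URL and message fields are made up for illustration; the point is just to tell Feeder which stream/hour the recording covers and where the temp file landed:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL and message shape -- the real names live in Feeder.
FEEDER_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/feeder-stream-recordings"

callback = {
    "stream_id": "example-stream",
    "hour": "2024-01-01T05:00:00Z",  # which hour this recording covers
    "region": "us-east-1",
    # Points into the orchestrator's "temp" bucket from (4).
    "temp_uri": "s3://temp-stream-recordings-bucket/example-stream/2024010105.mp3",
}

sqs.send_message(QueueUrl=FEEDER_QUEUE_URL, MessageBody=json.dumps(callback))
```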