Stream recording architecture #1375

@cavis

Trying to capture the general architecture we talked about for stream recording:

  1. The Feeder database is the source of truth for which streams we're recording, and when
  2. Feeder writes a "stream recording config" to some bulletproof global location whenever it changes
    • Could be as simple as a JSON file in s3://prx-feed or some other location
    • Or a DynamoDB global table
    • Indicates which streams to record, at what redundancy level, with what start/stop timestamps, in which regions, etc. (a config sketch follows this list)
  3. A "stream recording orchestrator" Lambda lives in the feeder.yml CFN stack
    • Triggered on a schedule (10 minutes before every hour, or something like that?) by a CloudWatch cron event
    • Wakes up, reads the config from (2), launches Oxbow (orchestrator sketch below)
    • Lives in both regions (us-east-1, us-west-2)
    • Has its own "temp" S3 bucket to store stream recordings
  4. After Oxbow completes, it sends a callback to a Feeder SQS Rails worker
    • Indicates what stream, what hour this is, where the file is, etc. (example payload below)
    • Uses the "temp" bucket from (3)
  5. After getting the callback, Feeder (Rails) decides how to process the file
    • Probably messaging Porter to trim the file down to just the hour (illustrative Porter job below)
    • Probably generating a waveform
    • NOTE: this does assume we'll lose the temp full-recording file, and won't be able to re-trim. Is that ok?
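
To make (2) concrete, here's one possible shape for the config, assuming a single JSON document in S3. Every field name here is a guess, not a settled schema:

```json
{
  "streams": [
    {
      "id": "my-stream",
      "url": "https://example.com/livestream.mp3",
      "redundancy": 2,
      "regions": ["us-east-1", "us-west-2"],
      "start": "2023-01-01T00:00:00Z",
      "stop": "2023-01-08T00:00:00Z"
    }
  ]
}
```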
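And a minimal sketch of the orchestrator Lambda in (3). It assumes the config above lives at a known S3 key, and that an Oxbow recording gets kicked off by dropping a message on an SQS queue (the bucket/key/queue names and the exact Oxbow invocation are all placeholders):

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

CONFIG_BUCKET = "prx-feed"                   # hypothetical location, per (2)
CONFIG_KEY = "stream-recording-config.json"  # hypothetical key
OXBOW_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/oxbow-recordings"  # hypothetical

def handler(event, context):
    # Fires ~10 minutes before the top of each hour, so the hour we're
    # about to record is the *next* one.
    hour = (datetime.now(timezone.utc) + timedelta(minutes=10)).replace(
        minute=0, second=0, microsecond=0
    )

    config = json.loads(
        s3.get_object(Bucket=CONFIG_BUCKET, Key=CONFIG_KEY)["Body"].read()
    )

    for stream in config["streams"]:
        start = datetime.fromisoformat(stream["start"].replace("Z", "+00:00"))
        stop = datetime.fromisoformat(stream["stop"].replace("Z", "+00:00"))
        if not (start <= hour < stop):
            continue  # this stream isn't scheduled for this hour

        # However Oxbow actually gets launched, it needs roughly this info.
        sqs.send_message(
            QueueUrl=OXBOW_QUEUE_URL,
            MessageBody=json.dumps({
                "stream_id": stream["id"],
                "url": stream["url"],
                "hour": hour.isoformat(),
                # record past the hour boundary; Feeder trims later in (5)
                "duration_seconds": 3600 + 600,
            }),
        )
```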
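The callback in (4) only needs to carry enough for Feeder to find and identify the recording in the temp bucket; something like this (field names are guesses again):

```json
{
  "stream_id": "my-stream",
  "hour": "2023-01-01T05:00:00Z",
  "region": "us-east-1",
  "bucket": "prx-stream-recordings-temp",
  "key": "my-stream/2023-01-01T05.mp3",
  "duration_seconds": 4200
}
```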
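For (5), Feeder would turn that callback into a Porter job that trims the recording to the hour and generates a waveform. This is purely an illustration of the information involved, not Porter's actual job schema:

```json
{
  "Job": {
    "Id": "stream-recording-my-stream-2023-01-01T05",
    "Source": {
      "Mode": "AWS/S3",
      "BucketName": "prx-stream-recordings-temp",
      "ObjectKey": "my-stream/2023-01-01T05.mp3"
    },
    "Tasks": [
      { "Type": "Trim", "Start": "00:00:00", "End": "01:00:00" },
      { "Type": "Waveform" }
    ]
  }
}
```

If the temp file really does go away afterward (per the NOTE above), these trim parameters are a one-shot decision.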
