
MVP for user-contributed document audio #435

Open · wants to merge 11 commits into base: main

Conversation

@sammcvicker commented Nov 18, 2024

Opening this as a draft to share the code; not ready to merge.

  • feat(schema): add new mutations and input types
  • feat(graphql): implement new mutation resolvers
  • feat(db): add queries and implement database_sql.rs functions

TODO

  • Couldn't get sqlx-types.json to build

@sammcvicker changed the title from "sm/document user audio rust impl" to "implement rust backend for document user audio" on Nov 18, 2024
@CharlieMcVicker marked this pull request as ready for review on November 22, 2024 at 22:39
Contributor

@GracefulLemming left a comment

Hi Sam, congrats on your first PR with DAILP! I noticed you mentioned this is not ready to merge yet. That being said, it looks like you are off to a really strong start! When you are ready to get this merged, I'll take a second pass to see how things look.

I will also let you and Charlie know when some of the build issues you have been running into are resolved. In the meantime, feel free to ask questions in this PR relating to this work, or send them to me via Charlie on Teams as you have been doing.

Welcome to the team! I am excited to continue working with you.

types/src/database_sql.rs
- Add curateDocumentAudio and CurateDocumentAudioInput schemas
- Add attachAudioToDocument and AttachAudioToDocumentInput schemas
- Add CurateDocumentAudioInput and AttachAudioToDocumentInput
- Implement curate_document_audio and attach_audio_to_document
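
To make the shape of these additions concrete, here is a rough sketch of how a client might call the two new mutations once they land. The mutation and input type names come from the commits above; everything else (the urql client, the `input` argument shape, and the `{ id }` selection) is an assumption for illustration only, not the final schema.

```ts
// Illustrative client documents for the new mutations (not the final schema).
import { gql } from "urql"

// Assumed to attach an uploaded recording to a document.
export const AttachAudioToDocumentMutation = gql`
  mutation AttachAudioToDocument($input: AttachAudioToDocumentInput!) {
    attachAudioToDocument(input: $input) {
      id
    }
  }
`

// Assumed to let editors hide or show a contributed recording.
export const CurateDocumentAudioMutation = gql`
  mutation CurateDocumentAudio($input: CurateDocumentAudioInput!) {
    curateDocumentAudio(input: $input) {
      id
    }
  }
`
```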
@sammcvicker force-pushed the sm/document-user-audio-rust-impl branch from dc24c43 to 1e4df05 on December 6, 2024 at 17:25
@CharlieMcVicker changed the base branch from cm/document-user-audio-migration to main on December 6, 2024 at 19:42
Collaborator

@CharlieMcVicker left a comment

Some comments and commentary on changes we've made today

Comment on lines 609 to 614
Ok(context
    .data::<DataLoader<Database>>()?
    .load_one(dailp::DocumentId(
        document_id.ok_or_else(|| anyhow::format_err!("Document not found"))?
    ))
    .await?.ok_or_else(|| anyhow::format_err!("Document not found"))?)
Collaborator

I think we should have different error messages for these two branches. You could consider moving the first ok_or_else to the line where you assign let document_id = ..., or maybe even to the inside of the update_document_audio_visability function (does it make sense to return None? why would that not be an error?). I think update_word_audio_visability should match if you take that route.

Comment on lines +97 to +98
export const BookmarksTabItem = (props: { doc: DocumentFieldsFragment }) => {
  const docFullPath = props.doc.chapters?.[0]?.path
Collaborator

While we were migrating queries that relied on AnnotatedDoc.audioRecording, it seemed nice to clean up this n+1 query.

Contributor

Tysm!

- const docAudio = doc?.audioRecording
+ const docAudio = doc?.ingestedAudioTrack
Collaborator

The best move here seemed to be to rely directly on ingestedAudioTrack (an alias of the old audioRecording field). This code largely seems like it will not work with contributed audio and, per conversation with @GracefulLemming, is not in use in production.
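
For reference, a hedged example of what a frontend fragment using the alias might look like; the fragment name below is made up, and the subselection simply mirrors what the old audioRecording selection requested.

```ts
// Example only: ingestedAudioTrack replaces the old audioRecording field.
// "DocAudio" is a placeholder fragment name, not one from this PR.
import { gql } from "urql"

export const DocAudioFragment = gql`
  fragment DocAudio on AnnotatedDoc {
    ingestedAudioTrack {
      resourceUrl
      startTime
      endTime
    }
  }
`
```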

Contributor

This looks good for this PR. I think in the future we can also remove audio here completely.

path
# Data about a document needed to render various high level components, such as
# a DocumentHeader. Contrast with the fields resolved on DocumentContents query.
fragment DocumentFields on AnnotatedDoc {
Collaborator

I would love a better name for this. It isn't quite DocumentMetadata, since that is already a special named object. But it is a subset of fields useful for rendering lots of little components that don't need everything in DocumentContentsQuery.

Contributor

Agree with @CharlieMcVicker but I think this is an excellent change otherwise!

Comment on lines -394 to +368
id
title
slug
isReference
date {
  year
}
bookmarkedOn {
  formattedDate
}
sources {
  name
  link
}
audioRecording {
  resourceUrl
  startTime
  endTime
}
translatedPages {
  image {
    url
  }
}
chapters {
  id
  path
}
...DocumentFields
Collaborator

:feelsgood:

Comment on lines -428 to +375
id
title
slug
isReference
date {
  year
}
bookmarkedOn {
  formattedDate
}
sources {
  name
  link
}
audioRecording {
  resourceUrl
  startTime
  endTime
}
translatedPages {
  image {
    url
  }
}
chapters {
  id
  path
}
...DocumentFields
Collaborator

Previously, this large set of fields was requested, but only the id was used. This now works by fetching the DocumentFields fragment and passing the data directly to the child component that renders each bookmark, as in the sketch below.
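
A minimal sketch of that pattern, assuming the bookmark item component is typed against the generated fragment type. The component name, import path, link format, and rendered fields here are illustrative, not the exact code in this PR.

```tsx
// Illustrative only: the parent query spreads ...DocumentFields and hands the
// typed fragment data straight to the component that renders one bookmark.
import * as React from "react"
import * as Dailp from "src/graphql/dailp" // hypothetical path to the generated types

const BookmarkEntry = (props: { doc: Dailp.DocumentFieldsFragment }) => (
  <li>
    <a href={`/documents/${props.doc.slug}`}>{props.doc.title}</a>
  </li>
)
```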

Comment on lines +454 to +462
fragment BookmarkedDocument on AnnotatedDoc {
  id
  title
  slug
  bookmarkedOn {
    formattedDate
  }
}

Collaborator

This fragment could survive with only slug and bookmarkedOn, since slug is our key for documents in GraphCache. That said, keeping title and id feels saner to me.
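
For anyone unfamiliar with why slug matters here: in urql's Graphcache, the cache key for a type comes from its keys config, so a setup along these lines is what lets a fragment carrying slug resolve against cached documents. This is a sketch under that assumption; DAILP's actual cacheExchange options may differ.

```ts
// Sketch of an urql Graphcache configuration that keys documents by slug.
// Illustrative only; the project's real cacheExchange config may differ.
import { cacheExchange } from "@urql/exchange-graphcache"

const cache = cacheExchange({
  keys: {
    // Use the document slug instead of id as the canonical cache key.
    AnnotatedDoc: (data) => (data.slug as string) ?? null,
  },
})
```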

Comment on lines -60 to -64
type NullPick<T, F extends keyof NonNullable<T>> = Pick<
  NonNullable<T>,
  F
> | null

Collaborator

😄

Comment on lines 493 to 505
{p.doc.audioRecording && ( // TODO Implement sticky audio bar
  <div id="document-audio-player" className={css.audioContainer}>
    <span>Document Audio:</span>
    <AudioPlayer
      style={{ flex: 1 }}
      audioUrl={p.doc.audioRecording.resourceUrl}
      showProgress
    />
    {p.doc.audioRecording && !isMobile && (
      <div>
        <a href={p.doc.audioRecording?.resourceUrl}>
          <Button>Download Audio</Button>
        </a>
      </div>
    )}
  </div>
)}
{p.doc.editedAudio.length > 0 && // TODO Implement sticky audio bar
  p.doc.editedAudio.map((audio, index) => (
    <div
      id="document-audio-player"
      className={css.audioContainer}
      key={index}
    >
      <span>Document Audio:</span>
      <AudioPlayer
        style={{ flex: 1 }}
        audioUrl={audio.resourceUrl}
        showProgress
      />
      {!isMobile && (
        <div>
          <a href={audio.resourceUrl}>
            <Button>Download Audio</Button>
          </a>
        </div>
      )}
    </div>
  ))}
Collaborator

For now, we are rendering the same document audio section over and over for each piece of edited audio.

Comment on lines -446 to +444
doc: Pick<Dailp.AnnotatedDoc, "slug" | "title" | "id"> & {
  date: NullPick<Dailp.AnnotatedDoc["date"], "year">
  bookmarkedOn: NullPick<Dailp.AnnotatedDoc["bookmarkedOn"], "formattedDate">
  audioRecording?: NullPick<
    Dailp.AnnotatedDoc["audioRecording"],
    "resourceUrl"
  >
}
doc: Dailp.DocumentFieldsFragment
Collaborator

🎊

netlify bot commented Dec 16, 2024

👷 Deploy request for dailp pending review.

Visit the deploys page to approve it.

🔨 Latest commit: 0ff0204

@CharlieMcVicker changed the title from "implement rust backend for document user audio" to "MVP for document audio" on Dec 16, 2024
@CharlieMcVicker changed the title from "MVP for document audio" to "MVP for user-contributed document audio" on Dec 16, 2024
@CharlieMcVicker
Collaborator

@GracefulLemming update from today:

@sammcvicker and I paired for ~2 1/2 hours to get recording audio into an MVP/working state. We found two sets of mocks on Figma; I had not reviewed the second set, and it seemed to have some unanswered questions jotted in the mocks. We decided to focus on functionality and on getting some good abstractions for upload/record shared between word- and document-oriented components. Aside from UX decisions, the last bits of functionality to add to the frontend are:

  • Editors' ability to curate (hide/show to readers) contributed audio
  • Contributor names and contributed-on dates alongside the audio player

On the first point, I don't quite see any advice on this flow in the mocks, so I think, following the MVP/functionality-first spirit here, we will replicate the checkbox feel from the word audio contribution UX.
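
A minimal sketch of what that checkbox-style curation control could look like on the document side, assuming the word-audio pattern is mirrored. The component name, prop names, and the injected onCurate callback (which would wrap the curateDocumentAudio mutation) are all placeholders, not the final API.

```tsx
// Hypothetical curation toggle, mirroring the word audio "show to readers" checkbox.
// Names and the callback shape are assumptions, not code from this PR.
import * as React from "react"

export const CurateDocumentAudioCheckbox = (props: {
  audioSliceId: string
  visible: boolean
  onCurate: (audioSliceId: string, setVisible: boolean) => Promise<void>
}) => (
  <label>
    <input
      type="checkbox"
      checked={props.visible}
      // Editors tick the box to show contributed audio to readers, untick to hide it.
      onChange={(e) => props.onCurate(props.audioSliceId, e.currentTarget.checked)}
    />
    Show this recording to readers
  </label>
)
```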

On the second, we might consider whether this valuable metadata and context should also be provided in other places the audio player is used, e.g. word audio.

@GracefulLemming
Contributor

@CharlieMcVicker @sammcvicker thank you for the updates! Here are some thoughts:

  1. I agree we should replicate what we do for word audio. Let's full send this.
  2. Also agree names will go on the base level audio component. To cue you in a little more: @nole2701 is currently working on implementing more info on user profiles. After that, we will be updating the audio component to include names.
  3. You're working on a fork, so you don't have access to the environment variables needed to pass CI. You should already have the credentials you need; just save them in the variables section on your fork. I'm happy to pass more info in other channels if needed.
