Hello, I'm working on integrating odas_ros into our ros4hri pipeline.
Ideally, we would like to assign to each tracked human a <voice_id> and its corresponding separated audio stream, to improve speech recognition. To my understanding, the current version of the sound source separation only outputs an AudioFrame containing up to 4 different sources, but it does not match those channels to the sources being tracked. Is that correct? If so, are you planning to add this matching in the near future?
Thank you.
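
To make the intended behaviour concrete, here is a minimal plain-Python sketch (not odas_ros code; the function and parameter names are hypothetical) of the association we are after: splitting the interleaved separated frame by channel and keeping only the channels that the tracker currently assigns to a source, keyed by that source's id.

```python
# Minimal sketch (not odas_ros code): illustrates the desired association between
# separation channels and tracked source ids.
# Assumptions: the separated AudioFrame interleaves its (up to 4) channels, and the
# tracker can tell us, per frame, which source id (if any) occupies each channel.

from typing import Dict, List, Optional

def demux_by_tracked_id(
    interleaved: List[int],            # interleaved samples from the separated frame
    channel_count: int,                # number of separation channels (e.g. 4)
    tracked_ids: List[Optional[int]],  # tracked source id per channel, None if inactive
) -> Dict[int, List[int]]:
    """Split an interleaved frame into one mono buffer per *tracked* source."""
    per_voice: Dict[int, List[int]] = {}
    for channel, voice_id in enumerate(tracked_ids[:channel_count]):
        if voice_id is None:
            continue  # this channel currently carries no tracked source
        per_voice[voice_id] = interleaved[channel::channel_count]
    return per_voice

# Example: 4 channels, sources 12 and 37 tracked on channels 0 and 2.
frame = list(range(16))  # 4 samples x 4 channels, interleaved
print(demux_by_tracked_id(frame, 4, [12, None, 37, None]))
# {12: [0, 4, 8, 12], 37: [2, 6, 10, 14]}
```

With such a channel-to-id mapping exposed by odas_ros, each per-source buffer could then be republished under the corresponding ros4hri voice.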