
Commit e9bf293

Typo
1 parent 458abd4 commit e9bf293

File tree

1 file changed (+2, -2 lines)


Pyannote_plays_and_Whisper_rhymes_v_2_0.ipynb

Lines changed: 2 additions & 2 deletions
@@ -23,9 +23,9 @@
 23 23      "In the following, I use [**`pyannote-audio`**](https://github.com/pyannote/pyannote-audio), a speaker diarization toolkit by Hervé Bredin, to identify the speakers, and then match it with the transcriptions of Whispr, linked to the video. The input can be YouTube or an video/audio file (also on Google Drive). I try it on a [Customer Support Call](https://youtu.be/hpZFJctBUHQ). Check the result [**here**](https://majdoddin.github.io/dyson.html).\n",
 24 24      "\n",
 25 25      "To make it easier to match the transcriptions to diarizations by speaker change, Sarah Kaiser [suggested](https://github.com/openai/whisper/discussions/264#discussioncomment-3825375) runnnig the pyannote.audio first and then just running whisper on the split-by-speaker chunks. \n",
 26    -    "For sake of performance (and transcription quality?), we attach the audio segments into a single audio file with a silent spacer as a seperator, and run whisper on it. Enjoy it!\n",
    26 +    "For sake of performance (and transcription quality?), we attach the audio segments into a single audio file with a silent spacer as a separator, and run whisper on it. Enjoy it!\n",
 27 27      "\n",
 28    -    "(For sake of performance , I also tried attaching the audio segments into a single audio file with a silent -or beep- spacer as a seperator, and run whisper on it see it on [colab](https://colab.research.google.com/drive/1HuvcY4tkTHPDzcwyVH77LCh_m8tP-Qet?usp=sharing). It [works](https://majdoddin.github.io/lexicap.html) on some audio, and fails on some (Dyson's Interview). The problem is, whisper does not reliably make a timestap on a spacer. See the discussions [#139](https://github.com/openai/whisper/discussions/139) and [#29](https://github.com/openai/whisper/discussions/29))\n",
    28 +    "(For sake of performance , I also tried attaching the audio segments into a single audio file with a silent -or beep- spacer as a separator, and run whisper on it see it on [colab](https://colab.research.google.com/drive/1HuvcY4tkTHPDzcwyVH77LCh_m8tP-Qet?usp=sharing). It [works](https://majdoddin.github.io/lexicap.html) on some audio, and fails on some (Dyson's Interview). The problem is, whisper does not reliably make a timestap on a spacer. See the discussions [#139](https://github.com/openai/whisper/discussions/139) and [#29](https://github.com/openai/whisper/discussions/29))\n",
 29 29      "\n",
 30 30      "The Markdown form used below is from [@ArthurFDLR](https://github.com/ArthurFDLR/whisper-youtube/). "
 31 31  ]

Comments (0)