
Closed Captions

Kesava Krishnan Madavan edited this page Feb 21, 2024 · 3 revisions

Introduction

This documentation describes how to use transcription with the Webex Web Meetings SDK v3, which uses the Voicea plugin. With this upgrade, an application can also list the supported languages and let the user choose both a Spoken Language and a Caption Language.

Prerequisites

To start receiving transcription, the following prerequisites must be met:

  • The meeting host should have the Webex Assistant enabled
  • The user should have joined the meeting

Start Transcription

Once the prerequisites above are fulfilled, transcription can be started by calling the following API:

await meeting.startTranscription(options);
| Param Name | Param Type | Mandatory? | Description | Value |
| --- | --- | --- | --- | --- |
| options | Object | No | Config object used to pass various settings while starting a transcription | `{ spokenLanguage?: String }` |

Set Caption & Spoken Languages

When the transcription is started, the following event is fired to let the application know that the transcription will be starting. This event callback will also receive a payload with the list of supported Spoken and Caption Languages.

meeting.on('meeting:receiveTranscription:started', (payload) => {
      console.log(payload.captionLanguages);
      console.log(payload.spokenLanguages);
});

The captionLanguages and spokenLanguages are two arrays containing the codes of the supported languages. The codes follow the ISO 639 standard.

Start Receiving Transcription

Once meeting:receiveTranscription:started is received, the next event to listen for is the following one, which delivers the caption information. It can be subscribed to as follows:

meeting.on('meeting:caption-received', (payload) => {
      //use payload to display captions
});

Sample Payload

{
    "captions": [
        {
            "id": "88e1b0c9-7483-b865-f0bd-a685a5234943",
            "isFinal": true,
            "text": "Hey, everyone.",
            "currentSpokenLanguage": "en",
            "timestamp": "1:22",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Kesava Krishnan Madavan"
            }
        },
        {
            "id": "e8fd9c60-1782-60c0-92e5-d5b22c80df2b",
            "isFinal": true,
            "text": "That's awesome.",
            "currentSpokenLanguage": "en",
            "timestamp": "1:26",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Kesava Krishnan Madavan"
            }
        },
        {
            "id": "be398e11-cf08-92e7-a42d-077ecd60aeea",
            "isFinal": true,
            "text": "आपका नाम क्या है?",
            "currentSpokenLanguage": "hi",
            "timestamp": "1:55",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Kesava Krishnan Madavan"
            }
        },
        {
            "id": "84adc1a7-b3c3-5a49-0588-aa787b1437eb",
            "isFinal": true,
            "translations": {
                "en": "What is your name?"
            },
            "text": "आपका नाम क्या है?",
            "currentSpokenLanguage": "hi",
            "timestamp": "2:11",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Kesava Krishnan Madavan"
            }
        },
        {
            "id": "84c89387-cd5d-ce15-1867-562c0a91155f",
            "isFinal": true,
            "translations": {
                "hi": "तुम्हारा नाम क्या है?"
            },
            "text": "What's your name?",
            "currentSpokenLanguage": "en",
            "timestamp": "2:46",
            "speaker": {
                "speakerId": "8093d335-9b96-4f9d-a6b2-7293423be88a",
                "name": "Kesava Krishnan Madavan"
            }
        }
    ],
    "interimCaptions": {
        "88e1b0c9-7483-b865-f0bd-a685a5234943": [],
        "e8fd9c60-1782-60c0-92e5-d5b22c80df2b": [],
        "be398e11-cf08-92e7-a42d-077ecd60aeea": [],
        "84adc1a7-b3c3-5a49-0588-aa787b1437eb": [],
        "84c89387-cd5d-ce15-1867-562c0a91155f": []
    }
}

Changing the Spoken Language

During the meeting, if the spoken language needs to be changed, call the following API:

const currentSpokenLanguage = await meeting.setSpokenLanguage(selectedLanguage);

In this API, selectedLanguage is the language code chosen by the user from the spokenLanguages list received when transcription started. When the user speaks in the chosen spoken language, the caption is displayed in that same language. However, if the user has also set a caption language to a different language, the captions are shown in that caption language instead.

Changing the Caption Language

During the meeting, if the caption language needs to be changed, call the following API:

await meeting.setCaptionLanguage(selectedLanguage);

In this API, selectedLanguage is the language code chosen by the user from the captionLanguages list received when transcription started. Once the caption language is set, captions are translated into this language regardless of which language is being spoken.

Stop Receiving Transcription

If a user wants to stop receiving transcription, call the following API:

meeting.stopTranscription();