Captions are provided for live multimedia.
The purpose of this checkpoint is to ensure that equivalent alternatives are provided for any multimedia presentation, including live multimedia. These equivalent alternatives must be synchronized with the presentation, typically through computer-assisted real-time (CART) captioning: an operator listens to the broadcast and re-speaks the content, and voice-recognition software trained to that operator's voice generates the live captions.
The checkpoint also allows for equivalent facilitation when open or closed captions cannot be provided. Equivalent facilitation for captioning can be provided via a separate communication channel, such as a third-party transcription service or by having meeting participants use a group chat to transcribe the discussion.
To comply with this checkpoint, you must apply the following technique.
- Live captions: Create captions for live multimedia.
Note: The example presented in the technique is not exhaustive. It is meant to illustrate the spirit of this checkpoint.
Live captions: Create captions for live multimedia.
General example 1
The IBM CFO delivers a quarterly presentation to IBM stockholders broadcast live on IBM.com and a real-time captioning service provides the captions for people who are deaf or hard of hearing.
For additional information, refer to the WCAG 2.0 examples of creating captions for live synchronized media (link resides outside of ibm.com) and Success Criterion 1.2.4 examples of creating live captions (link resides outside of ibm.com).
Required unit tests for general development technique 1
Manually perform the following unit tests.
- Locate the live multimedia object.
- Before the broadcast, verify that you have a process to provide real-time captioning of audio and video content for the live broadcast.
- During the broadcast, monitor the broadcast to ensure that captions are actually delivered in real time.
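Monitoring for real-time delivery can be partly automated. The sketch below is a minimal illustration, not part of the checkpoint: the function name, the parallel-timestamp format, and the 5-second threshold are all assumptions chosen for the example.

```python
def caption_lag_report(spoken_times, caption_times, max_lag=5.0):
    """Flag captions that appear too long after the words were spoken.

    spoken_times and caption_times are parallel lists of timestamps in
    seconds: when each phrase was spoken and when its caption appeared.
    Returns the indices of captions lagging by more than max_lag seconds.
    """
    return [
        i
        for i, (spoken, shown) in enumerate(zip(spoken_times, caption_times))
        if shown - spoken > max_lag
    ]
```

For example, `caption_lag_report([0.0, 10.0], [3.0, 18.0])` returns `[1]`: the second caption appeared 8 seconds after the words were spoken, beyond the assumed 5-second threshold.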
Although you do not have to implement the recommended techniques in order to comply with this checkpoint, you should review them because they can improve the accessibility and usability of the application.
The recommended development techniques for this checkpoint are as follows:
- Ensure your captions adhere to the standard captioning guidelines.
- Provide a note saying "No sound is used in this clip" for video-only clips.
- Use SMIL 1.0 to provide captions for all languages included in the audio tracks.
- Use SMIL 2.0 to provide captions for all languages included in the audio tracks.
- Provide a pop-up text window for a short audio presentation.
Recommended development example 1
Captions should adhere to the following guidelines:
- Provide information at the beginning of the movie (or with the link to the movie) that describes the type of captioning used (open or closed), and the language of the captions.
- Add captions to the bottom of the movie or object. Ensure that the captions do not block any important action in the movie. The captions should enhance the movie, not detract from it.
- Ensure that the captions are no more than three lines at one time.
- Follow sentence capitalization and proper punctuation standards and use italics to emphasize words or sentences.
- Include all spoken text in the captions and synchronize the captions with the video and spoken text.
- Use high contrasting colors and easy-to-read, sans-serif fonts for the captions.
- Put the speaker name before the caption each time a speaker changes.
- Include information about background sounds, music, and other relevant information in parentheses. However, do not describe someone's physical appearance or race unless those aspects are germane to the understanding of the video material.
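Several of the guidelines above lend themselves to an automated sanity check. The sketch below assumes a hypothetical minimal cue representation (the `Cue` class and its fields are not part of any captioning standard) and flags cues that exceed three lines, have non-positive duration, or omit a speaker label on a speaker change:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    """One caption cue (a hypothetical minimal representation)."""
    start: float   # display start, in seconds
    end: float     # display end, in seconds
    speaker: str   # who is speaking during this cue
    text: str      # caption text, possibly multi-line

def check_cues(cues):
    """Return a list of guideline violations found in a cue sequence."""
    problems = []
    prev_speaker = None
    for i, cue in enumerate(cues):
        # Guideline: no more than three lines at one time.
        if len(cue.text.splitlines()) > 3:
            problems.append(f"cue {i}: more than three lines")
        # Guideline: captions are synchronized, so each cue needs a
        # positive duration.
        if cue.end <= cue.start:
            problems.append(f"cue {i}: end time not after start time")
        # Guideline: put the speaker name before the caption each time
        # the speaker changes.
        if cue.speaker != prev_speaker and not cue.text.startswith(cue.speaker + ":"):
            problems.append(f"cue {i}: speaker change without a speaker label")
        prev_speaker = cue.speaker
    return problems
```

A check like this cannot judge accuracy or synchronization quality; it only catches mechanical violations, so manual review of the captions is still required.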
Note: Examples for recommended techniques 2-5 are still under development by the WCAG 2.0 Working Group.
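Recommended techniques 3 and 4 above name SMIL. As an interim illustration only, a minimal SMIL 2.0 fragment that plays a video in parallel with caption text streams for each audio language might look like the following (the file names, timing, and caption format are hypothetical):

```xml
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <par>
      <!-- Hypothetical media file for the presentation -->
      <video src="quarterly-results.mp4"/>
      <switch>
        <!-- The first textstream whose systemLanguage matches the
             user's preference is rendered as the caption track -->
        <textstream src="captions-en.rt" systemLanguage="en"/>
        <textstream src="captions-fr.rt" systemLanguage="fr"/>
      </switch>
    </par>
  </body>
</smil>
```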
©2013 IBM Corporation
Last updated January 1, 2013.
W3C Recommendation 11 December 2008: http://www.w3.org/TR/WCAG20/ (link resides outside of ibm.com)
Copyright 1994-2009 W3C (Massachusetts Institute of Technology, European Research Consortium for Informatics and Mathematics, Keio University), All Rights Reserved.