Captions are provided for all live audio content in synchronized media. (Level AA)

Rationale

People who are deaf or hearing-impaired, who work in noisy environments, or who turn off sound to avoid disturbing others need access to the auditory information in real-time (live) presentations. Live captions provide this access by identifying who is speaking, displaying the dialogue, and noting non-speech information conveyed through sound (such as sound effects, music, laughter, and location) as it occurs. Captioning live presentations has a further benefit: the captioning process can often produce a transcript that people who are blind or visually impaired can later read using synthesized speech, and that anyone who missed the live presentation can use for review.

Live captions must be synchronized with the presentation and be as accurate as possible. This is typically accomplished through Communication Access Realtime Translation (CART), in which a trained captioner creates the live captions, commonly using stenographic equipment or voice-recognition (re-speaking) software. If the live presentation is recorded, the captions (also known as subtitles) should be corrected for accuracy before the recording is made available (per Checkpoint 1.2.2 - Captions (prerecorded)).

This checkpoint also allows for equivalent facilitation when open or closed captions cannot be provided within the live video content. Equivalent facilitation for captioning can be provided via a separate communication channel, such as a third-party transcription service, or by having meeting participants transcribe the discussion in a group chat.

Refer to Understanding SC 1.2.4 for more information (external link to WCAG).

Development Techniques

Review the General techniques as well as the other tabs applicable to your technology. Prioritize technology-specific techniques, and implement the General techniques as needed. You are always required to find, understand, and implement accessible code techniques that meet the checkpoint. The documented techniques and supplements are not exhaustive; they illustrate acceptable ways to achieve the intent of the checkpoint. Where techniques are numbered, they are listed in order of preference, with recommended techniques first. Where used, IBM information that complements the WCAG techniques is indicated as supplemental.

General techniques

Each item in this section represents a technique or combination of techniques that is deemed sufficient for meeting this Checkpoint.

Web (HTML, ARIA, CSS) techniques

Any item in this section represents a technique or combination of techniques deemed sufficient to address particular situations.

Meet G9: Creating captions for live synchronized media AND G87: Providing closed captions using one of the following techniques:

Mobile (iOS) techniques

There are no specific Mobile Native iOS techniques for this checkpoint. However, in addition to the General and Web techniques, the techniques in this section provide additional information and context for hybrid applications.

Hybrid techniques

Meet G87: Providing closed captions with the following:

Hybrid supplements

H95: Using the track element to provide captions
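As a sketch of H95, a recording of a live session could be captioned with an HTML video element and a captions track. The file names below are hypothetical, assuming the corrected captions have been saved as a WebVTT file:

```html
<!-- Sketch of H95 (assumed file names): a recorded live presentation
     with a user-selectable closed-caption track. -->
<video controls>
  <source src="town-hall-recording.mp4" type="video/mp4">
  <!-- kind="captions" marks this track as captions for the audio;
       srclang and label let users select it from the captions menu. -->
  <track kind="captions" src="town-hall-recording.en.vtt"
         srclang="en" label="English" default>
</video>
```

The referenced WebVTT file pairs time codes with caption text, including speaker identification and non-speech sounds; corrections to the live CART output would be applied there before publication.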

Eclipse techniques

There are no specific Eclipse techniques for this checkpoint. Refer to the General techniques section.

Windows-based (MSAA+IA2) techniques

There are no specific Windows-based (MSAA+IA2) techniques for this checkpoint. Refer to the General techniques section.


Most links in this checklist reside outside ibm.com at the Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008: http://www.w3.org/TR/WCAG20/

Copyright © 1994-2017 World Wide Web Consortium, (Massachusetts Institute of Technology, European Research Consortium for Informatics and Mathematics, Keio University, Beihang University). All Rights Reserved.

Copyright © 2001, 2017 IBM Corporation