Whether you've encountered its unmistakable white text on black background at the gym, in a bar, or on the couch, you're familiar with closed captioning. Here's a brief history of the technology that has provided a (mostly accurate) transcript of television programming for more than 40 years, and made its network debut 35 years ago.
TELEVISION CAPTIONING BEGINS WITH JULIA CHILD
The nation's first captioning agency, the Caption Center, was founded in 1972 at the Boston public television station WGBH. The station introduced open television captioning to rebroadcasts of The French Chef with Julia Child and began captioning rebroadcasts of ABC News programs as well, in an effort to make television more accessible to the millions of Americans who are deaf or hard of hearing.
CLOSED CAPTIONING MAKES ITS DEBUT
Captions on The French Chef were visible to everyone who watched, which was great for members of the deaf and hard of hearing community but somewhat distracting for other viewers. So the Caption Center and its partners began developing technology that would display captions only to viewers with a special decoding device.
"The system, called 'closed captioning,' uses a decoder that enables viewers to see the written dialogue or narration at the bottom of the screens," reported The New York Times in 1974. "On sets without the decoder, the written matter is invisible."
The technology, which converts human-generated captions into electronic code that is inserted into a part of the television signal not normally seen, was refined through demonstrations and experiments funded in part by the Department of Health, Education and Welfare. In 1979, the Federal Communications Commission formed the National Captioning Institute (NCI), a nonprofit organization dedicated to promoting and providing access to closed captioning. The first closed-captioned programs were broadcast on March 16, 1980, by ABC, NBC, and PBS. CBS, which wanted to use its own captioning system called teletext, was the target of protests before agreeing to join its network brethren in using closed captioning a few years later.
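The article doesn't name the scheme, but the system demonstrated in the 1970s and launched in 1980 became what is known as Line 21 captioning (later standardized as EIA-608): a couple of parity-protected caption bytes ride along in a scan line outside the visible picture on each frame. As a rough, hypothetical sketch of just the character-packing step (a real encoder also handles control codes, positioning, and timing), it might look something like this in Python:

```python
# Toy sketch, not a real encoder: pack caption text into parity-protected
# byte pairs, roughly the way Line 21 (EIA-608) captioning carries two bytes
# of caption data per frame. Control codes, positioning, and timing omitted.

def with_odd_parity(value: int) -> int:
    """Set the 8th bit so the 7-bit character code has odd parity overall."""
    seven_bits = value & 0x7F
    ones = bin(seven_bits).count("1")
    return seven_bits | (0x80 if ones % 2 == 0 else 0x00)

def pack_caption(text: str) -> list[tuple[int, int]]:
    """Turn caption text into byte pairs, one pair per video frame."""
    codes = [with_odd_parity(ord(ch)) for ch in text]
    if len(codes) % 2:                       # pad so the bytes pair up evenly
        codes.append(with_odd_parity(0x00))  # null filler byte
    return [(codes[i], codes[i + 1]) for i in range(0, len(codes), 2)]

if __name__ == "__main__":
    for first, second in pack_caption("BON APPETIT"):
        print(f"{first:02X} {second:02X}")
```

Roughly speaking, the parity bit lets a decoder spot a corrupted byte and skip it rather than put garbage on screen, which is one reason a weak signal tends to show up as missing or mangled captions.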
CC AND THE LAW
In 1990, Congress passed the Television Decoder Circuitry Act, which mandated that all televisions 13 inches or larger manufactured for sale in the U.S. contain caption decoders. Sixteen years later, the FCC ruled that all broadcast and cable television programs must include captioning, with some exceptions. The exceptions include ads that run less than five minutes and programs that air between 2 a.m. and 6 a.m. According to captions.com, nearly all of the commercials that aired during this year's Super Bowl XLIX were captioned (the cost of captioning a 30-second spot is about $200, a tiny fraction of the approximately $4 million it costs to buy the ad space).
PRERECORDED VS. REAL-TIME CAPTIONING
Real-time captioning, which was introduced in 1982, provides a means for the deaf and hard of hearing community to enjoy live press conferences, local news, and sporting events on television as they happen. Real-time captioning is typically done by court reporters or similarly trained professionals who can type accurately at speeds of up to 250 words per minute. While captioners for prerecorded programs typically use standard keyboards, a real-time captioner requires a steno machine.
HOW A STENO MACHINE WORKS
A steno machine contains 22 keys and uses a code based on phonetics for every word, enabling skilled stenographers to occasionally reach typing speeds of more than 300 words per minute. Words and phrases may be captured by pressing multiple keys at the same time, and with varying force, a process known as chording. Real-time captioners, or stenocaptioners, regularly update their phonetic dictionaries, which translate their phonetic codes into words that are then encoded into the video signal to form closed captions.
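To make the dictionary-lookup step concrete, here is a small hypothetical sketch in Python; the chord strings and entries are invented for illustration and stand in for the tens of thousands of personalized entries a working stenocaptioner maintains:

```python
# Hypothetical illustration: translating steno chords through a personal
# phonetic dictionary. The chords and entries below are invented examples.

PHONETIC_DICTIONARY = {
    "PHRAOES": "please",
    "STPHA*EUGS": "station",
    "SRAOT": "vote",
    "TPHOUPL": "number",
}

def translate(chords: list[str]) -> str:
    """Look up each chord; anything missing from the dictionary comes out
    as raw steno, which is how untranslated gibberish ends up on screen."""
    return " ".join(PHONETIC_DICTIONARY.get(chord, chord) for chord in chords)

print(translate(["PHRAOES", "SRAOT"]))   # "please vote"
print(translate(["PHRAOES", "SRAOD"]))   # one wrong key produces a chord that
                                         # misses the dictionary: "please SRAOD"
```

In practice the lookup also has to resolve homonyms and multi-chord words on the fly, which is where the context problems described below come in.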
REAL-TIME CAPTIONING ISN'T EASY
For live newscasts, closed captioners often receive the script that appears on the teleprompter in advance, but not every anchor follows this script as religiously as Ron Burgundy. Whereas court reporters generally aren't concerned with context and can clean up the first draft of their transcript at a later time, context matters for real-time captioners, who have one shot to accurately record what is being said. Given the speed at which they work, homonyms can prove especially difficult for stenocaptioners, as can unfamiliar or unusual names.
According to Jeff Hutchins, a co-founder of VITAC, one of the nation's leading captioning companies, there's more to being a closed captioner than knowing how to type. "There's a certain pathology to the process that we recognize," he told The New York Times in 2000. "A young lady will come in here, pretty good court reporter, very confident about her abilities, excited that she's going to get into captioning, and she will begin the training process very fired up, excited. Generally we know that in two to four weeks that she is going to be walking around with stooped shoulders, totally dejected, feeling like, 'I'll never get this.'"
Stenocaptioners can make more than $100,000 a year, but the work is stressful. In 2007, Kathy DiLorenzo, former president of the National Court Reporters Association, told the Pittsburgh Post-Gazette that the job is akin to "writing naked, because a million people are reading your words. You can't make a mistake."
MISTAKES HAPPEN
While a faulty decoder or poor signal can produce captioning errors, more often than not they are the result of human error, particularly during live programming. Though stenocaptioners prepare for broadcasts by updating their phonetic dictionaries with phonetic symbols for names and places that they expect to hear, even the most prepared and accurate stenocaptioner can make a mistake from time to time. For instance, all it takes is a single incorrect keystroke to type the phonetic codes for two completely different words. Mistakes aren't limited to words, either. In 2005, American Idol displayed the wrong phone number to vote for contestants in the closed captioning of its broadcast. Media companies are experimenting with automatic error-correcting features, voice-to-text technology, and innovative ways to provide captions for multimedia on the Internet. Though captioning continues to become cheaper, faster, and more prevalent, the occasional mistake will likely always remain.
This post originally appeared in 2009.