04/06/92

CONTACT: Stanford University News Service (650) 723-2558

STENOCAPTIONING DELIVERS LECTURES TO DEAF STUDENTS

STANFORD -- As an undergraduate with a severe-to-profound hearing loss, Ted Chen found class lectures largely useless.

"Basically, I just did the work assigned and read the book," Chen said. "Attending class was primarily a matter of administrative details, like handing in problem sets and getting handouts, rather than an academic experience."

Even Stanford-provided oral interpreters, who sat before him and silently repeated the lectures so that he could lip-read, weren't a complete answer.

"That was better than sitting in class and not understanding anything at all," he said, "but it was still imperfect and very tiring for me. My word recognition rate would go down drastically after a short time."

Now, however, Chen is getting a full measure out of lectures thanks to "stenocaptioning." Stanford, where he is a first-year law student, is one of the first universities in the nation to use the new technology, which marries old-fashioned court reporting with the latest in computer software.

Stenocaptioning hooks a laptop computer to a stenographer's machine. Special software in the laptop, written by Rapidtext of Irvine, Calif., converts the stenographer's code into English so that the student can read as the lecturer speaks, at more than 260 words a minute.

Local court reporters volunteer for the service, or are paid by the university at varying rates, usually less than the cost of sign language interpreters, according to Debby Kajiyama, who coordinates the service for Stanford's Disability Resource Center.

"I've gotten quite a few calls from prospective freshmen who are hearing-impaired and want to know more about this," Kajiyama said. "There's a real interest in how they can utilize this technology."

John Interrante, a deaf doctoral student in Stanford's computer systems laboratory, calls stenocaptioning "the greatest thing since sliced bread."

"An interpreter might be able to tell me about 70 or 80 percent of what's said, but she or he can't do a perfect job," he said. "Spoken language has many more words than sign language has signs . . . and many words (especially proper names) are hard to lip-read.

"Stenocaptioning feels like a quantum improvement in comparison, because it lets me understand between 95 and 100 percent of what's said.

"Occasionally, the stenocaptionist keys in a word that the computer doesn't recognize or that the computer thinks sounds like a different word. But it's amazing how good a job the computer does, and it does an even better job after the stenocaptionist enters a few jargon words and proper names into the computer's vocabulary."

Stenocaptioning has other advantages as well. The computer's screen displays 24 lines of text, so the student can quickly review what has just been said. And the student can take notes from the screen without missing information -- something nearly impossible with oral or sign interpreters -- or get a printout or computer disk of the entire transcript.

"For the first time, I am actually learning something from classes," Chen said. "I wonder at times how much better my undergraduate grades would have been.

"The only drawback to real-time captioning is that now that I realize how much I missed before, I get much more nervous about skipping classes."

-tmj/stenocaptioning-
