Stanford University News Service



CONTACT: Stanford University News Service (650) 723-2558

Adapting technology for disabled people

STANFORD -- J.B. Galan rattles off the sentence, "Please ask Mrs. Wright to write to me right away."

As he speaks, the words appear almost magically in thick black characters on the computer screen projected on the wall in front of him.

Galan, injured six years ago in a diving accident, is paralyzed from the shoulders down.

Despite his serious disability, he excels at using a computer for complex tasks, thanks to Project Archimedes, a research program at Stanford's Center for the Study of Language and Information (CSLI). Galan has worked at the project for the last two years, and writing letters, using the telephone, writing and answering e-mail, surfing the Web and designing Web pages are part of his everyday routine.

Galan, who has the trim body of an athlete, sits in a motorized wheelchair. A black band runs across the top of his head. It contains a device called a head-pointer that allows him to control a computer cursor by moving his head. Also attached to the band is a small microphone, which is hooked up to a voice recognition system. On the wall above a bench filled with computer equipment, a Macintosh screen is projected. Using his voice and the head-pointer, Galan dictates text and switches between programs faster than many computer users.

"He may be the most productive quadriplegic in the country," said Neil Scott, technical director of the project, which is aimed at ensuring that people like Galan are not left behind in the rush to new information technologies.

In the United States alone, 40 million to 50 million individuals have one or more disabilities. As computers become more dominant in the workplace, the ability of disabled individuals to use this technology increasingly will mean the difference between being productive, independent members of society or being dependent on societal support.

The country must develop a viable strategy for including the disabled in technology use, said John Perry, professor of philosophy and director of CSLI. As an example of the problem's complexity, he cited the case of blind computer users.

Blind people were among the first people with disabilities to benefit significantly from computer technology. Using simple machines that could read the words on early text-based computer screens and convert them to synthesized speech, blind people were able to operate computers efficiently. But then, first the Macintosh computer and then Windows on DOS machines replaced the text-based interface with buttons, icons and other graphical devices that made the machines much more difficult for blind computer users to operate.

Perry played a videotape of two blind programmers, Juanita Fischer and Vita Zavoli, singing a song they wrote about the replacement of the text-based "DOS" operating system with the newer "Graphical User Interfaces." To the chorus of the Beatles' song "Yesterday," the pair sang, "Why DOS had to go, I don't know, she wouldn't say; I clicked something wrong and my hard drive went away."

According to project director Betsy Macken, "People are making choices about who to leave out when they design the form of information delivery."

Macken said she became involved in this area of research through an interest in sign language and the realization that everyone, including those with disabilities, can be viewed as elements in an information system. This perspective led Macken and Perry to the realization that many of the problems and opportunities technology poses for individuals with disabilities are connected to a main theme at CSLI: The same information can assume many forms.

"When you look at the problem in this way, then you see that the central theme of the Americans with Disabilities Act - the need to require individualized access - is just informational common sense," Macken said.

Slow communication

One of the problems with current technological devices designed to aid communication, she said, is that they are very slow. People normally speak at a rate of 150 to 250 words per minute, but individuals using even the best aids typically speak at only five to 25 words per minute.

"This low rate of speaking frequently causes nondisabled people to become impatient and begin second guessing what the person is trying to say, at which point communication breaks down," Macken said.

Currently, a number of research and development projects are attempting to find technological solutions to the various problems involved in providing access for people with disabilities. The vast majority, however, are aimed at specific disabilities like blindness or deafness or lack of motor control.

Before joining the Archimedes Project, technical director Scott was one of those trying to solve access problems in a piecemeal way. At California State University, Northridge, he had helped to assemble at least 50 different computer stations to aid people with different disabilities.

"I realized that if you tried to put all these technologies on a single computer, it would be far too expensive for anyone to buy and so heavy that no one could lift it. I determined that there must be a better way."

Project Archimedes complements efforts targeted at specific disabilities by addressing the problem of access at a system level. Its researchers have divided the problem into two parts: communications to and from the computer, and communications to and from the individual. Devices being developed include the following:

  • A simple device, called a Total Access Port, that is a little bigger than a mouse and plugs into the keyboard and mouse ports on a number of different computers; it translates incoming signals into signals that are indistinguishable from those produced by a person tapping keys or moving a mouse.
  • A bigger and more complex device, called a Personal Accessor, that communicates with the Total Access Port. It is customized for each individual and translates the signals from the access devices that he or she prefers into a form compatible with the access port.
  • An improved menuing system, called a Linguistic Form Processor, that is aimed at speeding up the communication process for individuals with severe motor control difficulties.
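The division of labor among these devices can be sketched in code. The sketch below is a purely hypothetical illustration, not the project's actual design: the class and event names are invented, but the idea follows the description above - a Personal Accessor normalizes a user's preferred input device into a common event form, and a Total Access Port presents those events to the host as ordinary keyboard and mouse signals.

```python
from dataclasses import dataclass

# Hypothetical illustration of the Archimedes split: the Personal
# Accessor normalizes any input device into abstract events, and the
# Total Access Port replays those events as standard key/mouse input.

@dataclass
class AccessEvent:
    kind: str      # "key" or "mouse"
    payload: str   # character to type, or "dx,dy" cursor movement

class PersonalAccessor:
    """Translates a user's preferred access devices into AccessEvents."""
    def from_speech(self, recognized_text):
        return [AccessEvent("key", ch) for ch in recognized_text]

    def from_head_pointer(self, dx, dy):
        return [AccessEvent("mouse", f"{dx},{dy}")]

class TotalAccessPort:
    """Feeds AccessEvents to the host, which sees only ordinary
    keyboard and mouse signals, not the devices behind them."""
    def __init__(self):
        self.host_log = []

    def replay(self, events):
        for ev in events:
            if ev.kind == "key":
                self.host_log.append(f"keystroke:{ev.payload}")
            else:
                self.host_log.append(f"mouse-move:{ev.payload}")

accessor = PersonalAccessor()
port = TotalAccessPort()
port.replay(accessor.from_speech("hi"))
port.replay(accessor.from_head_pointer(5, -2))
print(port.host_log)  # ['keystroke:h', 'keystroke:i', 'mouse-move:5,-2']
```

Because the translation happens before the signals reach the host, any program running on the host works unchanged, which is the point of addressing access at the system level rather than per application.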

In addition, researchers are testing various access devices, including voice recognition systems, head-pointers, eye-trackers and a device called CyberLink, which moves a computer cursor based on electrical signals from the brain and forehead muscles. The most useful of these devices will be integrated into the Personal Accessor.

Galan provides an example of how the system works, operating a computer using voice recognition and a head-pointer. The voice recognition system developed by Dragon Systems runs on a separate DOS computer. The Personal Accessor converts his speech into computer text that is relayed to the host computer via the Total Access Port. Similarly, the output from his head-pointer is relayed to the computer. Because the host computer can't tell the difference between this form of input and punching keys on the keyboard or moving a mouse, Galan can use any program that the computer can run.

"Speech recognition has gotten a lot of bad reviews, but, with the proper training, it works very well," he said. "It takes most people about three four-hour sessions to learn how to use it effectively."

Total Access Port

The first commercial version of the Total Access Port, called the Bridge, is manufactured under license by a company set up by Scott - ScottBridge of Palo Alto. It is a specialized version of the access port that dramatically simplifies the addition of speech recognition to any computer by allowing all of the speech recognition functions to be performed by a separate computer while the host machine remains unmodified. Sun and Macintosh versions of the Bridge now are available, and IBM PC and SGI versions will be available in the first quarter of next year.

"Ultimately, we'd like to see this circuitry put on a chipset and included in the computer where it would add very little to the computer's cost," Scott said.

Personal Accessor

The Personal Accessor is a more complex device that is still under development. Current prototypes are the size of small desktop computers, but researchers ultimately would like to fit them into devices about the size of a personal digital assistant like Apple's Newton. One objective is to allow personal accessors to communicate with each other.

"Imagine a deaf man who carries two accessors the size of Sharp Wizards," Macken said. "When he needs to communicate with someone, he hands her one of the accessors, where she reads the message, 'Hello. I'm deaf. To talk to me, talk into your accessor.'"

The device would include a speech recognition system that translates her words into text that can be read.

Linguistic Form Processor

Individuals with degenerative diseases like Lou Gehrig's disease have trouble communicating at a normal speed as the illness progresses. For these users, Archimedes researchers are working on what they call a Linguistic Form Processor that would allow even those with very little muscle control to communicate more effectively. The speech synthesizers that such individuals use require them to pick words and phrases out of a series of menus to make sentences.

The researchers are building on the current technology of word prediction and are working with CSLI linguists to develop new strategies for predicting phrases and sentences. They also are looking for new input devices to incorporate into the Personal Accessor.


One such device is an eye-tracker, which allows an individual to control a computer cursor by moving his eyes. This technology was developed by the Department of Defense for use with "heads-up displays" for jet pilots.

Elliott Levinthal, professor emeritus of mechanical engineering and an Archimedes Project adviser, was instrumental in getting an eye-tracker for the project. On the most recent visit of Secretary of Defense William Perry to Stanford, Levinthal mentioned to one of Perry's aides that Archimedes would like to test one of the devices. A few days later, two Air Force researchers showed up with the equipment and helped set it up.


Another innovative input device, called the CyberLink, was developed by an Air Force researcher, Andrew Junker, who has since set up a company to produce it. The CyberLink consists of sensors built into a headband that detect and respond to electrical signals originating from muscles and the electrical activity in the brain.

Archimedes researchers are evaluating the device as a potential input for the Personal Accessor. After about 15 hours of practice, Cynthia Adams, whose cerebral palsy has resulted in unclear, inconsistent speech and minimal motor control, can control a paddle in a training exercise reminiscent of the game Pong. She also can guide the computer cursor through an on-screen maze.

Perry, Macken and Scott are convinced that the development of an effective method to allow the disabled access to the information superhighway will benefit everyone.

"The classic case is the curb cut, which was put in to allow people with wheelchairs to cross the street," Macken said. "Now people pushing baby carriages, roller bladers - almost everyone uses the curb cuts."



© Stanford University. All Rights Reserved. Stanford, CA 94305. (650) 723-2300.