Stanford University News Service



CONTACT: Stanford University News Service (650) 723-2558


STANFORD -- At a California hospital that teaches new skills to people who have lost their eyesight, an unexpected problem cropped up four years ago: All sorts of jobs became off limits.

Computer systems built around graphics and icons, first introduced to the market by Apple and heralded by many consumers as "user-friendly," were decidedly unfriendly to those whose vision capacities fell outside the standard range, said Jan McKinley of the Western Blind Rehabilitation Center at the Veterans Administration Medical Center in Palo Alto.

Computer translators, which the center has used since the early '80s to convert information on conventional computer screens to voice or Braille, cannot translate the graphics-based user interfaces used by Macintosh and Windows programs, McKinley said.

"Until the graphic user interface, computer screens had 24 lines and 80 columns of text, with each cell holding one character. Those characters could be translated for the blind into voice, using a synthesizer, or Braille," said Stanford researcher John Perry. "Now, screens have hundreds of thousands of pixels, so just to organize the information to figure out what it says becomes a major problem."
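The contrast Perry describes can be made concrete. The sketch below is a hypothetical illustration (the function and buffer names are invented, not part of any actual screen-reader product): on a text-mode display, the screen is literally a 24-by-80 grid of characters, so a translator can walk it row by row and hand each line to a voice synthesizer or Braille display.

```python
# Hypothetical sketch of why a 24x80 character-cell screen is easy to
# translate: the screen is just a grid of characters, so a reader walks
# it row by row and hands each line to a speech or Braille back end.

ROWS, COLS = 24, 80

def read_text_screen(screen, speak):
    """screen: list of 24 strings, each up to 80 characters wide.
    speak: callback standing in for a synthesizer or Braille display."""
    for row in range(ROWS):
        line = screen[row][:COLS].rstrip()
        if line:              # skip blank rows
            speak(line)       # one cell = one character = one translatable unit

# Example: a fake screen buffer and a stand-in "synthesizer" that
# simply collects the lines it is asked to speak.
screen = ["" for _ in range(ROWS)]
screen[0] = "LOGIN:".ljust(COLS)
screen[2] = "Welcome to the VAX".ljust(COLS)

spoken = []
read_text_screen(screen, spoken.append)
```

With a pixel-based display there is no such grid to walk, which is exactly the problem Perry describes: the translator must first reconstruct what the hundreds of thousands of pixels mean before anything can be spoken.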

Because these interfaces are spreading like wildfire to computers in the workplace, they jeopardize the job market for the visually impaired. Similar threats face deaf and hearing-impaired people as voice becomes a more integral part of communications systems and other computer-based products, say researchers at Stanford's Center for the Study of Language and Information.

Both problems, they say, are symptomatic of a larger one: how to maintain the broadest possible access to information in an age where information is presented in many forms and is becoming key to individuals' economic and social survival.

The Stanford center, known for its theoretical research on information and artificial intelligence, is tackling computer accessibility at several levels in its new Archimedes Project, headed by philosophy professor Perry and Elizabeth Macken, the center's associate director. After initial explorations with a grant from IBM Corp., a team of researchers in six fields is exploring how to make a clean separation of the information from the forms in which it is presented.

Project Archimedes' goal is to provide leverage for individuals with disabilities through research in three areas:

  • Translating the three-dimensional graphics depicted on computer screens to other forms, such as tactile information or sound, for the blind.
  • Developing hand-held devices to speed communication between deaf and hearing individuals who share a common language - for example, English - but lack shared communication methods such as signing, lip-reading or cued speech.
  • Developing personal "accessors" that allow individuals with differing perceptual and motor abilities to communicate with host computers through an infrared communications link.

Undergraduates in Stanford's Symbolic Systems major - most likely future information systems designers - also will help by fanning out to internships in which they hope to capture ideas from disabled computer users and agencies that work with them.

"Everyone requires help in gaining and effectively using information, not only those individuals who have disabilities," said Perry, who will take over direction of the research center in September. "Handicaps often arise from decisions to design tools exclusively for individuals with the standard mix of perceptual and motor abilities."

Electronics equipment designed from the beginning with disabled people in mind may turn out to be mass-market winners, Perry said. "Curb cuts were added to street corners for wheelchairs, but now cyclists, roller bladers and almost everybody uses them," he said.

Some people with good vision already are jealous of blind users who can tap into mainframes and electronic mail with what they perceive as easier voice commands, said Neil Scott, a center research associate.

What consumers really need are electronic products that give them the option to choose the form in which information is presented to them, said Norman Coombs, a history professor at Rochester Institute of Technology who was at Stanford recently for the first meeting of the Archimedes Project advisory board. Coombs, who is blind, teaches history to deaf students hundreds of miles away through a computer equipped with a voice synthesizer and electronic mail. His wife, who is not disabled, is nevertheless "handicapped," he said, when trying to program their digital clock, bread-baking machine and VCR, all electronic devices "designed for some standard, hypothetical person."

At the heart of the Archimedes Project is translation of information from one form to another. It is work that builds on the center's decade of research on computer translation of natural language.

Translation between modalities is both easier and harder than machine translation of written language, Perry said. Machine translation has yet to be perfected because of mismatches from one language to another and ambiguities within languages.

Modality translators, on the other hand, would merely be a tool for humans to use, Perry said, "so we can count on the common sense of people to know when the word 'pen' means a writing device and when it means a baby's playpen. With machine translation, ambiguities of that sort present a major stumbling block."

The bad news, however, is that "we really don't have theories of meaning for graphics or gestures" that are as good as current theories of language meaning, Perry said.

Stanford Psychology Professor Barbara Tversky conducts research on the meaning of graphics. Professors Herb Clark of psychology and Tom Wasow of linguistics and philosophy research the meaning of widely used gestures, such as head nodding, and "disfluencies," such as use of the word "yeah" in conversational speech.

Research associates Elizabeth Macken and Cathy Haas are looking at American Sign Language and highlighting its differences from English. They envision a computer device that allows people to combine representational forms - graphics, gesturing, print, speaking, sign language. A deaf child's speech or compressed keyboard input could be reproduced, for example, in a hand-held device as voice for a visually impaired grandparent or as expanded text used in combination with gestures for someone with normal vision capabilities.

"The idea is not just to get information but to do it in a rapid and efficient manner, as people do in face-to-face conversation," Macken said.

Another project by Research Associate Scott is to develop a way that people of different perceptual and motor abilities can access information in many computers or computer-based products.

A computer engineer who once spent most of his time adapting computers for disabled students at California State University-Northridge, Scott said that such adaptation of individual machines wastes money and time, and keeps disabled people from using a wide range of products. He is developing a wireless high-speed bi-directional link for computers.

"The link would retrieve information from the video signal that drives the screen display of any computer through an infrared port," he said. With the link, and a separate portable computer known as an accessor, a user could use a pointer, a Morse code tapper, a scanner or speech-recognition device - instead of a mouse and a keyboard - to access any computer. The information output of an accessor might be sound or text on a magnifying screen.
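The essence of the accessor idea is that very different input devices are normalized into the one thing every host already understands: ordinary keystrokes. The sketch below is a hypothetical illustration of that principle using a Morse code tapper as the front end (the function names are invented; this is not Scott's actual design).

```python
# Hypothetical sketch of the "accessor" principle: an unconventional input
# device (here, a Morse code tapper) is translated into ordinary keystrokes
# before being sent to the host, so the host needs no special adaptation.

MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O"}

def taps_to_keystrokes(taps):
    """taps: Morse letters separated by spaces -> a keystroke stream.
    Unknown patterns become '?' rather than crashing the link."""
    return "".join(MORSE.get(letter, "?") for letter in taps.split())
```

A speech recognizer, head pointer or scanner would plug in the same way, each emitting the same kind of keystroke stream over the infrared link; only the front-end translation differs per device.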

"The goal is to have any disabled person work at any workstation or with any electronic device," Scott said, including industrial controls, automatic teller machines, telephones and home electronic devices like microwave ovens and digital alarm clocks.

Whether external add-ons or internal components eventually provide varied forms of information, Perry said that translation of graphics information promises to be one of the biggest challenges.

"Once a graphic symbol's meaning to a computer user is understood, it still has to be translated for the blind," he said.

"One idea we have is to use three-dimensional sound from virtual reality research," Perry said, to exploit a "cocktail party effect."

People at cocktail parties shut out the irrelevant sound of conversations at a distance, listen to the person standing next to them, yet still hear their name when someone mentions it across the room.

"Perhaps we can have a computer voice off in one corner of the room," Perry said, "that would remind you: 'I am your e-mail. When you want to use me, just say the word.'"
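The spatial idea behind Perry's example can be sketched in miniature. The code below is a hypothetical illustration, not part of the Archimedes research: it places each "voice" at its own position using constant-power stereo panning, the simplest way to separate sound sources. True three-dimensional sound of the kind used in virtual reality research would instead filter each source through head-related transfer functions.

```python
# Hypothetical sketch: give each computer "voice" (e-mail, calendar, ...)
# its own spatial position, so a listener can separate them the way
# partygoers separate conversations. Constant-power stereo panning only.
import math

def pan_gains(azimuth_deg):
    """azimuth_deg: -90 (hard left) to +90 (hard right) -> (left, right) gains.
    Constant-power law: left^2 + right^2 == 1 at every position."""
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # map to 0..pi/2
    return math.cos(theta), math.sin(theta)

# Each voice gets a fixed corner of the auditory "room."
voices = {"e-mail": -60, "calendar": 0, "printer": 60}
placement = {name: pan_gains(az) for name, az in voices.items()}
```

The constant-power law keeps every voice equally loud regardless of where it sits, so attention, not volume, decides which one the listener follows.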




© Stanford University. All Rights Reserved. Stanford, CA 94305. (650) 723-2300.