CONTACT: Stanford University News Service (650) 723-2558
STANFORD -- Imagine an electronic instrument that is small and portable, and can faithfully reproduce the crisp notes of a Steinway piano, the sweet sound of a Stradivarius violin or the brilliant tone of a trumpet.
This is John M. Chowning's vision of the future of musical instruments, a future that the Stanford music professor and his fellow researchers at the Center for Computer Research in Music and Acoustics (CCRMA) have played a pivotal role in bringing to the threshold of reality.
Chowning's invention more than 20 years ago of a digital sound reproduction scheme called FM synthesis led to the development of today's most successful electronic keyboards. Now, just as the FM synthesis patent has expired, a new and improved synthesis technique developed by his colleague Julius O. Smith III is providing the basis for a new generation of sophisticated electronic synthesizers. This method, called digital waveguide synthesis, not only reproduces instrumental sounds with greater fidelity but also allows synthesizer players to add performance nuances such as vibrato or variations in bow pressure and speed.
"The question of the future of instruments is an interesting one," Chowning said. "Some people think that totally new instruments will be developed and take over. But I don't think so, because so much of music is tied to repertoire and tradition, which is tied to specific instruments."
Instead, he predicts that the creation, evolution and acceptance of exotic new instruments will take place slowly. Meanwhile, advances in electronics will be used to overcome the undesirable attributes of established instruments, Chowning said.
Take the piano, for example. Despite its wonderful sound, it is heavy, hard to maintain and tune, and can't be acoustically isolated. One of CCRMA's 24 graduate students is currently working on a programmable keyboard, one that can reproduce the feel of the keyboards of different pianos. When this work is combined with high-quality sound reproduction, it will become possible to have a lightweight piano that plays like a grand piano, always stays in tune and can be heard either over loudspeakers or headphones.
The ability to reproduce electronically the sounds of actual instruments like pianos so that they cannot be distinguished from the real thing, and to do so in real time, has not yet been achieved. But CCRMA's research has helped convert this possibility from a pipe dream to a near reality.
"The mysteries surrounding all this are infinitely less today than they were when I began," Chowning observed.
The history of the center's involvement goes back to the 1960s, when Chowning was using the computers at the old Artificial Intelligence Laboratory, located in the hills behind the campus, to make computer music. In the process, he got an idea for a simple way to generate complex sounds with lots of harmonics. It has long been known that it is possible to recreate a complex wave form by adding together a large number of simple waves of different frequencies. The practical problem in implementing this approach was that it required hundreds of individual electronic components called oscillators to produce enough of the simple waves to create a realistic reproduction.
In 1967, Chowning realized that complex sounds could be generated using only two oscillators when the output of one oscillator is connected to the frequency input of a second oscillator. Essentially, the first oscillator generates a pure tone that modulates the frequency of the second oscillator in a way that produces a complex tone, like a string vibrating.
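The two-oscillator arrangement can be sketched in a few lines of Python (a hypothetical illustration of the principle, not Yamaha's chip implementation): one sine oscillator modulates the phase, and hence the instantaneous frequency, of a second, producing a spectrum rich in harmonics from just two components.

```python
import numpy as np

def fm_tone(fc=440.0, fm=440.0, index=2.0, dur=1.0, sr=44100):
    """Basic FM synthesis: a modulating oscillator at frequency fm
    varies the phase of a carrier oscillator at frequency fc.
    The modulation index controls how bright and complex the
    resulting tone sounds; index=0 gives a pure sine wave."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

tone = fm_tone()  # one second of a harmonically rich 440 Hz tone
```

Setting the carrier-to-modulator frequency ratio and the modulation index differently yields the brasslike, reedlike and bell-like timbres the technique is known for.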
After simulating this circuitry on the computer, "my ear first told me that it was potentially interesting," Chowning said. Then study of basic electronic theory confirmed that it was possible to use this approach to generate brasslike, reedlike, stringlike and drumlike sounds.
It wasn't until four years later, however, that Chowning realized that this approach, called FM synthesis, might be useful to the music industry. So he took the idea to the Office of Technology Licensing (OTL). As a result, the university received a patent in 1977.
That was a period before stereo had gone digital or personal computers had appeared on the scene. "We thought the manufacturers of electronic organs might be interested," Chowning recalled.
So OTL officers approached Hammond and Lowry, the two major electronic organ manufacturers of the day. Lowry wasn't interested. Hammond was very interested, but its engineers did not have enough background in digital electronics to figure out how to implement the technique.
Finally, OTL contacted Yamaha, which was just beginning to market pianos and electronic organs in the United States. Yamaha sent a young engineer named Kazukiyo Ishimura. "I played the sounds and within 10 to 15 minutes he understood what I had done," Chowning recalled.
"That was well before the desktop calculator or the personal computer. The only digital instruments were large computers. But Yamaha already believed that digital technology was an essential part of the future of electrical instruments," Ishimura said on a recent visit. So the company determined that FM synthesis dovetailed with their planning and applied for an exclusive license, which Stanford granted.
When the patent ran out last month, this agreement had generated more than $20 million for the university, making it the second most lucrative licensing agreement in Stanford's history. In recent years, the exclusivity of the agreement has come under some criticism. But Chowning defends it. "I think an exclusive was warranted because Yamaha had to invest so much money to develop the technology that they would never have been able to recoup their investment if the license had been non-exclusive."
FM synthesis was very hard to realize in real-time instruments, Ishimura acknowledged. It took 10 years, much longer than he had envisioned, before Yamaha released their first product based on the patent. They had to wait for the next generation in digital chip technology, called large-scale integration, before chips had enough power to handle the process. It wasn't until 1983 that they released their first really popular product, the DX-7 synthesizer, which quickly became a favorite of a number of rock bands worldwide.
Since then the technology has spread widely. It provides the sound for many of the electronic organs and small keyboards produced today. With the growth of "multimedia" in the computer world, Yamaha FM synthesis chips have found their way into a large number of the sound boards that give personal computers the ability to reproduce voices and music. They also are found in an increasing number of arcade and home video game machines.
"I think it is very nice that the idea was born in Stanford, grew up in Japan and then came back to the United States," Ishimura said.
Essentially, FM synthesis approximates musical sounds by attempting to reproduce the manner in which their strength varies with frequency. A limitation of this technique is that it can only match the sound of an instrument as it is measured at one point in space. Now a new technique, called digital waveguide synthesis, has been developed at CCRMA by Associate Professor (Research) Julius O. Smith III. The new technique substantially improves sound quality by modeling the sound-generating processes that take place in instruments themselves.
"Waveguide synthesis is the key to duplicating the sounds of actual instruments," Chowning said.
Smith, an electrical engineer who began playing in rock bands when he was 14, settled as a graduate student on a project to improve the simulation of a violin.
The brute-force way to approach this problem, he explained, is to break down each string into hundreds of segments, or samples, then solve the equations of motion for each of these points on the string, and do so about 44,000 times per second. That produces a realistic simulation of how the string vibrates and so produces sound. But it requires more calculations per second than the special digital signal processing (DSP) chips designed for this purpose can handle.
Smith got the idea for a simpler approach in 1985 while listening to a colleague on a shuttle bus going to a computer music conference. The colleague was discussing his work on reverberators, systems in which signals bounce around in a cavity without losing much energy. That train of thought started Smith thinking about an approach that initially ignores the frictional losses in a string and then adds them back at a later point.
By starting with an "ideal" string that, when plucked, vibrates forever, Smith was able to reduce the number of computations that it takes to calculate the position of the string by a factor of 100 to 1,000, making it possible to run the simulation using current DSP chips.
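The "ideal string plus losses" idea can be illustrated with the closely related Karplus-Strong plucked-string algorithm (a simplified sketch, not Smith's full waveguide model): a circular delay line carries the wave around forever, as on a lossless string, and a small averaging filter re-introduces the frictional losses at a single point.

```python
import numpy as np

def plucked_string(freq=220.0, dur=1.0, sr=44100, decay=0.996):
    """Karplus-Strong sketch of waveguide synthesis.  A delay line
    one period long models an ideal (lossless) string; the averaging
    and decay factor below add the string's energy losses back in
    at one point instead of simulating them at every segment."""
    n = int(sr / freq)                    # delay-line length = one period
    line = np.random.uniform(-1, 1, n)    # initial "pluck": a noise burst
    out = np.empty(int(dur * sr))
    for i in range(len(out)):
        out[i] = line[i % n]
        # lossy reflection: average two neighbors and scale down slightly
        line[i % n] = decay * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

sound = plucked_string()  # a decaying plucked-string tone
```

The loop touches only one delay-line cell per output sample, which is why this family of methods fits comfortably on the DSP chips that the brute-force segment-by-segment simulation overwhelms.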
The digital waveguide approach is not limited to stringed instruments. Because the sound propagation in tubular instruments like flutes, clarinets and trumpets is very similar to what happens with a string, by adding a simple simulation of what takes place at the mouthpiece or reed, Smith was able to simulate the sounds of these instruments as well.
In addition, Perry Cook, a CCRMA research associate working with Smith, has modified this technique so that it convincingly reproduces the singing voice. Essentially, the method works for most long-and-thin sound generators, including the human larynx as well as string, wind and brass instruments.
Not only does the waveguide approach more closely recreate the sounds of these instruments but, because its underlying algorithms are based on a physical model, it is a straightforward process to add performance nuances such as vibrato and emotional colorations created by varying breath pressure in a woodwind or changing bow speed on a string, Smith said.
Like FM synthesis, the new waveguide synthesis has been patented by Stanford. Unlike the earlier case, however, it is being offered to companies on a non-exclusive basis. So far a half-dozen companies, including Yamaha, have purchased licenses.
Yamaha, in fact, has just begun shipping a radical new synthesizer based on digital waveguide technology. Called the VL1, it costs about $5,000, compared to $3,000 to $4,000 for top FM synthesis-based instruments, and $2,000 for most synthesizers.
"The VL1 fulfills my original vision surprisingly well," Smith said. "They've even added some voices I was not expecting, such as bagpipes and blues harmonica." Smith thinks that the new synthesizer's best voices are its woodwinds and electric strings.
According to a recent review in Billboard magazine, the new synthesizer has set the pro audio and musical instrument industries abuzz. The reviewer characterizes waveguide synthesis as "the most exciting development in synthesizer technology in the past decade."
The VL1's demo tracks illustrate its ability to reproduce performance nuances with effects such as overblowing on a bamboo flute and slapping strings on the electric bass. They also contain examples of a more radical capability of the waveguide approach. It allows the player to create physically impossible hybrid instruments. For instance, it can create the sound of a flute "played" by a violin bow.
Although it is a major advance, the VL1 has not yet reached Smith's original goal of achieving a first-rate violin synthesis. That is partly because it does not adequately simulate the resonating effects that the body of the violin has on the sound of its strings. There are reports that Yamaha soon will be coming out with a specialized string synthesizer, the VP1, that will do just this, but for a cost estimated at $30,000 to $40,000.