BY BRUCE GOLDMAN
If you've ever worked in a retail store, you've probably noticed that a long, angry line can suddenly sprout from nowhere at what was a quiet cash register only moments before.
It's a fact of life: Lines happen. And not just in retail, or on your local commuter highway, or on the World Wide Web. You even find them in factories. Despite society's image of a factory as the ultimate embodiment of clock-like precision, the most painstakingly planned production line can get blindsided by profit-gobbling backups.
At an April 28 workshop sponsored by Stanford's Alliance for Innovation in Manufacturing, or AIM (formerly known as the Stanford Integrated Manufacturing Association), several experts attacked the problem of how and why lines happen, and what can be done to prevent them.
The workshop, titled "Variability, Throughput and Cycle Time in Manufacturing and Product Development," was organized by Stanford Professors J. Michael Harrison and James M. Patell of the Graduate School of Business and J. G. "Jim" Dai, a professor of engineering and mathematics at Georgia Institute of Technology. AIM is a campus-based joint venture, initiated by Stanford's Graduate School of Business and School of Engineering and a number of large corporate partners, whose mission is to encourage advances in manufacturing and to disseminate these advances throughout industry and academia.
High utilization plus variability equals long cycle times
The production backups that factory managers often witness typically "don't have anything to do with bad attitude, malfeasance, poor corporate strategy or bad organization structure" per se, Harrison told an audience of about 40 attendees. Rather, congestion and delay will occur wherever systems working near full capacity are subject to high variability.
A productive resource, be it a worker or a machine, may be working as hard as he or she or it can, but this doesn't automatically translate into a fast completion rate for the jobs being processed. That's because the time it takes the resource to complete its task is only part of the total time it takes for a job to get done. The remainder is accounted for by the time the widget spends in a queue, waiting its turn to get worked on. The more heavily utilized workstations that a job has to thread its way through, the more bottlenecks can crop up, Harrison said.
Variability in a manufacturing environment arises from unreliable equipment, unpredictable yields, glitches in human performance, fluctuations in order rates and sizes, and numerous other sources. When a production system whose components are working at close to full capacity is subjected to the stress of such variability, resulting waiting times can become very long compared with actual processing times. "This is a scientific principle," Harrison noted, predicted by a rigorous mathematical treatment known as queuing theory.
Thus, in any production system beset by variability in its many guises, a paradox emerges: Using resources at close to full capacity, far from ensuring an efficient operation, is almost a sure guarantee of time-eating delays. Moreover, these delays aren't meted out equally. While the average ratio of waiting to processing time in a production system overwhelmed by variability and high utilization rates may be 9 to 1, which is bad enough, that ratio may look more like 20 to 1 for some jobs.
"Keep in mind," Harrison reminded the audience, "the delivery time you quote to your customers shouldn't be your average performance, but rather a delivery time you can hope to achieve 95 percent of the time."
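Both figures have a textbook counterpart. In the simplest queuing model, the M/M/1 queue (a single server with random arrivals and random service times; the sketch below is illustrative, not part of the workshop), the ratio of average waiting time to average processing time at utilization rho is rho / (1 - rho), and the total turnaround time is exponentially distributed, so its 95th percentile is about three times its mean:

```python
import math

def wait_to_service_ratio(rho):
    """M/M/1: mean time spent waiting divided by mean service time."""
    return rho / (1.0 - rho)

def p95_sojourn(rho, mean_service=1.0):
    """M/M/1: 95th percentile of total time in the system.

    Sojourn time is exponential with mean mean_service / (1 - rho),
    and an exponential's 95th percentile is ln(20) times its mean.
    """
    return (mean_service / (1.0 - rho)) * math.log(20.0)

for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"{rho:.0%} busy: waiting/processing = {wait_to_service_ratio(rho):4.1f}, "
          f"95th-percentile turnaround = {p95_sojourn(rho):5.1f}x the work itself")
```

At 90 percent utilization the ratio is exactly the 9-to-1 Harrison cited, and the delivery time a manager could quote with 95 percent confidence works out to roughly 30 times the actual processing time.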
Prescriptions to reduce congestion
To alleviate congestion, Harrison recommended a five-pronged approach:
- Eliminate all unnecessary tasks and artificial constraints on the order in which pieces of a project are sequenced. Put simply, organize production systems so that some things can get done while other things are waiting to happen.
- Reduce the load on individual resources by combining tasks, adding capacity or lowering the order backlog by, for example, raising prices.
- Reduce variability in the operating environment whenever possible. Talk your customers into scheduling their orders in a staggered fashion. Get your error rates down to avoid having to do the same thing twice.
- Pool resources. Whenever possible, use standardized parts and machinery. Cross-train your employees so they can pinch-hit for each other in a crunch. Of course, there are limits, Harrison acknowledged; lawyers and engineers are not interchangeable, but engineers can hand off some tasks to colleagues.
- Stay flexible. Be ready to reroute tasks and resources as new information comes in.
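The payoff from pooling can be seen in a toy simulation (my illustration, not the workshop's; the rates and job counts are invented). It compares two 90-percent-busy workers, each with a private queue, against the same two workers drawing jobs from one shared queue:

```python
import random

def avg_wait(n_servers, arrival_rate, service_rate, n_jobs=200_000, seed=1):
    """Average wait before service: random (Poisson) arrivals, random
    (exponential) service, first-come-first-served, identical workers."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers          # when each worker next comes free
    t = total_wait = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)           # next job arrives
        s = min(range(n_servers), key=free_at.__getitem__)
        start = max(t, free_at[s])                   # queue if all busy
        total_wait += start - t
        free_at[s] = start + rng.expovariate(service_rate)
    return total_wait / n_jobs

lam, mu = 0.9, 1.0                       # each worker is 90 percent busy
separate = avg_wait(1, lam, mu)          # private queue per worker
pooled = avg_wait(2, 2 * lam, mu)        # one shared queue, two workers
print(f"separate: {separate:.1f}  pooled: {pooled:.1f}")
```

Queuing theory predicts an average wait of about 9 time units for the separate queues and about 4.3 for the pooled one: the shared queue keeps either worker from sitting idle while jobs pile up at the other.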
Other speakers at the workshop discussed the practical implications of these prescriptions and the difficulty of implementing them in the real world.
"Manufacturing is like Rodney Dangerfield," said Michael P. Kuntz, a senior engineer at Boeing Corp. "It really doesn't get much respect. But that's changing." Kuntz stressed the difficulty of reforming ongoing as opposed to first-time manufacturing operations. "When a new program comes along, they always take the best and brightest from older operations and move them into the new thing. Those people get jazzed up." But the existing operations suffer a brain-drain.
Alas, said Kuntz, "the time frames for the development of solutions you need are wider than your management team can fathom; those solutions may yield results 10 years down the road, when the management team will only be in place for two or three years."
Paul Pickerskill, a manager in the Lean Manufacturing Team at Visteon (a wholly owned parts-making subsidiary of Ford Motor Co.), agreed that slack capacity has value in improving congestion performance. But increasing capacity inevitably costs money, he said, and can be a tough sell to higher management. Practically speaking, it may be smarter to locate sources of variability and reduce them: Make sure that a plant is laid out properly, for example, or cut machine set-up times or schedule preventive maintenance. One of the hidden advantages of the just-in-time manufacturing methods developed in Japan and widely adopted in the United States is that they significantly reduce the variability introduced by long-term forecasting, he said.
Applying queuing theory to product development
Queuing theory has been applied satisfactorily to the factory floor, but it applies as well to information flow as to material flow: substitute "in box" for "queue" and "desk" for "workstation." It might take 10 minutes to read and act on a memo, but if it sits in someone's in box for a month, whoever is downstream may learn a painful lesson in applied queuing theory.
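To make the analogy concrete, here is a back-of-the-envelope simulation (the numbers are invented for illustration): each memo takes a fixed 10 minutes to handle, but memos arrive unpredictably at a reader who is about 95 percent booked:

```python
import random

def avg_minutes_unread(mean_gap, handle_min, n_memos=100_000, seed=7):
    """Average minutes a memo sits in the in box: memos arrive at random
    (exponential gaps), are read in order, each taking handle_min minutes."""
    rng = random.Random(seed)
    t = free_at = waited = 0.0
    for _ in range(n_memos):
        t += rng.expovariate(1.0 / mean_gap)   # next memo lands
        start = max(t, free_at)                # reader may still be busy
        waited += start - t
        free_at = start + handle_min
    return waited / n_memos

# roughly one memo every 10.5 minutes, 10 minutes of actual reading apiece
print(f"{avg_minutes_unread(10.5, 10.0):.0f} minutes unread, on average")
```

Even though each memo needs only 10 minutes of attention, the theoretical average wait at this load is about 100 minutes, with the occasional memo waiting far longer; slacken the schedule and the backlog melts away.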
Vien Nguyen of Morgan Stanley Dean Witter recounted a detailed study she and several colleagues carried out while she was a postdoctoral student at Stanford in the Graduate School of Business. The study was an attempt to apply these techniques to the area of product development in a relatively large company specializing in high-technology materials. Specifically, the Stanford research team performed a detailed analysis of all the tasks performed by the product development group, the order in which those tasks were carried out and the time it typically took to complete them.
"When we went in to find out how many hours each person spent at each of a number of defined tasks during the course of a year (how much time an engineer spent in, say, prototyping or administrative work or support), we got resistance," Nguyen said. "They told us, 'This is creative work! Each project is totally different from the others.' But in fact, a lot of tasks are similar from project to project, and so are the sequences in which those tasks are performed."
Another common reaction people had, said Nguyen, was: "You're gonna use these numbers as punitive measures, to get rid of me."
Antidote to management's overoptimism
Once persuaded that tasks can indeed be quantified and that nobody's going to get laid off, harried workers nonetheless don't particularly like logging their task time, Nguyen said, and they may not always do so with perfect accuracy. Thus, resulting estimates of task time may be off by 30 percent. But without this kind of analysis, she said, managerial estimates are more likely to be off by a factor of 3 to 10. ("And always in the same direction!" interjected Harrison, to the mirth of the audience, who appeared quite familiar with management's perennial overoptimism.)
High-tech manufacturers in particular are working hard to get inventory down, said Gerald R. Feigin of i2 Technologies, and for a very good reason. Feigin cited a study by a big computer maker indicating that of the $6.7 billion it was carrying in inventory in 1997, about 60 percent, or $4 billion in non-earning assets, was the result of uncertainties, largely due to variations in demand. He suggested that, just as telephone companies smooth demand by having different rates for different calling times, manufacturers could perhaps charge different prices depending on an order's urgency.
Feigin said he hoped participants would take home at least one lesson from this workshop: "Uncertainty is bad." On the other hand, it's not so easy to avoid. As Feigin put it, "Question: How do you get God to laugh? Answer: Tell Him your plans."