11/02/93

CONTACT: Stanford University News Service (650) 723-2558

Researchers propose new method for reducing product defects

STANFORD -- Researchers at Stanford University's School of Engineering have come up with a method for manufacturers to predict defect rates of proposed products while those products are still being designed.

Improved methods are crucial, the researchers said, because worldwide competition has forced automakers and other manufacturers of mass-produced products to achieve extremely low defect rates - rates that have proved difficult to estimate.

The new method - devised by Philip Barkan, professor in the design division of the mechanical engineering department, and graduate student Martin Hinckley - involves using different statistics than are normally used by manufacturers in quality control and design analysis.

Companies such as Ford have saved billions of dollars by using structured "design-for-manufacturability" methods to improve manufacturing efficiency. These techniques and software programs allow design teams to analyze parts to help predict the time they will take to assemble. The methods include guidelines for minimizing the number of parts in order to achieve greater efficiency.

"These structured methods have improved American manufacturers' competitiveness quite a bit," Barkan said, "but we have also seen a number of examples where people have gotten into real trouble by trying to blindly follow the rules. The results can be products with a lot of deficiencies. That's why I'm very excited about a method for predicting defect rates at the design stage."

In the following interview, Barkan explains how companies could go about predicting defect rates before committing themselves to producing a newly designed product.

Q: In a recent article in Manufacturing Review, you and Martin Hinckley discuss costly mistakes manufacturers have made by minimizing the number of parts in their products. What is the goal of your research?

A: The article you refer to is part of an ongoing study by Martin Hinckley, my graduate student, who is a member of the technical staff of Sandia National Laboratories. What's coming out of his dissertation is a tool for predicting product defects - or the likelihood of having a high-quality or problem product - while the product is still in the drawing stage. No one has tried to predict defect rates before, but we have been able to establish a link between quality issues and design that can lead to predicting defect rates for competing designs.

The other part of what he has done is show that there is a new kind of statistics for quality control. The field is virtually committed to what we call "Normal" statistics, with a capital N. What he shows is that for many real-factory situations, that's not appropriate. If you use traditional, or Normal, statistics to describe variations in manufacturing, you will predict a far lower rate of failure or error than really does occur. People haven't recognized this, so his statistics are much more powerful for linking a lot of problems that occur on a factory floor.

Q: Are you referring to his use of a Pareto curve, rather than the familiar bell-shaped curve of a Normal distribution, to graph the variability involved in tooling parts and assembling a product?

A: Yes. It's what Martin calls convolutions of the Pareto distribution. He can show that a lot of field data correspond much more closely to a Pareto distribution than to a Normal distribution.

Q: Could you explain a Pareto distribution?

A: The Pareto chart and distribution take their name from the Italian economist Vilfredo Pareto, who demonstrated that the zeta function in mathematics could be used to accurately characterize the income distribution of a given country. It is a model for phenomena where the likelihood of an event decreases as its magnitude increases - for example, the higher the salary, the fewer the people who earn it. In a Normal bell curve the peak of the distribution is near the average value, but with a Pareto distribution the peak tends to fall below the average, which means that a few events have a very important influence. Many people are familiar with this phenomenon as the 20-80 rule: 20 percent of the operations in a factory are responsible for 80 percent of the cost, or 20 percent of the parts in an automobile will be responsible for 80 percent of the defects. The underlying mathematics of this relationship is described by what is called the Zipf or zeta function.
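
As a rough illustration - not drawn from Barkan and Hinckley's work - the short Python sketch below shows how a Zipf-type (zeta) weighting produces the kind of 20-80 split Barkan describes. The number of part types and the exponent are hypothetical, chosen only for illustration.

def zipf_weights(n_items, s=1.0):
    """Relative weights of items ranked 1..n_items under a Zipf (zeta-type) law."""
    raw = [1.0 / (k ** s) for k in range(1, n_items + 1)]
    total = sum(raw)
    return [w / total for w in raw]

if __name__ == "__main__":
    # Hypothetical example: 100 part types, each contributing a share of total
    # defects that falls off as 1/rank (Zipf exponent s = 1).
    weights = zipf_weights(100, s=1.0)
    top_20_share = sum(weights[:20])
    print(f"Top 20% of part types account for {top_20_share:.0%} of defects")
    # With s = 1 this comes to roughly 70 percent; a slightly larger exponent
    # pushes the share toward the familiar 80 percent.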

Q: Why is it important to plot data this way in manufacturing?

A: In a Normal distribution curve, a data point that seems to be way off - outside the Normal range - is treated like an oddball, when, in fact, it fits within the Pareto distribution and should be regarded as a reasonable occurrence. It's important to recognize that, particularly as you get down to the very low defect rates that today's manufacturers are trying for. You have to start looking at things much more rigorously, and the Pareto distribution more accurately describes the variability you can expect to see in manufacturing.
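
To make the contrast concrete - this is an illustrative sketch, not the researchers' analysis - the short Python calculation below compares tail probabilities under a Normal model and a simple Pareto model. The shape parameter and deviation sizes are hypothetical, chosen only to show how quickly the Normal tail vanishes relative to the heavy Pareto tail.

import math

def normal_tail(k):
    """P(X > mean + k * sigma) for a Normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) for a Pareto distribution with scale x_min and shape alpha."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

if __name__ == "__main__":
    # Treat the deviation as a multiple of the typical spread.
    for k in (2, 4, 6):
        print(f"deviation {k}x: Normal tail {normal_tail(k):.1e}, "
              f"Pareto tail {pareto_tail(k):.1e}")
    # The Normal tail collapses (about 1e-9 at a 6-sigma deviation), while the
    # Pareto tail shrinks only polynomially, so large deviations stay far more
    # likely under the heavy-tailed model - which is why Normal statistics can
    # understate real defect rates.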

Q: How did your other students contribute to this work?

A: In our Design for Manufacturability course, we work with a large number of companies - 50 to 60 easily - and the students analyze a large number of their products. The companies give us an enormous amount of data about these products. We didn't have a large amount of defect data, but what we can show is that the data that characterize all the designs the students analyzed were very well represented by the Pareto distribution. This basic statistical model works better as a characterization than Normal statistics.

Q: For what other kind of data have you found this to be true?

A: A number of companies have cooperated with us by providing defect data from their actual factory floors. Martin has analyzed those data and, in some cases, has shown that his predictors work very well and that the factory experience is described much more accurately by a Pareto distribution.

Q: Your journal article indicates the original idea for your analysis came from one company's defect data. Is that correct?

A: Yes. Motorola noticed an interesting relationship between the defect rate in the assembly of several of their products and the so-called assembly efficiency of those products. When we saw this relationship, we said, "Well, gee, there must be an underlying reason for it." We've been able to develop our concept around this insight.

Q: Is it unusual for a company to publish information about its experience with defects?

A: It was very unusual. A lot of companies feel that defect information is very sensitive. They don't want to publicly talk about it. But sweeping it under the carpet isn't going to solve the problem either.

Q: How would you define the overall problem?

A: One of the big issues between the United States and Japan lately has been the supplier issue in the balance of trade. There is pressure for the Japanese manufacturers to use more American suppliers. A year ago, Toyota called all its American suppliers to Japan and told them that it was highly dissatisfied with their performance; that, on average, the number of defects it was finding in American-made components was 10 to 100 times greater than what it was experiencing with its Japanese suppliers.

We are talking about defects on the order of 10 per million, which is an incredibly small number. There's no way you can get there without addressing three major issues in manufacturing: complexity, variability and human error. We're trying to develop a comprehensive theory that will put all of these factors in perspective.

Q: Why are these variables more critical at lower defect levels?

A: If you want to start getting defect rates of 10 per million, then it matters whether your extrapolations predict 10 defects per million or 500 per million. With our statistics, we get a more realistic assessment. People often have been unable to achieve these very low defect rates. One reason is that they've been using statistics that project overly optimistic estimates of defect rates.

Q: In your article you give specific examples where design-for-manufacturability rules backfired. One was the instrument panel of a truck.

A: These panels are so complex that they have to be assembled manually by people under very cramped conditions. Current design-for-manufacturability rules suggest that you can improve efficiency by minimizing the number of parts that have to be assembled. We compared the assembly times of the instrument panels of two competing light trucks. We found that the Japanese panel had more screws and a total of roughly 20 percent more parts than the American panel. In spite of this, the imported product could be assembled more easily and in less time. Both required roughly the same number of assembly operations, but the import operations were generally simpler and less time consuming. The import manufacturer appeared less impeded by blind adherence to the rule of minimizing parts.

Q: So the panel with fewer parts was actually more complex to manufacture?

A: Yes. The key part involved was manufactured as a single piece in the U.S. version, but it was complex and difficult to handle, difficult to manufacture and, most important, difficult to install and attach all other parts to - prone to all kinds of errors.

Q: Are you proposing that manufacturers quit using the number of parts and substitute assembly time in their design analyses?

A: Minimizing the number of parts will reduce cost in the majority of cases; however, one of the insights emerging from Martin's dissertation is that a better measure is based on a combination of total assembly time and the operation count. The new measure encourages improved assembly, reduces the part count and decreases defects. By operations, I mean the different things you have to do when you assemble a product. But this has led us to another characteristic that needs to be addressed. We call it complexity, and we are trying to define it.

Q: Consumers often complain about the difficulty of getting simple parts replaced or fixed in their cars, for example. Does complexity analysis extend to servicing products?

A: One of my former students is actively involved in applying these ideas to serviceability with a number of companies. Some of those are horror stories. One car company, in its zeal to produce a minimum number of parts, actually made it so that in order to change the head lamp, you had to disassemble the front bumper. There are 10,000 to 20,000 parts in an automobile, and a lot of things can get missed. Our car companies are making important progress to overcome this. They have to look carefully at the frequency at which particular parts need to be serviced. A cost-benefit analysis can't be just minimizing parts but must include all these other dimensions of the problem.

Q: How does human error fit into your analysis?

A: If you look at defects, some of them come about due to complexity. Some come about due to variability of parts - the fact that in a large sample, dimensions will vary slightly but significantly within some range. The third point is human error.

Complexity and human error are closely intertwined, but there are many factors that influence human error. In any kind of activity, you can expect human error to occur, statistically speaking, at a certain rate. Studies show that you'll have roughly one human error in somewhere between 5,000 and 30,000 operations. Now, clearly, the more complex you make things, the more components, the more opportunities for those human errors to arise. Trying to optimize this is another thing we are looking at; that is, trying to resolve the sources of these errors, not just blindly focusing on one area, whether it be process control, variability reduction, complexity or human error. You have to really understand the relative magnitude of these things, and recognize that, as you improve in one area, the significance of the others can loom much larger.
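
As a rough, back-of-the-envelope illustration - not part of the Stanford analysis - the Python sketch below applies the quoted per-operation error rate of one error in 5,000 to 30,000 operations to a few hypothetical operation counts, assuming errors are independent and any single error produces a defective unit.

def expected_defects_per_million(operations, error_rate):
    """Expected defective units per million, assuming independent errors and
    that any single error yields a defective unit."""
    p_defect_free = (1.0 - error_rate) ** operations
    return (1.0 - p_defect_free) * 1_000_000

if __name__ == "__main__":
    for ops in (50, 200, 1000):  # hypothetical assembly operations per unit
        low = expected_defects_per_million(ops, 1 / 30_000)
        high = expected_defects_per_million(ops, 1 / 5_000)
        print(f"{ops:4d} operations: {low:8.0f} to {high:8.0f} defects per million")
    # Even 50 operations already imply defect rates in the thousands per
    # million, which shows why reducing complexity and error rates matters
    # when the target is on the order of 10 defects per million.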

Q: What kind of data does a company need to do these analyses?

A: First, they have to look at a number of designs in terms of operation count, operation time, various measures of complexity, and then acquire statistics on the defects that are occurring in the factory to make this correlation between the two. It has been difficult to find companies that have sufficient data to permit this.

Q: At the concept development stage, how would a company have factory defect data?

A: You start with baseline information on existing products made in the same factory. Characterizing a factory depends greatly on the products involved. There could be an order-of-magnitude difference between the defect rates that Motorola encounters and predicts from its designs and those of a disk drive or automobile manufacturer. Products are inherently very different in terms of their complexity.

Once you've got this base, you can predict things from the design end, and you can start to resolve some of these problems. Depending upon the relative magnitude of the errors that are coming from these different sources, you develop an overall comprehensive strategy to improve the quality of your product.

Q: Would this cost a lot of money?

A: It's a matter of time and effort. If you are running 90 miles an hour, sometimes you say you don't have time for it. In fact, there's a big payoff; that's what we've shown.

-kpo-


