Process Improvement Using Control Charts

By Bowerman, B.L., O'Connell, R.T., Murphree, E.S.

Edited by Paul Ducham

STATISTICAL PROCESS CONTROL

What is quality? It is not easy to define quality, and a number of different definitions have been proposed. One definition that makes sense is fitness for use. Here the user of a product or service can be an individual, a manufacturer, a retailer, or the like. For instance, an individual who purchases a high-definition television set or a DVD recorder expects the unit to be defect free and to provide years of reliable, high-performance service. If the TV or DVD recorder performs as desired, it is fit for use. Another definition of quality that makes sense says that quality is the extent to which customers feel that a product or service exceeds their needs and expectations. For instance, if the DVD recorder’s purchaser believes the unit exceeds all the needs and expectations he or she had for the recorder when it was purchased, then the customer is satisfied with the unit’s quality.

  Three types of quality can be considered: quality of design, quality of conformance, and quality of performance. Quality of design has to do with intentional differences between goods and services with the same basic purpose. For instance, all DVD recorders are built to perform the same function—record and play back DVDs. However, DVD recorders differ with respect to various design characteristics—picture sharpness, sound quality, digital effects, ease of use, and so forth. A given level of design quality may satisfy some consumers and may not satisfy others. The product design will specify a set of tolerances (specifications) that must be met. For example, the design of a DVD recorder sets forth many specifications regarding electronic and physical characteristics that must be met if the unit is to operate acceptably. Quality of conformance is the ability of a process to meet the specifications set forth by the design. Quality of performance is how well the product or service actually performs in the marketplace. Companies must find out how well customers’ needs are met and how reliable products are by conducting after-sales research.

  The marketing research arm of a company must determine what the customer seeks in each of these dimensions. Consumer research is used to develop a product or service concept—a combination of design characteristics that exceeds the expectations of a large number of consumers. This concept is translated into a design. The design includes specifications that, if met, will satisfy consumer wants and needs. A production process is then developed to meet the design specifications. In order to do this, variables that can control the process must be identified, and the relationships between input variables and final quality characteristics must be understood. The manufacturer expresses quality characteristics as measurable variables that can be tracked and used to monitor and improve the performance of the process. Service call analysis often leads to product or service redesigns in order to improve the product or service concept. It is extremely important that the initial design be a good one so that excessive redesigns and customer dissatisfaction can be avoided.

History of the quality movement  In the 1700s and 1800s, master craftsmen and their apprentices were responsible for designing and building products. Quantities of goods produced were small, and product quality was controlled by expert workmanship. Master craftsmen had a great deal of pride in their work, and quality was not a problem. However, the introduction of mass production in the late 1800s and early 1900s changed things. Production processes became very complex, with many workers (rather than one skilled craftsman) responsible for the final product. Inevitably, product quality characteristics displayed variation. In particular, Henry Ford developed the moving assembly line at Ford Motor Company. As assembly line manufacturing spread, quality became a problem. Production managers were rewarded for meeting production quotas, and quality suffered. To make mass-produced products more consistent, inspectors were hired to check product quality. However, 100 percent inspection proved to be costly, and people started to look for alternatives.

Much of the early work in quality control was done at Bell Telephone (now known as American Telephone and Telegraph or AT&T). The Bell System and Western Electric, the manufacturing arm of Bell Telephone, formed the Inspection Engineering Department to deal with quality problems. In 1924 Walter Shewhart of Bell Telephone Laboratories introduced the concept of statistical quality control—controlling quality of mass-produced goods. Shewhart believed that variation always exists in manufactured products, and that the variation can be studied, monitored, and controlled using statistics. In particular, Shewhart developed a statistical tool called the Control Chart. Such a chart is a graph that can tell a company when a process needs to be adjusted and when the process should be left alone. In the late 1920s Harold F. Dodge and Harold G. Romig, also of Bell Telephone Laboratories, introduced statistical acceptance sampling, a statistical sampling technique that enables a company to accept or reject a quantity of goods (called a lot) without inspecting the entire lot. By the mid-1930s, Western Electric was heavily using statistical quality control (SQC) to improve quality, increase productivity, and reduce inspection costs. However, these statistical methods were not widely adopted outside Bell Telephone.

During World War II statistical quality control became widespread. Faced with the task of producing large quantities of high-quality war matériel, industry turned to statistical methods, failure analysis, vendor certification, and early product design. The U.S. War Department required that suppliers of war matériel employ acceptance sampling, and its use became commonplace. Statistical control charts were also used, although not as widely as acceptance sampling.

In 1946 the American Society for Quality Control (ASQC) was established to encourage the use of quality improvement methods. The organization sponsors training programs, seminars, and publications dealing with quality issues. In spite of the efforts of the ASQC, however, interest in quality in American industry diminished after the war. American business had little competition in the world market—Europe and Japan were rebuilding their shattered economies. Tremendous emphasis was placed on increased production because firms were often unable to meet the demand for their products. Profits were high, and the concern for quality waned. As a result, postwar American managers did not understand the importance of quality and process improvement, and they were not informed about quality improvement techniques.

However, events in Japan took a different turn. After the war, Japanese industrial capacity was crippled. Productivity was very low, and products were of notoriously bad quality. In those days, products stamped “Made in Japan” were generally considered to be “cheap junk.” The man credited with turning this situation around is W. Edwards Deming. Deming, born in 1900, earned a Ph.D. in mathematical physics from Yale University in 1927. He then went to work in a Department of Agriculture–affiliated laboratory. Deming, who had learned statistics while studying physics, applied statistics to experiments conducted at the laboratory. Through this work, Deming was introduced to Walter Shewhart, who explained his theories about using statistical control charts to improve quality and productivity. During World War II, Deming was largely responsible for teaching 35,000 American engineers and technical people how to use statistics to improve the quality of war matériel. After the war, the Allied command sent a group of these engineers to Japan. Their mission was to improve the Japanese communication system. In doing so, the engineers employed the statistical methods they had learned, and Deming’s work was brought to the attention of the Union of Japanese Scientists and Engineers (JUSE). Deming, who had started his own consulting firm in 1946, was asked by the JUSE to help increase Japanese productivity. In July 1950 Deming traveled to Japan and gave a series of lectures titled “Elementary Principles of the Statistical Control of Quality” to a group of 230 Japanese managers. Deming taught the Japanese how to use statistics to determine how well a system can perform, and taught them how to design process improvements to make the system operate better and more efficiently. He also taught the Japanese that the more quality a producer builds into a product, the less it costs. Realizing the serious nature of their economic crisis, the Japanese adopted Deming’s ideas as a philosophy of doing business. Through Deming, the Japanese found that by listening to the wants and needs of consumers and by using statistical methods for process improvement in production, they could export high-quality products to the world market.

  Although American business was making only feeble attempts to improve product quality in the 1950s and 1960s, it was able to maintain a dominant competitive position. Many U.S. companies focused more on marketing and financial strategies than on product and production. But the Japanese and other foreign competitors were making inroads. By the 1970s, the quality of many Japanese and European products (for instance, automobiles, television sets, and electronic equipment) became far superior to their American-made counterparts. Also, rising prices made consumers more quality conscious—people expected high quality if they were going to pay high prices. As a result, the market shares of U.S. firms rapidly decreased. Many U.S. firms were severely injured or went out of business.

  Meanwhile, Deming continued teaching and preaching quality improvement. While Deming was famous in Japan, he was relatively unknown in the United States until 1980. In June 1980 Deming was featured in an NBC television documentary titled “If Japan Can, Why Can’t We?” This program, written and narrated by then–NBC correspondent Lloyd Dobyns, compared Japanese and American industrial productivity and credited Deming for Japan’s success. Within days, demand for Deming’s consulting services skyrocketed. Deming consulted with many major U.S. firms. Among these firms were The Ford Motor Company, General Motors Corporation, and The Procter & Gamble Company. Ford, for instance, began consulting with Deming in 1981. Donald Petersen, who was Ford’s chairman and chief executive officer at the time, became a Deming disciple. By following the Deming philosophy, Ford, which was losing 2 billion dollars yearly in 1980, attempted to create a quality culture. Quality of Ford products was greatly improved, and the company again became profitable. The 1980s saw many U.S. companies adopt a philosophy of continuous improvement of quality and productivity in all areas of their businesses—manufacturing, accounting, sales, finance, personnel, marketing, customer service, maintenance, and so forth. This overall approach of applying quality principles to all company activities is called Total Quality Management (TQM) or total quality control (TQC). It is becoming an important management strategy in American business. Dr. Deming taught seminars on quality improvement for managers and statisticians until his death on December 20, 1993. Deming’s work resulted in widespread changes in both the structure of the world economy and the ways in which American businesses are managed.

The fundamental ideas behind Deming’s approach to quality and productivity improvement are contained in his “14 points.” These are a set of managerial principles that, if followed, Deming believed would enable a company to improve quality and productivity, reduce costs, and compete effectively in the world market. We briefly summarize the 14 points in Table 17.1 on the next page. For more complete discussions of these points, see Bowerman and O’Connell (1996), Deming (1986), Walton (1986), Scherkenbach (1987), or Gitlow, Gitlow, Oppenheim, and Oppenheim (1989). Deming stressed that implementation of the 14 points requires both changes in management philosophy and the use of statistical methods. In addition, Deming believed that it is necessary to follow all of the points, not just some of them.

In 1988 the first Malcolm Baldrige National Quality Awards were presented. These awards, presented by the U.S. Commerce Department, are named for the late Malcolm Baldrige, who was Commerce Secretary during the Reagan administration. The awards were established to promote quality awareness, to recognize quality achievements by U.S. companies, and to publicize successful quality strategies. The Malcolm Baldrige National Quality Award Consortium, formed by the ASQC (now known as the ASQ) and the American Productivity and Quality Center, administers the award. The Baldrige award has become one of the most prestigious honors in American business. Annual awards are given in three categories—manufacturing, service, and small business. Winners include companies such as Motorola Inc., Xerox Corporation Business Products and Systems, the Commercial Nuclear Fuel Division of Westinghouse Electric Corporation, Milliken and Company, the Cadillac Division of General Motors Corporation, Ritz Carlton Hotels, and AT&T Consumer Communications.

 Finally, the 1990s saw the adoption of an international quality standards system called ISO 9000. More than 90 countries around the globe have adopted the ISO 9000 series of standards for their companies, as have many multinational corporations (including AT&T, 3M, IBM, Motorola, and DuPont). As a brief introduction to ISO 9000, we quote “Is ISO 9000 for You?” published by CEEM Information Systems:

     What Is ISO 9000? ISO 9000 is a series of international standards for quality assurance management systems. It establishes the organizational structure and processes for assuring that the production of goods or services meets a consistent and agreed-upon level of quality for a company’s customers.

     The ISO 9000 series is unique in that it applies to a very wide range of organizations and industries encompassing both the manufacturing and service sectors.

Why Is ISO 9000 Important?

ISO 9000 is important for two reasons. First . . . the discipline imposed by the standard for processes influencing your quality management systems can enhance your company’s quality consistency. Whether or not you decide to register your company to ISO 9000 standards, your implementing such discipline can achieve greater efficiency in your quality control systems.

  Second . . . more and more companies, both here at home and internationally, are requiring their suppliers to be ISO 9000 registered. To achieve your full market potential in such industries, registration is becoming essential. Those companies who become registered have a distinct competitive advantage, and sales growth in today’s demanding market climate requires every advantage you can muster.

Clearly, quality has finally become a crucially important issue in American business. The quality revolution now affects every area in business. But the Japanese continue to mount new challenges. For years, the Japanese have used designed statistical experiments to develop new processes, find and remedy process problems, improve product performance, and improve process efficiency. Much of this work is based on the insights of Genichi Taguchi, a Japanese engineer. His methods of experimental design, the so-called Taguchi methods, have been heavily used in Japan since the 1960s. Although Taguchi’s methodology is controversial in statistical circles, the use of experimental design gives the Japanese a considerable advantage over U.S. competitors because it enables them to design a high level of quality into a product before production begins. Some U.S. manufacturers have begun to use experimental design techniques to design quality into their products. It will be necessary for many more U.S. companies to do so in order to remain competitive in the future—a challenge for the 21st century.

[Table 17.1: A summary of Deming's 14 points]

PROCESS VARIATION

Statistical process control  Statistical process control (SPC) is a systematic method for analyzing process data (quality characteristics) in which we monitor and study the process variation. The goal is to stabilize the process and to reduce the amount of process variation. The ultimate goal is continuous process improvement. We often use SPC to monitor and improve manufacturing processes. However, SPC is also commonly used to improve service quality. For instance, we might use SPC to reduce the time it takes to process a loan application, or to improve the accuracy of an order entry system.

  Before the widespread use of SPC, quality control was based on an inspection approach. Here the product is first made, and then the final product is inspected to eliminate defective items. This is called action on the output of the process. The emphasis here is on detecting defective product that has already been produced. This is costly and wasteful because, if defective product is produced, the bad items must be (1) scrapped, (2) reworked or reprocessed (that is, fixed), or (3) downgraded (sold off at a lower price). In fact, the cost of bad quality (scrap, rework, and so on) can be tremendously high. It is not unusual for this cost to be as high as 10 to 30 percent or more of a company’s dollar sales.

  In contrast to the inspection approach, SPC emphasizes integrating quality improvement into the process. Here the goal is preventing bad quality by taking appropriate action on the process. In order to accomplish this goal, we must decide when actions on the process are needed. The focus of much of this chapter is to show how such decisions can be made.

Causes of process variation  In order to understand SPC methodology, we must realize that the variations we observe in quality characteristics are caused by different sources. These sources include factors such as equipment (machines or the like), materials, people, methods and procedures, the environment, and so forth. Here we must distinguish between usual process variation and unusual process variation. Usual process variation results from what we call common causes of process variation.

Common causes are sources of variation that have the potential to influence all process observations. That is, these sources of variation are inherent to the current process design.

  Common cause variation can be substantial. For instance, obsolete or poorly maintained equipment, a poorly designed process, and inadequate instructions for workers are examples of common causes that might significantly influence all process output. As an example, suppose that we are filling 16-ounce jars with grape jelly. A 25-year-old, obsolete filler machine might be a common cause of process variation that influences all the jar fills. While (in theory) it might be possible to replace the filler machine with a new model, we might have chosen not to do so, and the obsolete filler causes all the jar fills to exhibit substantial variation.

  Common causes also include small influences that would cause slight variation even if all conditions are held as constant as humanly possible. For example, in the jar fill situation, small variations in the speed at which jars move under the filler valves, slight floor vibrations, and small differences between filler valve settings would always influence the jar fills even when conditions are held as constant as possible. Sometimes these small variations are described as being due to “chance.”

Together, the important and unimportant common causes of variation determine the usual process variability. That is, these causes determine the amount of variation that exists when the process is operating routinely. We can reduce the amount of common cause variation by removing some of the important common causes. Reducing common cause variation is usually a management responsibility. For instance, replacing obsolete equipment, redesigning a plant or process, or improving plant maintenance would require management action.

 In addition to common cause variation, processes are affected by a different kind of variation called assignable cause variation (sometimes also called special cause or specific cause variation).

Assignable causes are sources of unusual process variation. These are intermittent or permanent changes in the process that are not common to all process observations and that may cause important process variation. Assignable causes are usually of short duration, but they can be persistent or recurring conditions.

For example, in the jar filling situation, one of the filler valves may become clogged so that some jars are being substantially underfilled (or perhaps are not filled at all). Or a relief operator might incorrectly set the filler so that all jars are being substantially overfilled for a short period of time. As another example, suppose that a bank wishes to study the length of time customers must wait before being served by a teller. If a customer fills out a banking form incorrectly, this might cause a temporary delay that increases the waiting time for other customers. Notice that assignable causes such as these can often be remedied by local supervision—for instance, by a production line foreman, a machine operator, a head bank teller, or the like. One objective of SPC is to detect and eliminate assignable causes of process variation. By doing this, we reduce the amount of process variation. This results in improved quality.

It is important to point out that an assignable cause could be beneficial—that is, it could be an unusual process variation resulting in unusually good process performance. In such a situation, we wish to discover the root cause of the variation, and then we wish to incorporate this condition into the process if possible. For instance, suppose we find that a process performs unusually well when a raw material purchased from a particular supplier is used. It might be desirable to purchase as much of the raw material as possible from this supplier.

  When a process exhibits only common cause variation, it will operate in a stable, or consistent, fashion. That is, in the absence of any unusual process variations, the process will display a constant amount of variation around a constant mean. On the other hand, if assignable causes are affecting the process, then the process will not be stable—unusual variations will cause the process mean or variability to change over time. It follows that

1 When a process is influenced only by common cause variation, the process will be in statistical control.

2 When a process is influenced by one or more assignable causes, the process will not be in statistical control.

  In general, in order to bring a process into statistical control, we must find and eliminate undesirable assignable causes of process variation, and we should (if feasible) build desirable assignable causes into the process. When we have done these things, the process is what we call a stable, common cause system. This means that the process operates in a consistent fashion and is predictable. Since there are no unusual process variations, the process (as currently configured) is doing all it can be expected to do.

When a process is in statistical control, management can evaluate the process capability. That is, it can assess whether the process can produce output meeting customer or producer requirements. If it does not, action by local supervision will not remedy the situation—remember, the assignable causes (the sources of process variation that can be dealt with by local supervision) have already been removed. Rather, some fundamental change will be needed in order to reduce common cause variation. For instance, perhaps a new, more modern filler machine must be purchased and installed. This will require action by management.

Finally, the SPC approach is really a philosophy of doing business. It is an entire firm or organization that is focused on a single goal: continuous quality and productivity improvement. The impetus for this philosophy must come from management. Unless management is supportive and directly involved in the ongoing quality improvement process, the SPC approach will not be successful.

SAMPLING, SUBGROUPING, AND CONTROL CHARTS

In order to find and eliminate assignable causes of process variation, we sample output from the process. To do this, we first decide which process variables—that is, which process characteristics—will be studied. Several graphical techniques (sometimes called prestatistical tools) are used here. Pareto charts help identify problem areas and opportunities for improvement. Cause-and-effect diagrams help uncover sources of process variation and potentially important process variables. The goal is to identify process variables that can be studied in order to decrease the gap between customer expectations and process performance.

  Whenever possible and economical, it is best to study a quantitative, rather than a categorical, process variable. For example, suppose we are filling 16-ounce jars with grape jelly, and suppose specifications state that each jar should contain between 15.95 and 16.05 ounces of jelly. If we record the fill of each sampled jar by simply noting that the jar either “meets specifications” (the fill is between 15.95 and 16.05 ounces) or “does not meet the specifications,” then we are studying a categorical process variable. However, if we measure and record the amount of grape jelly contained in the jar (say, to the nearest one-hundredth of an ounce), then we are studying a quantitative process variable. Actually measuring the fill is best because this tells us how close we are to the specification limits and thus provides more information. As we will soon see, this additional information often allows us to decide whether to take action on a process by using a relatively small number of measurements.

  When we study a quantitative process variable, we say that we are employing measurement data. To analyze such data, we take a series of samples (usually called subgroups) over time. Each subgroup consists of a set of several measurements; subgroup sizes between 2 and 6 are often used. Summary statistics (for example, means and ranges) for each subgroup are calculated and are plotted versus time. By comparing plot points, we hope to discover when unusual process variations are taking place.
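To make the mechanics concrete, here is a minimal sketch (in Python) of how the subgroup means and ranges that get plotted over time are computed. The jar-fill numbers and variable names are hypothetical and only illustrate the layout of subgrouped measurement data.

```python
# Sketch: computing subgroup means and ranges for measurement data.
# The jar-fill numbers below are hypothetical; one subgroup (here of size 3)
# is collected at each point in time.
subgroups = [
    [16.01, 15.98, 16.03],   # subgroup 1
    [15.97, 16.00, 16.02],   # subgroup 2
    [16.04, 16.05, 15.99],   # subgroup 3
]

for i, sub in enumerate(subgroups, start=1):
    mean = sum(sub) / len(sub)    # subgroup mean (x-bar)
    rng = max(sub) - min(sub)     # subgroup range (R)
    print(f"subgroup {i}: x-bar = {mean:.3f}, R = {rng:.3f}")
```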

  Each subgroup is typically observed over a short period of time—a period of time in which the process operating characteristics do not change much. That is, we employ rational subgroups.

Rational Subgroups

Rational subgroups are selected so that, if process changes of practical importance exist, the chance that these changes will occur between subgroups is maximized and the chance that these changes will occur within subgroups is minimized.

In order to obtain rational subgroups, we must determine the frequency with which subgroups will be selected. For example, we might select a subgroup once every 15 minutes, once an hour, or once a day. In general, we should observe subgroups often enough to detect important process changes. For instance, suppose we wish to study a process, and suppose we feel that workers’ shift changes (that take place every eight hours) may be an important source of process variation. In this case, rational subgroups can be obtained by selecting a subgroup during each eight-hour shift. Here shift changes will occur between subgroups. Therefore, if shift changes are an important source of variation, the rational subgroups will enable us to observe the effects of these changes by comparing plot points for different subgroups (shifts). However, in addition, suppose hourly machine resets are made, and we feel that these resets may also be an important source of process variation. In this case, rational subgroups can be obtained by selecting a subgroup during each hour. Here machine resets will occur between subgroups, and we will be able to observe their effects by comparing plot points for different subgroups (hours). If in this situation we selected one subgroup each eight-hour shift, we would not obtain rational subgroups. This is because hourly machine resets would occur within subgroups, and we would not be able to observe the effects of these resets by comparing plot points for different shifts. In general, it is very important to try to identify important sources of variation (potential assignable causes such as shift changes, resets, and so on) before deciding how subgroups will be selected. As previously stated, constructing a cause-and-effect diagram helps uncover these sources of variation.

Once we determine the sampling frequency, we need to determine the subgroup size—that is, the number of measurements that will be included in each subgroup—and how we will actually select the measurements in each subgroup. It is recommended that the subgroup size be held constant. Denoting this constant subgroup size as n, we typically choose n to be from 2 to 6, with n=4 or 5 being a frequent choice. To illustrate how we can actually select the subgroup measurements, suppose we select a subgroup of 5 units every hour from the output of a machine that produces 100 units per hour. We can select these units by using a consecutive, periodic, or random sampling process. If we employ consecutive sampling, we would select 5 consecutive units produced by the machine at the beginning of (or at some time during) each hour. Here production conditions—machine operator, machine setting, raw material batch, and so forth—will be as constant as possible within the subgroup. Such a subgroup provides a “freeze-frame picture” of the process at a particular point in time. Thus the chance of variations occurring within the subgroups is minimized. If we use periodic sampling, we would select 5 units periodically through each hour. For example, since the machine produces 100 units per hour, we could select the 1st, 21st, 41st, 61st, and 81st units produced. If we use random sampling, we would use a random number table to randomly select 5 of the 100 units produced during each hour. If production conditions are really held fairly constant during each hour, then consecutive, periodic, and random sampling will each provide a similar representation of the process. If production conditions vary considerably during each hour, and if we are able to recognize this variation by using a periodic or random sampling procedure, this would tell us that we should be sampling the process more often than once an hour. Of course, if we are using periodic or random sampling every hour, we might not realize that the process operates with considerably less variation during shorter periods (perhaps because we have not used a consecutive sampling procedure). We therefore might not recognize the extent of the hourly variation.
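The three subgroup-selection schemes can be illustrated with a short sketch. Assume, hypothetically, that the 100 units produced in an hour are labeled 1 through 100; the Python code below shows one way each scheme might pick a subgroup of n = 5 labels.

```python
import random

# Sketch: three ways to pick a subgroup of n = 5 units from the 100 units
# a machine produces in one hour. The unit labels 1-100 are hypothetical.
units = list(range(1, 101))
n = 5

consecutive = units[:n]                          # 5 consecutive units at the start of the hour
periodic = units[::len(units) // n][:n]          # every 20th unit: 1, 21, 41, 61, 81
random_sample = sorted(random.sample(units, n))  # 5 units chosen at random

print("consecutive:", consecutive)
print("periodic:   ", periodic)
print("random:     ", random_sample)
```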

Lastly, it is important to point out that we must also take subgroups for a period of time that is long enough to give potential sources of variation a chance to show up. If, for instance, different batches of raw materials are suspected to be a significant source of process variation, and if we receive new batches every few days, we may need to collect subgroups for several weeks in order to assess the effects of the batch-to-batch variation. A statistical rule of thumb says that we require at least 20 subgroups of size 4 or 5 in order to judge statistical control and in order to obtain reasonable estimates of the process mean and variability. However, practical considerations may require the collection of much more data. We now look at two more concrete examples of subgrouped data.

EXAMPLE 17.1 The Hole Location Case

A manufacturer produces automobile air conditioner compressor shells. The compressor shell is basically the outer metal housing of the compressor. Several holes of various sizes must be punched into the shell to accommodate hose connections that must be made to the compressor. If any one of these holes is punched in the wrong location, the compressor shell becomes a piece of scrap metal (at considerable cost to the manufacturer). Figure 17.1(a) illustrates a compressor shell (note the holes that have been punched in the housing). Experience with the hole-punching process suggests that substantial changes (machine resets, equipment lubrication, and so forth) can occur quite frequently—as often as two or three times an hour. Because we wish to observe the impact of these changes if and when they occur, rational subgroups are obtained by selecting a subgroup every 20 minutes or so. Specifically, about every 20 minutes five compressor shells are consecutively selected from the process output. For each shell selected, a measurement that helps to specify the location of a particular hole in the compressor shell is made. The measurement is taken by measuring from one of the edges of the compressor shell (called the trim edge) to the bottom of the hole [see Figure 17.1(a)]. Obviously, it is not possible to measure to the center of the hole because you cannot tell where it is! The target value for the measured dimension is 3.00 inches. Of course, the manufacturer would like as little variation around the target as possible. Figure 17.1(b) gives the measurements obtained for 20 subgroups that were selected between 8 A.M. and 2:20 P.M. on a particular day. Here a subgroup consists of the five measurements labeled 1 through 5 in a single row in the table. Notice that Figure 17.1(b) also gives the mean, x̄, and the range, R, of the measurements in each subgroup.

EXAMPLE 17.2 The Hot Chocolate Temperature Case

Since 1994 a number of consumers have filed and won large claims against national fast-food chains as a result of being scalded by excessively hot beverages such as coffee, tea, and hot chocolate. Because of such litigation, the food service staff at a university dining hall wishes to study the temperature of the hot chocolate dispensed by its hot chocolate machine. The dining hall staff believes that there might be substantial variations in hot chocolate temperatures from meal to meal. Therefore, it is decided that at least one subgroup of hot chocolate temperatures will be observed during each meal—breakfast (6:30 A.M. to 10 A.M.), lunch (11 A.M. to 1:30 P.M.), and dinner (5 P.M. to 7:30 P.M.). In addition, since the hot chocolate machine is heavily used during most meals, the dining hall staff also believes that hot chocolate temperatures might vary substantially from the beginning to the end of a single meal. It follows that the staff will obtain rational subgroups by selecting a subgroup a half hour after the beginning of each meal and by selecting another subgroup a half hour prior to the end of each meal. Specifically, each subgroup will be selected by pouring three cups of hot chocolate over a 10-minute time span using periodic sampling (the second cup will be poured 5 minutes after the first, and the third cup will be poured 5 minutes after the second). The temperature of the hot chocolate will be measured by a candy thermometer (to the nearest degree Fahrenheit) immediately after each cup is poured. Table 17.2 gives the results for 24 subgroups of three hot chocolate temperatures taken at each meal served at the dining hall over a four-day period. Here a subgroup consists of the three temperatures labeled 1 through 3 in a single row in the table. The table also gives the mean, x̄, and the range, R, of the temperatures in each subgroup.

Subgrouped data are used to determine when assignable causes of process variation exist. Typically, we analyze subgrouped data by plotting summary statistics for the subgroups versus time. The resulting plots are often called graphs of process performance. For example, the subgroup means and the subgroup ranges of the hole location measurements in Figure 17.1(b) are plotted in time order on graphs of process performance in the Excel output of Figure 17.2. The subgroup means (x̄ values) and ranges (R values) are plotted on the vertical axis, while the time sequence (in this case, the subgroup number) is plotted on the horizontal axis. The x̄ values and R values for corresponding subgroups are lined up vertically. The plot points on each graph are connected by line segments as a visual aid. However, the lines between the plot points do not really say anything about the process performance between the observed subgroups. Notice that the subgroup means and ranges vary over time.

  If we consider the plot of subgroup means, very high and very low points are undesirable—they represent large deviations from the target hole location dimension (3 inches). If we consider the plot of subgroup ranges, very high points are undesirable (high variation in the hole location dimensions), while very low points are desirable (little variation in the hole location dimensions). We now wish to answer a very basic question. Is the variation that we see on the graphs of performance due to the usual process variation (that is, due to common causes), or is the variation due to one or more assignable causes (unusual variations)? It is possible that unusual variations have occurred and that action should be taken to reduce the variation in production conditions. It is also possible that the variation in the plot points is caused by common causes and that (given the current configuration of the process) production conditions have been held as constant as possible. For example, do the high points on the x̄ plot in Figure 17.2 suggest that one or more assignable causes have increased the hole location dimensions enough to warrant corrective action? As another example, do the high points on the R plot suggest that excess variability in the hole location dimensions exists and that corrective action is needed? Or does the lowest point on the R plot indicate that an improvement in process performance (reduction in variation) has occurred due to an assignable cause?

We can answer these questions by converting the graphs of performance shown in Figure 17.2 on the previous page into control charts. In general, by converting graphs of performance into control charts, we can (with only a small chance of being wrong) determine whether observed process variations are unusual (due to assignable causes). That is, the purpose of a control chart is to monitor a process so we can take corrective action in response to assignable causes when it is needed. This is called statistical process monitoring. The use of “seat of the pants intuition” has not been found to be a particularly effective way to decide whether observed process performance is unusual. By using a control chart, we can reduce our chances of making two possible errors—(1) taking action when none is needed and (2) not taking action when action is needed.

A control chart employs a center line (denoted CNL) and two control limits—an upper control limit (denoted UCL) and a lower control limit (denoted LCL). The center line represents the average performance of the process when it is in a state of statistical control—that is, when only common cause variation exists. The upper and lower control limits are horizontal lines situated above and below the center line. These control limits are established so that, when the process is in control, almost all plot points will be between the upper and lower limits. In practice, the control limits are used as follows:

1 If all observed plot points are between the LCL and UCL (and if no unusual patterns of points exist—this will be explained later), we have no evidence that assignable causes exist and we assume that the process is in statistical control. In this case, only common causes of process variation exist, and no action to remove assignable causes is taken on the process. If we were to take such action, we would be unnecessarily tampering with the process.

2 If we observe one or more plot points outside the control limits, then we have evidence that the process is out of control due to one or more assignable causes. Here we must take action on the process to remove these assignable causes.

Before discussing how to construct control charts, we must emphasize the importance of documenting a process while the subgroups of data are being collected. The time at which each subgroup is taken is recorded, and the person who collected the data is also recorded. Any process changes (machine resets, adjustments, shift changes, operator changes, and so on) must be documented. Any potential sources of variation that may significantly affect the process output should be noted. If the process is not well documented, it will be very difficult to identify the root causes of unusual variations that may be detected when we analyze the subgroups of data.

[Figure 17.1: (a) A compressor shell, showing the hole location dimension measured from the trim edge; (b) 20 subgroups of five hole location measurements with subgroup means and ranges]

[Table 17.2: 24 subgroups of three hot chocolate temperatures with subgroup means and ranges]

[Figure 17.2: Excel graphs of process performance (subgroup means and ranges) for the hole location data]

x̄ AND R CHARTS

x̄ and R charts are the most commonly used control charts for measurement data (such charts are often called variables control charts). Subgroup means are plotted versus time on the x̄ chart, while subgroup ranges are plotted on the R chart. The x̄ chart monitors the process mean or level (we wish to run near a desired target level). The R chart is used to monitor the amount of variability around the process level (we desire as little variability as possible around the target). Note here that we employ two control charts, and that it is important to use the two charts together. If we do not use both charts, we will not get all the information needed to improve the process.

Before seeing how to construct x̄ and R charts, we should mention that it is also possible to monitor the process variability by using a chart for subgroup standard deviations. Such a chart is called an s chart. However, the overwhelming majority of practitioners use R charts rather than s charts. This is partly due to historical reasons. When control charts were developed, electronic calculators and computers did not exist. It was, therefore, much easier to compute a subgroup range than it was to compute a subgroup standard deviation. For this reason, the use of R charts has persisted. Some people also feel that it is easier for factory personnel (some of whom may have little mathematical background) to understand and relate to the subgroup range. In addition, while the standard deviation (which is computed using all the measurements in a subgroup) is a better measure of variability than the range (which is computed using only two measurements), the R chart usually suffices. This is because x̄ and R charts usually employ small subgroups—as mentioned previously, subgroup sizes are often between 2 and 6. For such subgroup sizes, it can be shown that using subgroup ranges is almost as effective as using subgroup standard deviations.

To develop control limits for the x̄ chart, note that each plotted point is the mean of a subgroup of n measurements. If the process is in control, with a constant mean µ and a constant standard deviation σ, then the subgroup means have mean µ and standard deviation σ/√n, so the x̄ chart has center line µ and control limits UCL = µ + 3(σ/√n) and LCL = µ − 3(σ/√n).

If an observed subgroup mean is inside these control limits, we have no evidence to suggest that the process is out of control. However, if the subgroup mean is outside these limits, we conclude that µ and/or σ have changed, and that the process is out of control. The x̄ chart limits are illustrated in Figure 17.3.

If the process is in control, and thus µ and σ stay constant over time, it follows that µ and σ are the mean and standard deviation of all possible process measurements. For this reason, we call µ the process mean and σ the process standard deviation. Since in most real situations we do not know the true values of µ and σ, we must estimate these values. If the process is in control, an appropriate estimate of the process mean µ is

x̿ (x double bar), the average of the k subgroup means:

x̿ = (x̄1 + x̄2 + ... + x̄k) / k

Similarly, the usual process variation is estimated from R̄, the average of the k subgroup ranges:

R̄ = (R1 + R2 + ... + Rk) / k

Using these estimates, the x̄ chart has center line x̿ and control limits x̿ ± A2R̄, where A2 is a control chart constant that depends on the subgroup size n. The R chart has center line R̄, upper control limit D4R̄, and lower control limit D3R̄.

Here the control chart constants D4 and D3 also depend on the subgroup size n. Values of D4 and D3 are given in Table 17.3 for subgroup sizes n = 2 through n = 25. We summarize the center lines and control limits for x̄ and R charts in the following box:

x̄ and R Chart Center Lines and Control Limits

x̄ chart:  center line (CNL) = x̿,  UCL = x̿ + A2R̄,  LCL = x̿ − A2R̄

R chart:  center line (CNL) = R̄,  UCL = D4R̄,  LCL = D3R̄

Here x̿ is the average of the subgroup means, R̄ is the average of the subgroup ranges, and A2, D4, and D3 are control chart constants that depend on the subgroup size n.
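As a concrete illustration of the box above, the sketch below computes the center lines and control limits for a few hypothetical subgroups of size n = 5. The values A2 = 0.577 and D4 = 2.114 are the standard published constants for that subgroup size; for subgroup sizes of 6 or fewer, D3 is not given, so the R chart has no lower control limit.

```python
# Sketch: x-bar and R chart center lines and control limits for subgroups of
# size n = 5. A2 and D4 are standard control chart constants for n = 5
# (about 0.577 and 2.114); the measurement data are hypothetical.
A2, D4 = 0.577, 2.114

subgroups = [
    [2.99, 3.01, 3.00, 3.02, 2.98],
    [3.00, 3.03, 2.99, 3.01, 3.00],
    [2.98, 3.00, 3.02, 3.01, 2.99],
    [3.01, 3.00, 2.97, 3.00, 3.02],
]

xbars = [sum(s) / len(s) for s in subgroups]      # subgroup means
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges

x_double_bar = sum(xbars) / len(xbars)            # center line of the x-bar chart
R_bar = sum(ranges) / len(ranges)                 # center line of the R chart

xbar_UCL = x_double_bar + A2 * R_bar
xbar_LCL = x_double_bar - A2 * R_bar
R_UCL = D4 * R_bar                                # no R chart LCL when n <= 6

print(f"x-bar chart: CNL = {x_double_bar:.4f}, LCL = {xbar_LCL:.4f}, UCL = {xbar_UCL:.4f}")
print(f"R chart:     CNL = {R_bar:.4f}, UCL = {R_UCL:.4f}")
```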

[Example 17.3: Calculation of the trial center lines and control limits for the hole location data]

Control limits such as those computed in Example 17.3 are called trial control limits. Theoretically, control limits are supposed to be computed using subgroups collected while the process is in statistical control. However, it is impossible to know whether the process is in control until we have constructed the control charts. If, after we have set up the x̄ and R charts, we find that the process is in control, we can use the charts to monitor the process.

  If the charts show that the process is not in statistical control (for example, there are plot points outside the control limits), we must find and eliminate the assignable causes before we can calculate control limits for monitoring the process. In order to understand how to find and eliminate assignable causes, we must understand how changes in the process mean and the process variation show up on x̄ and R charts. To do this, consider Figures 17.5 and 17.6. These figures illustrate that, whereas a change in the process mean shows up only on the x̄ chart, a change in the process variation shows up on both the x̄ and R charts. Specifically, Figure 17.5 shows that, when the process mean increases, the sample means plotted on the x̄ chart increase and go out of control. Figure 17.6 shows that, when the process variation (standard deviation, σ) increases,

1 The sample ranges plotted on the R chart increase and go out of control.

2 The sample means plotted on the x̄ chart become more variable (because, as σ increases, the standard deviation of the subgroup means, σ/√n, also increases) and go out of control.

  Since changes in the process mean and in the process variation show up on the x̄ chart, we do not begin by analyzing the x̄ chart. This is because, if there were out-of-control sample means on the x̄ chart, we would not know whether the process mean or the process variation had changed. Therefore, it might be more difficult to identify the assignable causes of the out-of-control sample means because the assignable causes that would cause the process mean to shift could be very different from the assignable causes that would cause the process variation to increase. For instance, unwarranted frequent resetting of a machine might cause the process level to shift up and down, while improper lubrication of the machine might increase the process variation.

  In order to simplify and better organize our analysis procedure, we begin by analyzing the R chart, which reflects only changes in the process variation. Specifically, we first identify and eliminate the assignable causes of the out-of-control sample ranges on the R chart, and then we analyze the x̄ chart. The exact procedure is illustrated in the following example.

EXAMPLE 17.4 The Hole Location Case

Consider the x̄ and R charts for the hole location data that are given in Figure 17.4. To develop control limits that can be used for ongoing control, we first examine the R chart. We find two points above the UCL on the R chart. This indicates that excess within-subgroup variability exists at these points. We see that the out-of-control points correspond to subgroups 7 and 17. Investigation reveals that, when these subgroups were selected, an inexperienced, newly hired operator ran the operation while the regular operator was on break. We find that the inexperienced operator is not fully closing the clamps that fasten down the compressor shells during the hole punching operation. This is causing excess variability in the hole locations. This assignable cause can be eliminated by thoroughly retraining the newly hired operator. Since we have identified and corrected the assignable cause associated with the points that are out of control on the R chart, we can drop subgroups 7 and 17 from the data set. We recalculate center lines and control limits by using the remaining 18 subgroups, first recomputing x̿ and R̄ (omitting the x̄ and R values for subgroups 7 and 17) and then the revised control limits.

On the revised x̄ chart, the subgroup means that are out of control suggest that the process level had shifted when subgroups 1 and 12 were taken. Investigation reveals that these subgroups were observed immediately after start-up at the beginning of the day and immediately after start-up following the lunch break. We find that, if we allow a five-minute machine warm-up period, we can eliminate the process level problem.

  Since we have again found and eliminated an assignable cause, we must compute newly revised center lines and control limits. Dropping subgroups 1 and 12 from the data set, we recompute the center lines and control limits, obtaining the revised x̄ and R charts that are shown in the MINITAB output of Figure 17.8. We see that all the points on each chart are inside their respective control limits. This says that the actions taken to remove assignable causes have brought the process into statistical control. However, it is important to point out that, although the process is in statistical control, this does not necessarily mean that the process is capable of producing products that meet the customer's needs. That is, while the control charts tell us that no assignable causes of process variation remain, the charts do not (directly) tell us anything about how much common cause variation exists. If there is too much common cause variability, the process will not meet customer or manufacturer specifications. We talk more about this later.
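A minimal sketch of this drop-and-revise step, assuming the subgroup means and ranges have already been computed. The numbers and the indices of the dropped subgroups below are hypothetical, not the values from Figures 17.4 through 17.8; A2 and D4 are the standard constants for subgroups of size n = 5.

```python
# Sketch: recomputing x-bar and R chart limits after dropping subgroups whose
# assignable causes have been found and eliminated. Data and dropped indices
# are hypothetical; A2 and D4 are the standard constants for n = 5.
A2, D4 = 0.577, 2.114

xbars = [3.001, 3.004, 2.999, 3.020, 3.002, 2.998, 3.003]   # subgroup means
ranges = [0.05, 0.06, 0.04, 0.15, 0.05, 0.06, 0.05]         # subgroup ranges
dropped = {3}   # e.g., subgroup 4 (index 3) traced to an eliminated assignable cause

kept_xbars = [x for i, x in enumerate(xbars) if i not in dropped]
kept_ranges = [r for i, r in enumerate(ranges) if i not in dropped]

x_double_bar = sum(kept_xbars) / len(kept_xbars)
R_bar = sum(kept_ranges) / len(kept_ranges)

print(f"revised x-bar chart: CNL = {x_double_bar:.4f}, "
      f"limits = {x_double_bar - A2 * R_bar:.4f} to {x_double_bar + A2 * R_bar:.4f}")
print(f"revised R chart:     CNL = {R_bar:.4f}, UCL = {D4 * R_bar:.4f}")
```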

  When both the x̄ and R charts are in statistical control, we can use the control limits for ongoing process monitoring. New x̄ and R values for subsequent subgroups are plotted with respect to these limits. Plot points outside the control limits indicate the existence of assignable causes and the need for action on the process. The appropriate corrective action can often be taken by local supervision. Sometimes management intervention may be needed. For example, if the assignable cause is out-of-specification raw materials, management may have to work with a supplier to improve the situation. The ongoing control limits occasionally need to be updated to include newly observed data. However, since employees often seem to be uncomfortable working with limits that are frequently changing, it is probably a good idea to update center lines and control limits only when the new data would substantially change the limits. Of course, if an important process change is implemented, new data must be collected, and we may need to develop new center lines and control limits from scratch.

  Sometimes it is not possible to find an assignable cause, or it is not possible to eliminate the assignable cause even when it can be identified. In such a case, it is possible that the original (or partially revised) trial control limits are good enough to use; this will be a subjective decision. Occasionally, it is reasonable to drop one or more subgroups that have been affected by an assignable cause that cannot be eliminated. For example, the assignable cause might be an event that very rarely occurs and is unpreventable. If the subgroup(s) affected by the assignable cause have a detrimental effect on the control limits, we might drop the subgroups and calculate revised limits. Another alternative is to collect new data and use them to calculate control limits.

In the following box we summarize the most important points we have made regarding the analysis of x̄ and R charts:

Analyzing x̄ and R Charts to Establish Process Control

1 Remember that it is important to use both the x̄ chart and the R chart to study the process.

2 Begin by analyzing the R chart for statistical control.

a Find and eliminate assignable causes that are indicated by the R chart.

b Revise both the x̄ and R chart control limits, dropping data for subgroups corresponding to assignable causes that have been found and eliminated in 2a.

c Check the revised R chart for control.

d Repeat 2a, b, and c as necessary until the R chart shows statistical control.

3 When the R chart is in statistical control, the x̄ chart can be properly analyzed.

a Find and eliminate assignable causes that are indicated by the x̄ chart.

b Revise both the x̄ and R chart control limits, dropping data for subgroups corresponding to assignable causes that have been found and eliminated in 3a.

c Check the revised x̄ chart (and the revised R chart) for control.

d Repeat 3a, b, and c (or, if necessary, 2a, b, and c and 3a, b, and c) as needed until both the x̄ and R charts show statistical control.

4 When both the x̄ and R charts are in control, use the control limits for process monitoring.

a Plot x̄ and R points for newly observed subgroups with respect to the established limits.

b If either the x̄ chart or the R chart indicates a lack of control, take corrective action on the process.

5 Periodically update the x̄ and R control limits using all relevant data (data that describe the process as it now operates).

6 When a major process change is made, develop new control limits if necessary.

EXAMPLE 17.5 The Hole Location Case

We consider the hole location problem and the revised x̄ and R charts shown in Figure 17.8. Since the process has been brought into statistical control, we may use the control limits in Figure 17.8 to monitor the process. This would assume that we have used an appropriate subgrouping scheme and have observed enough subgroups to give potential assignable causes a chance to show up. In reality, we probably want to collect considerably more than 20 subgroups before setting control limits for ongoing control of the process.

  We assume for this example that the control limits in Figure 17.8 are reasonable. Table 17.4 gives four subsequently observed subgroups of five hole location dimensions. The subgroup means and ranges for these data are plotted with respect to the ongoing control limits in the MINITAB output of Figure 17.9. We see that the R chart remains in control, while the mean for subgroup 24 is above the UCL on the x̄ chart. This tells us that an assignable cause has increased the process mean. Therefore, action is needed to reduce the process mean.
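A sketch of this ongoing monitoring step follows. The limits are recomputed from the in-control values x̿ = 3.0006 and R̄ = .0675 for this process (quoted later in Example 17.9) using the standard n = 5 constants A2 = 0.577 and D4 = 2.114, so they may differ slightly from the limits printed in Figure 17.9 because of rounding; the new subgroup measurements are made up.

```python
# Sketch: checking newly observed subgroups against established control limits.
# Limits are computed from x-double-bar = 3.0006 and R-bar = 0.0675 with the
# standard n = 5 constants (A2 = 0.577, D4 = 2.114); new data are hypothetical.
x_double_bar, R_bar = 3.0006, 0.0675
A2, D4 = 0.577, 2.114

xbar_LCL = x_double_bar - A2 * R_bar   # about 2.9617
xbar_UCL = x_double_bar + A2 * R_bar   # about 3.0396
R_UCL = D4 * R_bar                     # about 0.1427

new_subgroups = [
    [3.00, 3.01, 2.99, 3.02, 3.00],    # hypothetical in-control subgroup
    [3.05, 3.07, 3.04, 3.08, 3.06],    # hypothetical shifted subgroup
]

for i, sub in enumerate(new_subgroups, start=21):
    xbar = sum(sub) / len(sub)
    R = max(sub) - min(sub)
    flags = []
    if not (xbar_LCL <= xbar <= xbar_UCL):
        flags.append("x-bar outside control limits")
    if R > R_UCL:
        flags.append("R above UCL")
    print(f"subgroup {i}: x-bar = {xbar:.4f}, R = {R:.4f} ->",
          "; ".join(flags) if flags else "no evidence of assignable causes")
```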

EXAMPLE 17.6 The Hot Chocolate Temperature Case

Consider the hot chocolate data given in Table 17.2. In order to set up x̄ and R charts for these data, we compute

[Example 17.6: Calculation of x̿, R̄, and the trial control limits for the hot chocolate temperature data]

Since D3 is not given in Table 17.3 for the subgroup size n = 3, the R chart does not have a lower control limit.

  The x̄ and R charts for the hot chocolate data are given in the Excel add-in (MegaStat) output of Figure 17.10. We see that the R chart is in good statistical control, while the x̄ chart is out of control with three subgroup means above the UCL and with one subgroup mean below the LCL. Looking at the x̄ chart, we see that the subgroup means that are above the UCL were observed during lunch (note subgroups 4, 10, and 22). Investigation and process documentation reveal that on these days the hot chocolate machine was not turned off between breakfast and lunch. Discussion among members of the dining hall staff further reveals that, because there is less time between breakfast and lunch than there is between lunch and dinner or dinner and breakfast, the staff often fails to turn off the hot chocolate machine between breakfast and lunch. Apparently, this is the reason behind the higher hot chocolate temperatures observed during lunch. Investigation also shows that the dining hall staff failed to turn on the hot chocolate machine before breakfast on Thursday (see subgroup 19)—in fact, a student had to ask that the machine be turned on. This caused the subgroup mean for subgroup 19 to be far below the x̄ chart LCL. The dining hall staff concludes that the hot chocolate machine needs to be turned off after breakfast and then turned back on 15 minutes before lunch (prior experience suggests that it takes the machine 15 minutes to warm up). The staff also concludes that the machine should be turned on 15 minutes before each meal. In order to ensure that these actions are taken, an automatic timer is purchased to turn on the hot chocolate machine at the appropriate times. This brings the process into statistical control. Figure 17.11 shows x̄ and R charts with revised control limits calculated using the subgroups that remain after the subgroups for the out-of-control lunches (subgroups 3, 4, 9, 10, 21, and 22) and the out-of-control breakfast (subgroups 19 and 20) are eliminated from the data set. We see that these revised control charts are in statistical control.

If the process is in statistical control, the process standard deviation σ can be estimated by R̄/d2, where d2 is a control chart constant that depends on the subgroup size n.

Of course, we could also compute the standard deviation of the measurements in each subgroup, and employ the average of the subgroup standard deviations to estimate σ. The key is not whether we use ranges or standard deviations to measure the variation within the subgroups. Rather, the key is that we must calculate a measure of variation for each subgroup and then must average the separate measures of subgroup variation in order to estimate the process variation that exists when the process is in control.
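A small sketch of the two estimates just described, using hypothetical subgroups of size n = 5. The values d2 = 2.326 and c4 (about 0.9400) are the standard unbiasing constants for that subgroup size; c4 is not used elsewhere in this chapter and is introduced here only for the comparison.

```python
import statistics

# Sketch: two common ways to estimate the process standard deviation from
# within-subgroup variation. d2 and c4 are standard unbiasing constants for
# subgroups of size n = 5 (d2 = 2.326, c4 about 0.9400); data are hypothetical.
d2, c4 = 2.326, 0.9400

subgroups = [
    [2.99, 3.01, 3.00, 3.02, 2.98],
    [3.00, 3.03, 2.99, 3.01, 3.00],
    [2.98, 3.00, 3.02, 3.01, 2.99],
]

R_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
s_bar = sum(statistics.stdev(s) for s in subgroups) / len(subgroups)

print(f"sigma estimated from average range:              {R_bar / d2:.4f}")
print(f"sigma estimated from average standard deviation: {s_bar / c4:.4f}")
```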


COMPARISON OF A PROCESS WITH SPECIFICATIONS: CAPABILITY STUDIES

If we have a process in statistical control, we have found and eliminated the assignable causes of process variation. Therefore, the individual process measurements fluctuate over time with a constant standard deviation σ around a constant mean µ. It follows that we can use the individual process measurements to estimate µ and σ. Doing this lets us determine if the process is capable of producing output that meets specifications. Specifications are based on fitness for use criteria—that is, the specifications are established by design engineers or customers. Even if a process is in statistical control, it may exhibit too much common cause variation (represented by σ ) to meet specifications.  

     As will be shown in Example 17.9, one way to study the capability of a process that is in statistical control is to construct a histogram from a set of individual process measurements. The histogram can then be compared with the product specification limits. In addition, we know that if all possible individual process measurements are normally distributed with mean µ and standard deviation σ, then 99.73 percent of these measurements will be in the interval [µ - 3σ, µ + 3σ].

Natural Tolerance Limits
If a process is in statistical control and its individual measurements are approximately normally distributed, the natural tolerance limits for the process are [x̿ - 3σ̂, x̿ + 3σ̂], where x̿ estimates the process mean and σ̂ = R̄/d2 estimates the process standard deviation.

If the natural tolerance limits are inside the specification limits, then almost all (99.73 percent) of the individual process measurements are produced within the specification limits. In this case we say that the process is capable of meeting specifications. Furthermore, if we use x̄ and R charts to monitor the process, then as long as the process remains in statistical control, the process will continue to meet the specifications. If the natural tolerance limits are wider than the specification limits, we say that the process is not capable. In this case some individual process measurements fall outside the specification limits.
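This comparison amounts to a simple calculation. The Python sketch below uses the hole location figures that appear in Example 17.9 (x̿ = 3.0006, R̄ = .0675, n = 5 so d2 = 2.326, and specification limits 2.95 and 3.05 inches); it is illustrative only and is not part of the original text.

```python
# Sketch: compare natural tolerance limits with specification limits
# (hole location example: x-double-bar = 3.0006, R-bar = .0675, n = 5).
d2 = 2.326
x_bar_bar, r_bar = 3.0006, 0.0675
LSL, USL = 2.95, 3.05                 # customer specifications: 3.00 +/- .05 inches

sigma_hat = r_bar / d2                # estimated process standard deviation
ntl_lower = x_bar_bar - 3 * sigma_hat
ntl_upper = x_bar_bar + 3 * sigma_hat

print(f"natural tolerance limits: [{ntl_lower:.4f}, {ntl_upper:.4f}]")
if LSL <= ntl_lower and ntl_upper <= USL:
    print("process is capable of meeting specifications")
else:
    print("process is NOT capable of meeting specifications")
```

Running this sketch reproduces the limits [2.9135, 3.0877] used in Example 17.9 and reports that the process is not capable.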


EXAMPLE 17.9 The Hole Location Case

Again consider the hole punching process for air conditioner compressor shells. Recall that we were able to bring this process into a state of statistical control, with x̿ = 3.0006 and R̄ = .0675, by removing several assignable causes of process variation.

Figure 17.20 gives a relative frequency histogram of the 80 individual hole location measurements used to construct the x̄ and R charts of Figure 17.8. This histogram suggests that the population of all individual hole location dimensions is approximately normally distributed.

The natural tolerance limits

[x̿ - 3(R̄/d2), x̿ + 3(R̄/d2)] = [3.0006 - 3(.0675/2.326), 3.0006 + 3(.0675/2.326)] = [2.9135, 3.0877]

tell us that almost all (approximately 99.73 percent) of the individual hole location dimensions produced by the hole punching process are between 2.9135 inches and 3.0877 inches.

Suppose a major customer requires that the hole location dimension must meet specifications of 3.00 ± .05 inches. That is, the customer requires that every individual hole location dimension must be between 2.95 inches and 3.05 inches. The natural tolerance limits, [2.9135, 3.0877], which contain almost all individual hole location dimensions, are wider than the specification limits [2.95, 3.05]. This says that some of the hole location dimensions are outside the specification limits. Therefore, the process is not capable of meeting the specifications. Note that the histogram in Figure 17.20 also shows that some of the hole location dimensions are outside the specification limits.

  Figure 17.21 illustrates the situation, assuming that the individual hole location dimensions are normally distributed. The figure shows that the natural tolerance limits are wider than the specification limits. The shaded areas under the normal curve make up the fraction of product that is outside the specification limits. Figure 17.21 also shows the calculation of the estimated fraction of hole location dimensions that are out of specification. We estimate that 8.55 percent of the dimensions do not meet the specifications.
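The estimate reported in Figure 17.21 can be reproduced (up to rounding) with the normal distribution. The Python sketch below is illustrative only; it assumes the dimensions are normally distributed with mean 3.0006 and standard deviation .0675/2.326.

```python
# Sketch: estimated fraction of hole location dimensions outside the
# specification limits, assuming a normal distribution.
from math import erf, sqrt

def norm_cdf(z):
    # standard normal cumulative probability
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu_hat = 3.0006
sigma_hat = 0.0675 / 2.326
LSL, USL = 2.95, 3.05

frac_out = norm_cdf((LSL - mu_hat) / sigma_hat) + (1.0 - norm_cdf((USL - mu_hat) / sigma_hat))
print(f"estimated fraction out of specification: {frac_out:.4f}")
```

This gives roughly .085, or about 8.5 percent; the 8.55 percent reported in Figure 17.21 differs only because of rounding in the intermediate calculations.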

  Since the process is not capable of meeting specifications, it must be improved by removing common cause variation. This is management’s responsibility. Suppose engineering and management conclude that the excessive variation in the hole locations can be reduced by redesigning the machine that punches the holes in the compressor shells. Also suppose that, after a research and development program is carried out to do this, the process is run using the new machine and 20 new subgroups of n = 5 hole location measurements are obtained. The resulting x̄ and R charts (not given here) indicate that the process is in control with x̿ = 3.0002 and R̄ = .0348. Furthermore, a histogram of the 100 hole location dimensions used to construct the x̄ and R charts indicates that the population of all possible hole location dimensions is approximately normally distributed. It follows that we estimate that almost all individual hole location dimensions are contained within the new natural tolerance limits

[x̿ - 3(R̄/d2), x̿ + 3(R̄/d2)] = [3.0002 - 3(.0348/2.326), 3.0002 + 3(.0348/2.326)] = [2.9553, 3.0451]

Because these new natural tolerance limits are inside the specification limits [2.95, 3.05], the improved process is capable of meeting the specifications. Furthermore,

(3.05 - x̿)/(R̄/d2) = (3.05 - 3.0002)/(.0348/2.326) = 3.33

This says that the upper specification limit is 3.33 estimated process standard deviations above x̿. Since the upper natural tolerance limit is 3 estimated process standard deviations above x̿, there is a leeway of .33 estimated process standard deviations between the upper natural tolerance limit and the upper specification limit (see Figure 17.22). Because some leeway exists between the natural tolerance limits and the specification limits, the distribution of process measurements (that is, the curve in Figure 17.22) can shift slightly to the right or left (or can become slightly more spread out) without violating the specifications. Obviously, the more leeway, the better.

To understand why process leeway is important, recall that a process must be in statistical control before we can assess the capability of the process. In fact:

In order to demonstrate that a company’s product meets customer requirements, the company must present

1 x̄ and R charts that are in statistical control.

2 Natural tolerance limits that are within the specification limits.

However, even if a capable process shows good statistical control, the process mean and/or the process variation will occasionally change (due to new assignable causes or unexpected recurring problems). If the process mean shifts and/or the process variation increases, a process will need some leeway between the natural tolerance limits and the specification limits in order to avoid producing out-of-specification product. We can determine the amount of process leeway (if any exists) by defining what we call the sigma level capability of the process.

Sigma Level Capability
The sigma level capability of a process is the number of estimated process standard deviations between the estimated process mean, x̿, and the specification limit that is closest to x̿.

 

 For instance, in the previous example the lower specification limit (2.95) is 3.36 estimated process standard deviations below the estimated process mean, x̿, and the upper specification limit (3.05) is 3.33 estimated process standard deviations above x̿. It follows that the upper specification limit is closest to the estimated process mean, and because this specification limit is 3.33 estimated process standard deviations from x̿, we say that the hole punching process has 3.33 sigma capability.

If a process has a sigma level capability of three or more, then there are at least three estimated process standard deviations between x̿ and the specification limit that is closest to x̿. It follows that, if the process measurements are normally distributed, the process is capable of meeting the specifications. For instance, Figure 17.23(a) illustrates a process with three sigma capability. This process is just barely capable; that is, there is no process leeway. Figure 17.23(b) illustrates a process with six sigma capability. This process has three standard deviations of leeway. In general, if a process is capable, the sigma level capability expresses the amount of process leeway: the higher the sigma level capability, the more process leeway. More specifically, for a capable process, the sigma level capability minus three gives the number of estimated standard deviations of process leeway. For example, since the hole punching process has 3.33 sigma capability, this process has 3.33 - 3 = .33 estimated standard deviations of leeway.
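For the improved hole punching process, the sigma level capability and the resulting leeway can be computed directly, as in the following illustrative Python sketch (using x̿ = 3.0002, R̄ = .0348, d2 = 2.326, and specification limits 2.95 and 3.05 from the example above).

```python
# Sketch: sigma level capability and process leeway for the improved
# hole punching process.
d2 = 2.326
x_bar_bar, r_bar = 3.0002, 0.0348
LSL, USL = 2.95, 3.05

sigma_hat = r_bar / d2
sigma_level = min(USL - x_bar_bar, x_bar_bar - LSL) / sigma_hat
leeway = sigma_level - 3            # estimated standard deviations of leeway (if positive)

print(f"sigma level capability = {sigma_level:.2f}")                  # about 3.33
print(f"process leeway = {leeway:.2f} estimated standard deviations")  # about .33
```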

  The difference between three sigma and six sigma capability is dramatic. To illustrate this, look at Figure 17.23(a), which shows that a normally distributed process with three sigma capability produces 99.73 percent good quality (the area under the distribution curve between the specification limits is .9973). On the other hand, Figure 17.23(b) shows that a normally distributed process with six sigma capability produces 99.9999998 percent good quality. Said another way, if the process mean is centered between the specification limits, and if we produce large quantities of product, then a normally distributed process with three sigma capability will produce an average of 2,700 defective products per million, while a normally distributed process with six sigma capability will produce an average of only .002 defective products per million.
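The defective-parts-per-million figures quoted here, and the worst-case figures given in the next paragraph for a sustained 1.5 standard deviation shift of the process mean, can be checked with the standard normal distribution. The Python sketch below is illustrative only.

```python
# Sketch: defective parts per million for a normally distributed process,
# given its sigma level capability and a sustained shift of the process mean
# (measured in process standard deviations).
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def defects_per_million(sigma_level, shift=0.0):
    # fraction outside the specification limits when the mean has shifted
    # by `shift` standard deviations toward one of the limits
    frac_out = (1.0 - norm_cdf(sigma_level - shift)) + norm_cdf(-sigma_level - shift)
    return 1e6 * frac_out

print(defects_per_million(3))        # about 2,700 per million
print(defects_per_million(6))        # about .002 per million
print(defects_per_million(3, 1.5))   # about 66,800 per million
print(defects_per_million(6, 1.5))   # about 3.4 per million
```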

In the long run, however, process shifts due to assignable causes are likely to occur. It can be shown that, if we monitor the process by using an x̄ chart that employs a typical subgroup size of 4 to 6, the largest sustained shift of the process mean that might remain undetected by the x̄ chart is a shift of 1.5 process standard deviations. In this worst case, a normally distributed three sigma capable process will produce an average of 66,800 defective products per million (clearly unacceptable), while a normally distributed six sigma capable process will produce an average of only 3.4 defective products per million. Therefore, if a six sigma capable process is monitored by x̄ and R charts, then, when a process shift occurs, we can detect the shift (by using the control charts) and take immediate corrective action before a substantial number of defective products are produced. This is, in fact, how control charts are supposed to be used to prevent the production of defective product. That is, our strategy is

Prevention Using Control Charts

1 Reduce common cause variation in order to create leeway between the natural tolerance limits and the specification limits.

2 Use control charts to establish statistical control and to monitor the process.

3 When the control charts give out-of-control signals, take immediate action on the process to reestablish control before out-of-specification product is produced.

  Since 1987, a number of U.S. companies have adopted a six sigma philosophy. In fact, these companies refer to themselves as six sigma companies. It is the goal of these companies to achieve six sigma capability for all processes in the entire organization. For instance, Motorola, Inc., the first company to adopt a six sigma philosophy, began a five-year quality improvement program in 1987. The goal of Motorola’s companywide defect reduction program is to achieve six sigma capability for all processes—for instance, manufacturing processes, delivery, information systems, order completeness, accuracy of transactions records, and so forth. As a result of its six sigma plan, Motorola claims to have saved more than $1.5 billion. The corporation won the Malcolm Baldrige National Quality Award in 1988, and Motorola’s six sigma plan has become a model for firms that are committed to quality improvement. Other companies that have adopted the six sigma philosophy include IBM, Digital Equipment Corporation, and General Electric.

    To conclude this section, we make two comments. First, it has been traditional to measure process capability by using what is called the Cpk index. This index is calculated by dividing the sigma level capability by three. For example, since the hole punching process illustrated in Figure 17.22 has a sigma level capability of 3.33, the Cpk for this process is 1.11. In general, if Cpk is at least 1, then the sigma level capability of the process is at least 3 and thus the process is capable. Historically, Cpk has been used because its value relative to the number 1 describes the process capability. We prefer using sigma level capability to characterize process capability because we believe it is more intuitive.
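As an illustration (not from the original text), the sketch below computes Cpk for the improved hole punching process of Figure 17.22 and confirms that it equals the sigma level capability divided by three.

```python
# Sketch: Cpk for the improved hole punching process
# (x-double-bar = 3.0002, R-bar = .0348, n = 5, specifications 2.95 to 3.05).
d2 = 2.326
x_bar_bar, r_bar = 3.0002, 0.0348
LSL, USL = 2.95, 3.05

sigma_hat = r_bar / d2
sigma_level = min(USL - x_bar_bar, x_bar_bar - LSL) / sigma_hat
cpk = sigma_level / 3               # equivalently min(USL - mean, mean - LSL) / (3 * sigma)
print(f"Cpk = {cpk:.2f}")           # about 1.11
```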
