
A complete set of Six Sigma core tools and their applications: improving the quality of enterprise process management

Six Sigma management is a highly professional and important approach to quality management. Its ideas, concepts and tools are especially valuable to enterprises that already have a certain management foundation and want to develop and upgrade further. It is a set of implementation principles and techniques that can rigorously, intensively and efficiently improve the quality of enterprise process management. This article introduces some of the tools commonly used in Six Sigma management.

1. FMEA

Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) are widely used in reliability engineering, and these techniques have been applied successfully abroad to solve a wide range of quality problems. The ISO 9004:2000 standard adopts FMEA and FTA as risk assessment methods for design and development and for the validation of and changes to products and processes. At present, FMEA and FTA are applied in China almost exclusively to reliability design analysis; however, according to the foreign literature and the practice of some Chinese enterprise engineers, FMEA and FTA can also be applied to process analysis and to the analysis of quality problems. Quality is a very broad concept, and reliability is one of its aspects.

Through FMEA and FTA analysis, potential quality problems, the failure modes affecting product quality and reliability, and their causes (design defects, process problems, environmental factors, aging, wear, processing errors, etc.) are identified; corrective measures in design and process then improve both product quality and the ability to resist various disturbances. According to the literature, about 50% of the quality improvement at one world-class automobile company is achieved through FMEA and FTA.
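
FMEA commonly ranks failure modes by a Risk Priority Number (RPN), the product of severity, occurrence and detection ratings. Below is a minimal sketch in Python; the failure modes and ratings are hypothetical.

```python
# Minimal FMEA sketch: rank hypothetical failure modes by Risk Priority Number.
# RPN = Severity x Occurrence x Detection, each rated on a 1-10 scale.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Seal leaks under thermal cycling", 8, 4, 3),
    ("Connector pin corrosion",          6, 5, 6),
    ("Firmware watchdog not triggered",  9, 2, 8),
]

ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for name, rpn in ranked:
    print(f"RPN {rpn:4d}  {name}")
# The highest-RPN modes are the first candidates for corrective action.
```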

2. Kano Model

The Japanese quality expert Noriaki Kano divides quality into three kinds according to how the customer perceives it and the degree to which customer needs are met: must-be (taken-for-granted) quality, expected (one-dimensional) quality, and attractive (charm) quality.

A: Must-be quality. When its features are insufficient (do not meet customer needs), the customer is very dissatisfied; when they are sufficient (meet customer needs), no particular satisfaction results, and the customer is at best neutral.

B: Expected quality, also called one-dimensional quality. When its features are insufficient, the customer is dissatisfied; when they are sufficient, the customer is satisfied. The less sufficient, the more dissatisfied; the more sufficient, the more satisfied.

C: Attractive quality. When its features are absent, the customer does not mind; when they are present, the customer is delighted.

Must-be quality is the baseline quality, the most basic need that must be met. Expected quality is the common form of quality. Attractive quality is the competitive element of quality. It usually has the following characteristics:

1. New functions that have never appeared before;

2. Greatly improved performance;

3. The introduction of a mechanism never seen, or even considered, before, which greatly improves customer loyalty;

4. A completely new style.

The Kano model's three categories of quality point the direction for Six Sigma improvement. For must-be quality, it is necessary to ensure that the basic quality characteristics meet the specification (standard) and satisfy the customer's basic requirements; the project team should focus on reducing the failure rate. For expected quality, the concern is not compliance with the specification (standard) but improvement of the specification (standard) itself, continuously raising the quality characteristics to drive customer satisfaction upward. For attractive quality, the aim is to satisfy the customer's latent needs and give the product or service unexpected new qualities; the project team should maintain the first two kinds of quality while exploring customer needs and creating new, unexpected qualities.
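
In practice, features are often classified with the standard Kano questionnaire, which pairs a "functional" answer (how the customer feels if the feature is present) with a "dysfunctional" answer (how they feel if it is absent). A minimal sketch of that lookup, with hypothetical answers:

```python
# Sketch of the standard Kano questionnaire evaluation (hypothetical data).
# Each answer is one of: like, must-be, neutral, live-with, dislike.

KANO_TABLE = {
    ("like", "dislike"): "one-dimensional (expected)",
    ("like", "must-be"): "attractive",
    ("like", "neutral"): "attractive",
    ("like", "live-with"): "attractive",
    ("must-be", "dislike"): "must-be (taken for granted)",
    ("neutral", "dislike"): "must-be (taken for granted)",
    ("live-with", "dislike"): "must-be (taken for granted)",
}

def classify(functional: str, dysfunctional: str) -> str:
    if functional == dysfunctional:
        return "indifferent or questionable"
    return KANO_TABLE.get((functional, dysfunctional), "indifferent")

print(classify("like", "neutral"))     # attractive
print(classify("neutral", "dislike"))  # must-be (taken for granted)
print(classify("like", "dislike"))     # one-dimensional (expected)
```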

3. POKA-YOKE

POKA-YOKE means "mistake-proofing". The Japanese quality expert Shigeo Shingo, one of the chief architects of the Toyota Production System, drew on his long experience of shop-floor quality improvement to pioneer the POKA-YOKE concept and develop it into a tool for achieving zero defects and ultimately eliminating quality inspection.

The basic ideas of POKA-YOKE are as follows:

(1) Never allow even one defective product. To become a world-class enterprise, "0 defects" must be achieved not only in concept but also in practice.

(2) The production site is a complex environment in which anything may happen on any given day; errors lead to defects, and defects lead to customer dissatisfaction and wasted resources.

(3) Errors cannot be eliminated entirely, but they must be discovered and corrected promptly so that they do not turn into defects.
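
In software or data-collection settings the same idea can be expressed as rejecting a wrong input at the exact point where it occurs rather than at final inspection. A minimal sketch with hypothetical bolt-torque limits:

```python
# Illustrative poka-yoke idea in code: make the wrong action impossible or
# immediately visible where it happens (hypothetical assembly step).

ALLOWED_TORQUES_NM = {"M6": 9.0, "M8": 22.0}

def record_torque(bolt_size: str, torque_nm: float) -> None:
    """Reject out-of-spec input immediately instead of passing it downstream."""
    if bolt_size not in ALLOWED_TORQUES_NM:
        raise ValueError(f"Unknown bolt size {bolt_size!r}: step cannot proceed")
    target = ALLOWED_TORQUES_NM[bolt_size]
    if abs(torque_nm - target) > 0.10 * target:
        raise ValueError(
            f"Torque {torque_nm} Nm outside +/-10% of target {target} Nm: rework now"
        )
    print(f"{bolt_size} torqued to {torque_nm} Nm: OK")

record_torque("M8", 21.5)    # passes
# record_torque("M8", 30.0)  # would stop the line immediately, not at final inspection
```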

4. QFD

Quality Function Deployment (QFD) is a multi-level deductive analysis method that converts customer or market requirements into design requirements, component characteristics, process requirements and production requirements. It embodies the guiding idea of being market-oriented and taking customer requirements as the sole basis for product development. Within the methodology of robust design, QFD plays an important role: it is the precursor step to robust design and can identify the key links, key parts and key processes of product development, thereby pointing out the direction and determining the object for stability optimization design. It ties all product development activities closely to customer requirements, which enhances the product's market competitiveness and helps ensure the success of product development.

According to the literature, using QFD can shorten the product development cycle by one third, cut costs by one half, greatly improve quality and double output. QFD is now very widely used in the civil and defense industries of the United States, not only for specific product development and quality improvement but also by major companies for quality policy deployment and project management objectives.

The ISO 9000 series of standards requires "customer focus" and that "customer requirements are determined and met"; as a method for analyzing customer needs, QFD will therefore be widely used in implementing the ISO 9000 series of standards.
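
At the heart of QFD is a relationship matrix (the "house of quality") that propagates weighted customer requirements onto engineering characteristics. A minimal sketch with hypothetical weights, using the conventional 9/3/1 relationship scale:

```python
import numpy as np

# Hypothetical miniature "house of quality": customer requirements weighted
# by importance, related to engineering characteristics on a 9/3/1 scale.

customer_importance = np.array([5, 3, 4])   # e.g., durability, weight, price
relationship = np.array([                   # rows: requirements
    [9, 3, 1],                              # cols: wall thickness,
    [1, 9, 3],                              #       material grade,
    [3, 3, 9],                              #       process route
])

technical_importance = customer_importance @ relationship
for name, score in zip(["wall thickness", "material grade", "process route"],
                       technical_importance):
    print(f"{name}: {score}")
# The highest-scoring characteristics become the focus of design and robust
# optimization, as the text describes.
```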

5. SOW

The Statement of Work (SOW) is one of the attachments to a contract and has the same legal effect as the body of the contract. It specifies in detail the work to be completed by both parties during the contract period, such as program demonstration, design, analysis, testing, quality control, reliability, maintainability, supportability, standardization and measurement assurance, as well as the items each party must provide to the other, such as interface control documents, hardware, computer software, technical reports, drawings and materials, and when and what kinds of reviews are to be conducted. The SOW thus further defines, in the form of a contractual document, the customer's requirements and the work the contractor must carry out to achieve them, putting product management and quality assurance on a legal basis. It is a powerful tool for Party A (the customer) to exercise quality control over Party B (the contractor). Detailed requirements for the SOW can be found in GJB 2742-96. The content of the SOW is an important input to quality function deployment.

6. WBS

The Work Breakdown Structure (WBS) is a hierarchical system formed by the top-down breakdown of the work to be completed in the development and production of a weapon and equipment project. This hierarchy is centered on the products to be developed and produced, and consists of product items (hardware and software), service items and data items.

The WBS is formed through systems engineering work. It displays and defines the work of the weapon and equipment project, and shows the relationships among the various work items and their relationship to the final product, fully embodying the integrity, hierarchy and interrelatedness of the system. GJB 2116-94 gives the typical development process and basic requirements of a WBS, and provides outline WBSs for seven types of weapon systems in its appendix.

Applying the WBS hierarchy in quality function deployment and system design, with reference to the outline WBSs given in GJB 2116-94, greatly facilitates thinking through product functions, structure and development work; it helps complete the QFD and system design work and also helps in preparing the Statement of Work (SOW). The WBS is an effective tool for systematic engineering management of weapon and equipment development and a guarantee of design integrity. Its principles and ideas are equally applicable to a wide range of large, complex, high-technology civilian products.
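
A WBS is naturally represented as a tree whose nodes are product, service and data items. A minimal sketch with a hypothetical project:

```python
# Minimal WBS sketch: a tree of work items (hypothetical project).

from dataclasses import dataclass, field

@dataclass
class WBSItem:
    code: str                      # hierarchical code, e.g. "1.2.3"
    name: str
    children: list["WBSItem"] = field(default_factory=list)

    def walk(self, indent: int = 0) -> None:
        print("  " * indent + f"{self.code}  {self.name}")
        for child in self.children:
            child.walk(indent + 1)

wbs = WBSItem("1", "Radar system", [
    WBSItem("1.1", "Antenna assembly", [
        WBSItem("1.1.1", "Reflector (hardware item)"),
        WBSItem("1.1.2", "Drive software (software item)"),
    ]),
    WBSItem("1.2", "System test (service item)"),
    WBSItem("1.3", "Technical manuals (data item)"),
])
wbs.walk()
```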

7. Concurrent Engineering

Concurrent Engineering is a systematic, integrated approach to the parallel design of products and their associated processes (including manufacturing and support processes). It requires developers to consider, from the outset, all elements of the entire product life cycle, from concept formation to end-of-life disposal, including quality, cost, schedule and customer needs. Concurrent engineering pays special attention to design at the source: at the very beginning of design, it tries to integrate all the information needed for product development and to bring together the experience and wisdom of experts from many disciplines.

In robust design, especially in quality function deployment and system design, the principles and guiding ideas of concurrent engineering must be applied.

8. Parameter Design

Parameter Design is carried out after system design. Its basic idea is to select the optimal combination of levels for all parameters in the system (covering raw materials, parts, components, etc.) so as to minimize the influence of external, internal and product-to-product interference and give the designed quality characteristics small fluctuation and good stability. In addition, at the parameter design stage, components of the lowest quality grade that still meet the operating environment requirements, and economical processing accuracies, are generally selected, so that both the quality and the cost of the product are improved.

Parameter design is a multi-factor optimization problem. Because it must consider the influence of the three kinds of interference on the fluctuation of the product's quality characteristic values and seek a design with good immunity to interference, parameter design is more complicated than ordinary orthogonal experimental design. Dr. Taguchi arranged the test plan using the direct product of an inner orthogonal array and an outer orthogonal array, and used the signal-to-noise ratio as the stability index of the product quality characteristics for statistical analysis.

Why can parameter design make the function of a system very stable even when component quality grades are not high and their fluctuations are large? Because parameter design exploits nonlinear effects. In general there is a nonlinear relationship between the product quality characteristic value y and the level of some component parameter x. Let the output characteristic be y, its target value m, the selected component parameter x, and its fluctuation range Δx (generally normally distributed). If the parameter x is set at level x1, the fluctuation Δx causes a fluctuation Δy1 in y. Parameter design moves x1 to x2, where the same fluctuation range Δx causes the fluctuation of y to shrink to Δy2; because the nonlinear effect is pronounced, Δy2 << Δy1. Thus, merely by choosing the parameter level sensibly, and with the fluctuation range of the parameter unchanged (meaning no increase in cost), the fluctuation of the quality characteristic y can be greatly reduced and the stability of the product improved. But a new contradiction then arises: the value of y moves from the target m to m', with deviation Δm = m' - m. How can y be kept stable without deviating from the target? One looks for an adjustment factor: a component parameter z that has a linear, easily tuned relationship with the output characteristic, y = a + bz; moving z from z1 to z2 compensates for the deviation Δm. If parameter design (the nonlinear effect) is not used and tolerance design is carried out directly in the traditional way, changing component x from a lower to a higher quality grade so that the fluctuation range of x shrinks from Δx to Δx1, then the fluctuation range of the quality characteristic y at level x1 becomes Δy3. Although Δy3 < Δy1, this is achieved at the expense of increased cost, and it may still be that Δy3 > Δy2: even after raising the component's quality grade, the fluctuation of y at x1 can remain wider than the fluctuation Δy2 obtained at level x2 with the lower-grade component. This shows the superiority of parameter design.
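
The effect is easy to demonstrate numerically. The sketch below uses an assumed saturating response curve (not Taguchi's original example): the same parameter scatter produces far less output scatter on the flat part of the curve than on the steep part.

```python
import numpy as np

# Numerical illustration of the nonlinear effect behind parameter design,
# with an assumed saturating response y = f(x).

rng = np.random.default_rng(0)

def f(x):
    return 10.0 / (1.0 + np.exp(-(x - 3.0)))   # assumed nonlinear response

dx = 0.3                                        # component scatter (std dev)
for x_level in (3.0, 6.0):   # x1 on the steep part, x2 on the flat part
    x = rng.normal(x_level, dx, 100_000)
    y = f(x)
    print(f"x = {x_level}: mean(y) = {y.mean():.3f}, std(y) = {y.std():.4f}")

# Moving from x1 to x2 shrinks std(y) greatly at no cost; the mean shift it
# introduces is then trimmed back to the target with a linear adjustment factor z.
```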

9. Divergent Thinking

Divergent thinking, also known as difference-seeking or radiant thinking, starts from one goal, proceeds along many different paths, and explores a variety of answers; it is the opposite of convergent thinking. Many psychologists consider divergent thinking the most important characteristic of creative thinking and one of the main indicators by which creativity is measured.

The American psychologist J. P. Guilford held that divergent thinking has three main characteristics: fluency, flexibility and originality.

Fluency means that intellectual activity is quick, sensitive and unobstructed, producing many ideas in a short time; it is the quantitative indicator of divergent thinking. Flexibility means that thinking moves in many directions, draws analogies and adapts, unconstrained by functional fixedness or mental set, so that it can produce out-of-the-ordinary ideas and propose novel notions. Originality refers to thinking that contains unusually new and different components, and it is therefore the best indicator of the essence of divergent thinking. Divergent thinking can be cultivated by considering the same problem from different angles, for example "many solutions to one problem", "many ways of writing one thing" and "many uses for one object".

10. ANOVA

Analysis of Variance (ANOVA) is one of the most commonly used data-processing methods in mathematical statistics and an effective tool for analyzing experimental data in industrial and agricultural production and in scientific research. It is also the mathematical basis of experiment design, parameter design and tolerance design. In a complex phenomenon there are often many factors that constrain and depend on one another. The purpose of ANOVA is to find, through data analysis, the factors that have a significant effect, the interactions between factors, and the optimal levels of the significant factors. ANOVA decomposes the total "variation" among comparable data according to each specified source of variation, using the sum of squared deviations as the measure of variation; decomposing the total sum of squares into partial sums of squares traceable to specified sources is the key idea of the method.
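
As a quick sketch, a one-way ANOVA on made-up data tests whether a factor (here, "supplier") has a significant effect on a quality characteristic; the data and the 0.05 threshold are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# One-way ANOVA sketch on hypothetical data: does the factor "supplier"
# have a significant effect on a quality characteristic?

supplier_a = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
supplier_b = np.array([10.4, 10.6, 10.3, 10.5, 10.7])
supplier_c = np.array([9.9, 10.0, 10.1, 9.8, 10.0])

f_stat, p_value = stats.f_oneway(supplier_a, supplier_b, supplier_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) indicates the supplier factor contributes
# significantly to the total variation, the decomposition the text describes.
```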

11. Regression Analysis

Regression Analysis is a mathematical tool for studying the correlation between a variable Y and several other variables X. On the basis of a set of experimental or observational data, it seeks the dependency between variables that is masked by randomness. Roughly speaking, it approximates a more complex correlation by a definite functional relationship, called the regression function, which in practical problems is known as the empirical formula. The main problem of regression analysis is how to use the observed values (samples) of the variables X and Y to make statistical inferences about the regression function, including estimating it and testing hypotheses related to it.
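
A minimal least-squares sketch on made-up data, estimating the empirical formula y = a + bx and testing the slope:

```python
import numpy as np
from scipy import stats

# Least-squares sketch on hypothetical data: estimate the regression
# function y = a + b*x and test whether the slope is significant.

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

result = stats.linregress(x, y)
print(f"empirical formula: y = {result.intercept:.2f} + {result.slope:.2f} x")
print(f"r^2 = {result.rvalue**2:.3f}, p-value for slope = {result.pvalue:.2e}")
```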

12. Customer Satisfaction Index

The ISO 9000 series of standards requires companies to measure and monitor information on customers' perception of whether the organization has met their requirements. Customer-related information may include surveys of customers and users, feedback on products, customer requests and complaints, contractual information, market demand, service delivery data and information on the competition.

There are various methods for evaluating customer satisfaction. In recent years the United States, Sweden and other countries have adopted the Customer Satisfaction Index (CSI) with good results. The CSI is a parameter used to evaluate the degree to which a product (hardware, software, service or processed material) meets customer needs, and is also a comprehensive index of product quality. Suppose the customer puts forward n requirements for the product, and the degree to which each requirement is satisfied is qi (i = 1, 2, ..., n); then the customer satisfaction index CSI is a function of the qi.

To determine the qi, market researchers should conduct random sampling surveys of customer groups, combined with customer complaints collected through after-sales service and statistical analysis of product quality problems. Evaluating a customer satisfaction index is quite complicated; enterprises, society and state organs can, as needed, commission neutral professional organizations to evaluate the CSI of products, services and industries in order to guide the direction of quality improvement.
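
The text leaves the function CSI(q1, ..., qn) unspecified; one common choice is a weighted average. A minimal sketch with hypothetical weights and scores:

```python
# Sketch of a weighted customer satisfaction index. The weighted average is
# one assumed form of CSI(q_1, ..., q_n); weights and scores are hypothetical.

requirements = {
    # requirement: (importance weight, satisfaction score q_i on a 0-100 scale)
    "reliability":   (0.40, 86),
    "delivery time": (0.25, 74),
    "after-sales":   (0.20, 90),
    "price":         (0.15, 68),
}

csi = sum(w * q for w, q in requirements.values())
print(f"CSI = {csi:.1f}")  # 0.40*86 + 0.25*74 + 0.20*90 + 0.15*68 = 81.1
```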

13. Uniform Design

Orthogonal experimental design selects test points with two properties: uniform dispersion and neat comparability. "Uniform dispersion" makes the test points representative; "neat comparability" facilitates the analysis of the test data. To guarantee neat comparability, an orthogonal design requires at least q^2 tests, where q is the number of levels; the only way to reduce the number of tests further is to give up the requirement of neat comparability. Uniform design is a test design method that considers only the uniform distribution of test points within the test range. Like orthogonal design, uniform design arranges tests by means of a set of carefully constructed tables, the uniform tables, and the results are analyzed by regression analysis.

Each uniform design table has a code name Un(q^s) or Un*(q^s), where U denotes uniform design, n the number of tests, q the number of levels of each factor, and s the number of columns in the table; the presence or absence of the asterisk "*" at the upper right of U distinguishes two different types of uniform table, and tables with "*" usually have better uniformity. A notable feature of uniform design is that, as the number of factor levels increases, the number of tests required grows far more slowly than in orthogonal design.

14. Pareto Chart

The full name of the arrangement diagram is "primary and secondary factor arrangement diagram", better known as the Pareto chart. It is a method for identifying the main factors among the many that influence product quality, and it can be used to determine the direction of quality improvement, because most real-world problems are usually caused by a small number of causes.

Applied to the field of management, the 80/20 principle of economics distinguishes the "vital few" from the "trivial many", which helps in grasping the key factors and solving the main problems. For the sake of intuitiveness this is presented as a graph, and that graph is the arrangement diagram (Pareto chart).
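
A minimal sketch of a Pareto chart on hypothetical defect counts: bars sorted by frequency plus a cumulative-percentage line to expose the vital few causes.

```python
import matplotlib.pyplot as plt

# Pareto chart sketch on hypothetical defect counts.
causes = {"scratches": 52, "misalignment": 31, "porosity": 9,
          "wrong label": 5, "other": 3}
items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
labels, counts = zip(*items)
total = sum(counts)
cumulative = [sum(counts[:i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                       # sorted defect counts
ax1.set_ylabel("defect count")
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o", color="tab:red")  # cumulative %
ax2.set_ylabel("cumulative %")
ax2.set_ylim(0, 110)
plt.title("Pareto chart of defect causes")
plt.tight_layout()
plt.show()
```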

15. Balanced Scorecard

Robert S. Kaplan, a professor at Harvard Business School, and David P. Norton, director of the Nolan Norton Institute and later founder and president of Renaissance Solutions, developed a new organizational performance management method, the "Balanced Scorecard", after a year-long study of 12 companies at the leading edge of performance measurement; it was published in the January-February 1992 issue of the Harvard Business Review.

The basic content of the balanced scorecard: the balanced scorecard breaks with traditional performance management methods that focus only on financial indicators, holding that the traditional financial accounting model can only measure what has already happened. In the industrial age, management methods centered on financial indicators were effective, but in the information society, traditional performance management methods are no longer comprehensive: organizations must also gain momentum by investing in customers, suppliers, employees, organizational processes, technology and innovation. On this understanding, the balanced scorecard approach holds that organizations should examine their performance from four perspectives: customers, internal business processes, learning and growth, and finance. The goals and evaluation indicators in the balanced scorecard are derived from the organization's strategy; the scorecard translates the organization's mission and strategy into tangible goals and measurable indicators.

16. Tolerance Design

Tolerance Design is carried out after system design is complete and the optimal level combination of the controllable factors has been determined by parameter design. At this point the quality grades of the components (parameters) are still low and their fluctuation ranges are wide.

The purpose of tolerance design is to determine appropriate tolerances for each parameter on the basis of the best conditions found in the parameter design stage. The basic idea is as follows: according to the size of the contribution (influence) of each parameter's fluctuation to the fluctuation of the product's quality characteristic, decide from an economic point of view whether the parameters with large influence should be given smaller tolerances (for example, by replacing lower-grade components with higher-grade ones). On the one hand, this further reduces the fluctuation of the quality characteristic, improves the stability of the product and reduces the quality loss; on the other hand, raising the quality grade of components increases the cost of the product. The tolerance design stage must therefore weigh the quality loss that remains after parameter design against the cost increase caused by tightening the tolerances of some components, and reach the best decision.

In short, tolerance design determines the most reasonable tolerance for each parameter so that the total loss (the sum of quality loss and cost) is minimized. Reducing the tolerances of some parameters requires an increase in cost, but it improves quality and reduces the loss caused by functional fluctuation; the task is therefore to find the tolerance design scheme that minimizes the total loss. The main tools used in tolerance design are the quality loss function and orthogonal polynomial regression.
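
The quality loss function referred to here is Taguchi's quadratic loss L(y) = k(y - m)^2. A minimal sketch with hypothetical numbers; the $50 loss at a 0.2 mm deviation is an assumption that fixes the constant k.

```python
# Sketch of the Taguchi quadratic quality loss function L(y) = k * (y - m)^2.
# Hypothetical calibration: a $50 loss when the characteristic drifts 0.2 mm
# off target fixes the constant k.

m = 10.0                 # target value of the quality characteristic (mm)
k = 50.0 / 0.2**2        # $ per mm^2, from the assumed $50 loss at |y - m| = 0.2

def loss(y: float) -> float:
    return k * (y - m) ** 2

for y in (10.0, 10.1, 10.2, 10.3):
    print(f"y = {y:.1f} mm -> loss ${loss(y):.2f}")
# Tightening a tolerance is worthwhile only when the reduction in this loss
# exceeds the extra component cost, the trade-off the text describes.
```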

Parameter design and tolerance design complement each other. According to the principles of parameter design, products at every level (systems, subsystems, equipment, components, parts), and especially the final products delivered to customers, should reduce quality fluctuation as much as possible and narrow tolerances in order to improve product quality and customer satisfaction. On the other hand, products at every level should be able to withstand all kinds of interference (including processing errors), that is, they should allow their subordinate parts relatively wide tolerance ranges. Tolerance design determines scientific and reasonable tolerances for the subordinate parts, as the basis for conformance control in the manufacturing stage. It should be pointed out, however, that this conformance control differs from that of traditional quality management in two respects:

First, the inspection process should record not only pass or fail but also the specific values of the quality characteristics; and not only should the nonconforming rate be reported, but scientific statistical methods should be established to report quality level data based on the theory of quality loss.

Second, on-line quality control methods suited to robust design (such as advanced SPC methods) should be adopted to monitor the fluctuation of product quality in real time and to feed back adjustments to process parameters. Continuous measures should be taken against the problems found, improving the process design and product quality so that, while the total loss decreases, the quality characteristics come ever closer to the target value; when conditions allow, the tolerance range should be narrowed further.

17. Design of Experiments (DOE)

Design of Experiments (DOE) is the mathematical theory and method of devising appropriate experimental schemes so that the experimental data can be analyzed statistically in an effective way. Experimental design follows three principles: randomization, local control and replication. Randomization aims to prevent subjective and objective systematic factors from biasing the experimental results; local control makes the conditions within each block as uniform as possible; replication reduces the influence of random error. Experimental designs can be divided roughly into four types: factorial design, block design, regression design and uniform design. Factorial design comes in a full implementation (full factorial) and a partial implementation (fractional factorial); the latter, carried out with orthogonal tables, is commonly called orthogonal experimental design.

Orthogonal experimental design uses a standardized set of tables, the orthogonal tables (orthogonal arrays), to arrange experiments rationally, and applies the principles of mathematical statistics to analyze the results scientifically; it is a scientific method for handling multi-factor experiments. Its advantage is that a small number of highly representative experiments can reveal the influence of each factor on the experimental index, determine the order of importance of the factors, and find better production conditions or the optimal parameter combination. Experience shows that orthogonal experimental design is an effective method for solving multi-factor optimization problems. The orthogonal table is constructed from Latin squares and orthogonal Latin squares using combinatorial mathematics; it is the basic tool of orthogonal design and has the properties of balanced dispersion and neat comparability.
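
A minimal sketch using the standard L4(2^3) orthogonal array: four runs cover three two-level factors with balanced comparisons. The yields are hypothetical.

```python
import numpy as np

# Sketch of an L4(2^3) orthogonal array: 4 runs, 3 two-level factors,
# every pair of columns containing each level combination equally often.

L4 = np.array([        # columns: factors A, B, C; levels coded 0/1
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
yields = np.array([71.0, 74.0, 78.0, 85.0])   # hypothetical response per run

for j, factor in enumerate("ABC"):
    mean_low = yields[L4[:, j] == 0].mean()
    mean_high = yields[L4[:, j] == 1].mean()
    print(f"factor {factor}: level-0 mean {mean_low:.1f}, "
          f"level-1 mean {mean_high:.1f}, effect {mean_high - mean_low:+.1f}")
# Picking the better level of each factor gives a candidate optimal combination.
```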

Experimental design methods have a history of more than 70 years. In the United States and Japan they are widely applied to improve product quality in agriculture, pharmaceuticals, chemicals, machinery, metallurgy, electronics, automobiles, aviation, aerospace and almost every other industrial field. The American automotive industry standard QS-9000, "Quality System Requirements", lists experimental design as one of the techniques that must be applied. The well-known parameter design was itself developed on the basis of orthogonal experimental design. Moreover, experimental design can not only find the optimal parameter combination; in many cases, by setting error columns and performing analysis of variance, it can also qualitatively determine the influence of error factors such as environmental conditions and processing errors on the desired product characteristics, so that improvement measures can be taken to eliminate those influences. For some simple engineering problems, directly applying experimental design can therefore yield a satisfactory, robust design. Experimental design can also be applied to improving enterprise management, adjusting product structure, and drawing up production plans with higher efficiency.

18. Benchmarking

Benchmarking is the continuous process of comparing and measuring the performance, quality and after-sales service of one's products against the strongest competitors or against companies recognized as industry leaders, and taking measures to improve.

Benchmarking involves two important aspects. On the one hand, making plans, constantly seeking out and establishing benchmarks at the domestic and international advanced level, and finding the gaps in one's own products through comparison and comprehensive analysis; on the other hand, constantly taking improvement measures in design, process and quality management, learning from others' strengths to remedy one's own weaknesses, and continuously raising the technology and quality of one's products so as to surpass all competitors and reach and maintain the world's advanced level. Benchmarking is not simple imitation but creative reference.

Through deep thought and research that draws together the strengths of many, technological innovation can be carried out and breakthroughs in product performance achieved; only by mastering breakthrough technology can one lead the world. To implement benchmarking well, the relevant databases should be established and kept up to date. Benchmarking has been widely used in the United States and has achieved notable results.

19. SPC

Statistical Process Control (SPC) was proposed by Dr. Walter Shewhart in the United States in the 1920s; since the Second World War it has gradually become the basic method of on-line quality control in the Western industrial countries. According to SPC theory, the fluctuation of product quality characteristics is the root cause of quality problems, and quality fluctuation obeys statistical regularities. Abnormalities can be detected with control charts, and their causes can be found and eliminated through the theory of statistical process control and diagnosis (SPCD). Commonly used Shewhart control charts include the mean-range (x̄-R), mean-standard deviation (x̄-s), median-range (x̃-R) and individual-moving range (x-Rs) charts, as well as the fraction nonconforming (p), number nonconforming (pn), number of defects (c) and defects per unit (u) charts. SPC is a powerful tool for keeping the production line stable and reducing quality fluctuation.
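
A minimal sketch of x̄-R control limits computed from subgroup data, using the standard Shewhart constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114); the measurements are simulated.

```python
import numpy as np

# Sketch of x-bar and R control limits from subgroups of size n = 5,
# using the standard Shewhart constants A2 = 0.577, D3 = 0, D4 = 2.114.

rng = np.random.default_rng(1)
subgroups = rng.normal(50.0, 2.0, size=(25, 5))   # 25 simulated subgroups

xbar = subgroups.mean(axis=1)
r = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbar.mean(), r.mean()

A2, D3, D4 = 0.577, 0.0, 2.114
print(f"x-bar chart: CL {xbar_bar:.2f}, "
      f"UCL {xbar_bar + A2 * r_bar:.2f}, LCL {xbar_bar - A2 * r_bar:.2f}")
print(f"R chart: CL {r_bar:.2f}, UCL {D4 * r_bar:.2f}, LCL {D3 * r_bar:.2f}")

out = np.flatnonzero((xbar > xbar_bar + A2 * r_bar) |
                     (xbar < xbar_bar - A2 * r_bar))
print("out-of-control subgroups:", out if out.size else "none")
```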

In recent years the SPC method has been developed further; for example, Boeing issued a new supplier quality assurance specification, D1-9000, to implement robust design ideas. Its main change is the requirement to establish an Advanced Quality System (AQS). The AQS system brings Taguchi's concept of quality loss into the quality management of the manufacturing stage and puts forward a set of manufacturing quality control requirements compatible with robust design.

The AQS system first requires the key characteristics of the product to be determined in the manufacturing stage, and process robust design to be carried out for these key characteristics and the parts involved, in order to establish a robust process. To monitor key characteristics in production, AQS supplements the conventional SPC control charts with three small-batch control charts (the individual moving-range chart, the target chart and the proportional chart), two improved control charts (the moving-average chart and the geometric moving-average chart), and several measures for increasing the monitoring sensitivity of control charts. According to the monitoring results and actual needs, process parameters or the process design are improved, and any man, machine, material or process factor causing quality fluctuation is corrected, so as to achieve continuous quality improvement.

20. Brainstorming

The brainstorming method, also known as the intellectual stimulation method, was put forward by Alex Osborn, the American founder of modern creativity studies, and is a method of collective training in creative ability. It brings all the members of a group together so that each member can express ideas without qualms, without fear of ridicule, criticism or accusation; it is a highly effective way for everyone to put forward a large number of new ideas and solve problems creatively. It has four basic principles:

First, criticism is ruled out; evaluation of the ideas put forward is deferred until later.

Second, encourage free-wheeling imagination: the wilder the idea, the more valuable it may prove.

Third, seek quantity of ideas: the more ideas are put forward, the more likely it is that valuable ones will be obtained.

Fourth, seek the combination and improvement of ideas. Besides contributing their own ideas, participants are asked to suggest how several ideas could be combined to produce yet another new idea, or to build on and improve the ideas put forward by others.
