DBA 1656 QUALITY MANAGEMENT
Anna University Chennai

UNIT I

INTRODUCTION TO QUALITY MANAGEMENT

INTRODUCTION

Today’s organizations gain prominence through products and services embedded with quality. Quality’s tentacles can be seen everywhere in the organization, and this pervasiveness is ultimately what is called “Total Quality Management”. The customer is at the forefront of the Total Quality Management process because the entire exercise is focused on customer satisfaction, ultimately ending in customer delight. Total Quality Management helps in shaping the future. This unit deals with Quality Management – Definitions; TQM framework, Benefits, Awareness and Obstacles; Quality – Vision, Mission and Policy Statements; Customer Focus – Customer Perception of Quality, Translating needs into requirements, Customer retention; Dimensions of Product and Service Quality; and Cost of quality.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:

• Have a feel of Quality
• Get a framework on TQM
• Get an introduction to vision
• Understand the indispensability of the customer
• Identify the dimensions of product and service quality

1.1 QUALITY MANAGEMENT – DEFINITIONS

Principles and Philosophies of Quality Management

Quality management is becoming increasingly important to the leadership and management of all organizations. It is necessary to identify Quality Management as a distinct discipline of management and lay down universally understood and accepted rules for this discipline.

The ISO technical committee working on the ISO 9000 standards has published a document detailing the quality management principles and application guidelines. The latest revision (version 2000) of the ISO 9000 standards is based on these principles.

Definition of Quality Management Principle

“A quality management principle is a comprehensive and fundamental rule/belief, for leading and operating an organization, aimed at continually improving performance over the long term by focusing on customers while addressing the needs of all other stakeholders”.

The eight principles are:

1. Customer-Focused Organization
2. Leadership
3. Involvement of People
4. Process Approach
5. System Approach to Management
6. Continual Improvement
7. Factual Approach to Decision-Making, and
8. Mutually Beneficial Supplier Relationships.

Now let us examine the principles in detail.

Principle 1 - Customer-Focused Organization

“Organizations depend on their customers and therefore should understand current and future customer needs, meet customer requirements and strive to exceed customer expectations”.

Steps in application of this principle are:

1. Understand customer needs and expectations for products, delivery, price, dependability, etc.

2. Ensure a balanced approach between the needs and expectations of customers and those of other stakeholders (owners, people, suppliers, local communities and society at large).

3. Communicate these needs and expectations throughout the organization.

4. Measure customer satisfaction and act on results, and

5. Manage customer relationships.

Principle 2 - Leadership

“Leaders establish unity of purpose and direction of the organization. They should create and maintain the internal environment in which people can become fully involved in achieving the organization’s objectives.”

Steps in application of this principle are:

1. Be proactive and lead by example.

2. Understand and respond to changes in the external environment.
3. Consider the needs of all stakeholders including customers, owners, people, suppliers, local communities and society at large.
4. Establish a clear vision of the organization’s future.
5. Establish shared values and ethical role models at all levels of the organization.
6. Build trust and eliminate fear.
7. Provide people with the required resources and freedom to act with responsibility and accountability.
8. Inspire, encourage and recognize people’s contributions.
9. Promote open and honest communication.
10. Educate, train and coach people.
11. Set challenging goals and targets, and
12. Implement a strategy to achieve these goals and targets.

Principle 3 - Involvement of People

“People at all levels are the essence of an organization and their full involvement enables their abilities to be used for the organization’s benefit”.

Steps in application of this principle are:

1. Accept ownership and responsibility to solve problems.
2. Actively seek opportunities to make improvements, and enhance competencies, knowledge and experience.
3. Freely share knowledge and experience in teams.
4. Focus on the creation of value for customers.
5. Be innovative in furthering the organization’s objectives.
6. Improve the way of representing the organization to customers, local communities and society at large.
7. Help people derive satisfaction from their work, and
8. Make people enthusiastic and proud to be part of the organization.

Principle 4 - Process Approach

“A desired result is achieved more efficiently when related resources and activities are managed as a process.”

Steps in application of this principle are:

1. Define the process to achieve the desired result.
2. Identify and measure the inputs and outputs of the process.
3. Identify the interfaces of the process with the functions of the organization.
4. Evaluate possible risks, consequences and impacts of processes on customers, suppliers and other stakeholders of the process.
5. Establish clear responsibility, authority, and accountability for managing the process.

6. Identify internal and external customers, suppliers and other stakeholders of the process, and
7. When designing processes, consider process steps, activities, flows, control measures, training needs, equipment, methods, information, materials and other resources to achieve the desired result.

Principle 5 - System Approach to Management

“Identifying, understanding and managing a system of interrelated processes for a given objective improves the organization’s effectiveness and efficiency.”

Steps in application of this principle are:

1. Define the system by identifying or developing the processes that affect a given objective.
2. Structure the system to achieve the objective in the most efficient way.
3. Understand the interdependencies among the processes of the system.
4. Continually improve the system through measurement and evaluation, and
5. Estimate the resource requirements and establish resource constraints prior to action.

Principle 6 - Continual Improvement

“Continual improvement should be a permanent objective of the organization.”

Steps in application of this principle are:

1. Make continual improvement of products, processes and systems an objective for every individual in the organisation.
2. Apply the basic improvement concepts of incremental improvement and breakthrough improvement.
3. Use periodic assessments against established criteria of excellence to identify areas for potential improvement.
4. Continually improve the efficiency and effectiveness of all processes.
5. Promote prevention-based activities.
6. Provide every member of the organization with appropriate education and training on the methods and tools of continual improvement, such as the Plan-Do-Check-Act cycle, problem solving, process re-engineering, and process innovation.
7. Establish measures and goals to guide and track improvements, and
8. Recognize improvements.

Principle 7 - Factual Approach to Decision Making

“Effective decisions are based on the analysis of data and information.”

Steps in application of this principle are:

1. Take measurements and collect data and information relevant to the objective.
2. Ensure that the data and information are sufficiently accurate, reliable and accessible.
3. Analyze the data and information using valid methods.
4. Understand the value of appropriate statistical techniques, and
5. Make decisions and take action based on the results of logical analysis balanced with experience and intuition.

Principle 8 - Mutually Beneficial Supplier Relationships

“An organization and its suppliers are interdependent, and a mutually beneficial relationship enhances the ability of both to create value.”

Steps in application of this principle are:

1. Identify and select key suppliers.
2. Establish supplier relationships that balance short-term gains with long-term considerations for the organization and society at large.
3. Create clear and open communications.
4. Initiate joint development and improvement of products and processes.
5. Jointly establish a clear understanding of customers’ needs.
6. Share information and future plans, and
7. Recognize supplier improvements and achievements.

1.2 TQM FRAMEWORK, BENEFITS, AWARENESS AND OBSTACLES

FIGURE 1.1 Total Quality Management

For organizations to survive and grow in today’s challenging marketplace, they need true commitment to meeting customer needs through communication, planning and continuous process improvement activities. Creating this culture change can improve the products and services of your organization as well as employee attitudes and enthusiasm. All of these contribute to the ultimate goal of improved quality, productivity and customer satisfaction, which is an important competitive advantage in today’s marketplace.

Total Quality Management (TQM) is a management strategy aimed at embedding awareness of quality in all organizational processes. TQM has been widely used in manufacturing, education, government, and service industries, as well as NASA space and science programs. Total Quality provides an umbrella under which everyone in the organization can strive for and create customer satisfaction. TQ is a people-focused management system that aims at a continual increase in customer satisfaction at continually lower real costs.

Definition

TQM is composed of three paradigms:

• Total: Organization-wide
• Quality: With its usual definitions, with all its complexities (External Definition)

• Management: The system of managing, with steps like Plan, Organize, Control, Lead, Staff, etc.

As defined by the International Organization for Standardization (ISO):

“TQM is a management approach for an organization, centered on quality, based on the participation of all its members and aiming at long-term success through customer satisfaction, and benefits to all members of the organization and to the society.”

In Japan, TQM comprises four process steps, namely:

1. Kaizen – Focuses on continuous process improvement, to make processes visible, repeatable and measurable.

2. Atarimae Hinshitsu – The idea that things will work as they are supposed to (e.g., a pen will write).

3. Kansei – Examining the way the user applies the product leads to improvement in the product itself.

4. Miryokuteki Hinshitsu – The idea that things should have an aesthetic quality (e.g., a pen will write in a way that is pleasing to the writer).

TQM requires that the company maintain this quality standard in all aspects of its business. This requires ensuring that things are done right the first time and that defects and waste are eliminated from operations.

Origins

“Total Quality Control” was the key concept of Armand Feigenbaum’s 1951 book, Quality Control: Principles, Practice, and Administration, a book that was subsequently released in 1961 under the title Total Quality Control (ISBN 0-07-020353-9). Joseph Juran, Philip B. Crosby, and Kaoru Ishikawa also contributed to the body of knowledge now known as TQM.

The American Society for Quality says that the term Total Quality Management was first used by the U.S. Naval Air Systems Command “to describe its Japanese-style management approach to quality improvement.” This is consistent with the story that the United States Navy Personnel Research and Development Center began researching the use of statistical process control (SPC); the work of Juran, Crosby, and Ishikawa; and the philosophy of W. Edwards Deming to make performance improvements in 1984. This approach was first tested at the North Island Naval Aviation Depot.

In his 1994 paper, “The Making of TQM: History and Margins of the Hi(gh)-Story”, Xu claims that “Total Quality Control” is translated incorrectly from Japanese, since there is no difference between the words “control” and “management” in Japanese. William Golimski refers to Koji Kobayashi, former CEO of NEC, as the first person to use TQM, which he did during a speech when he received the Deming Prize in 1974.

TQM in Manufacturing

Quality assurance through statistical methods is a key component in a manufacturing organization, where TQM generally starts by sampling a random selection of the product. The sample can then be tested for things that matter most to the end users. The causes of any failures are isolated, secondary measures of the production process are designed, and then the causes of the failure are corrected. The statistical distributions of important measurements are tracked. When parts’ measures drift into a defined “error band”, the process is fixed. The error band is usually a tighter distribution than the “failure band”, so that the production process is fixed before failing parts can be produced.
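To make the band logic concrete, here is a minimal Python sketch; the tolerance values and the classify function are invented for illustration, not taken from any particular process:

```python
# Minimal sketch of the "error band inside failure band" idea.
# Parts outside the failure band are defective; measurements outside the
# tighter error band trigger a process fix before defectives are produced.
FAILURE_BAND = (9.90, 10.10)  # mm: parts outside this range are defective
ERROR_BAND = (9.95, 10.05)    # mm: tighter action limits for the process

def classify(measurement_mm: float) -> str:
    low, high = FAILURE_BAND
    if not low <= measurement_mm <= high:
        return "defective part: isolate and correct the cause"
    low, high = ERROR_BAND
    if not low <= measurement_mm <= high:
        return "drifting: fix the process before failures occur"
    return "in control"

for m in (10.00, 10.06, 10.12):
    print(m, "->", classify(m))
```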

It is important to record not just the measurement ranges, but what failures caused them to be chosen. In that way, cheaper fixes can be substituted later (say, when the product is redesigned) with no loss of quality. After TQM has been in use, it is very common for parts to be redesigned so that critical measurements either cease to exist, or become much wider.

It took people a while to develop tests to find emergent problems. One popular test is a “life test”, in which the sample product is operated until a part fails. Another popular test is called “shake and bake”, in which the product is mounted on a vibrator in an environmental oven, and operated at progressively more extreme vibration and temperatures until something fails. The failure is then isolated and engineers design an improvement.

A commonly discovered failure is for the product to disintegrate. If fasteners fail, the improvements might be to use measured-tension nutdrivers to ensure that screws don’t come off, or improved adhesives to ensure that parts remain glued.

If a gearbox wears out first, a typical engineering design improvement might be to substitute a brushless stepper motor for a DC motor with a gearbox. The improvement is that a stepper motor has no brushes or gears to wear out, so it lasts ten or more times longer. The stepper motor is more expensive than a DC motor, but cheaper than a DC motor combined with a gearbox. The electronics are radically different, but equally expensive. One disadvantage might be that a stepper motor can hum or whine, and usually needs noise-isolating mounts.

Often, a “TQMed” product is cheaper to produce because of efficiency and performance improvements and because there is no need to repair dead-on-arrival products, which makes for an immensely more desirable product.

TQM and contingency-based research

TQM has not been independent of its environment. In the context of management control systems (MCSs), Sim and Killough (1998) show that incentive pay enhanced the positive effects of TQM on customer and quality performance. Ittner and Larcker (1995) demonstrated that product-focused TQM was linked to timely problem-solving information and flexible revisions to reward systems. Chenhall (2003) summarizes the findings from contingency-based research concerning management control systems and TQM by noting that “TQM is associated with broadly based MCSs including timely, flexible, externally focused information; close interactions between advanced technologies and strategy; and non-financial performance measurement.”

TQM, just another Management fad?

Abrahamson (1996) argued that fashionable management discourse such as Quality Circles tends to follow a lifecycle in the form of a bell curve. Ponzi and Koenig (2002) showed that the same can be said about TQM, which peaked between 1992 and 1996, while rapidly losing popularity in terms of citations after these years. Dubois (2002) argued that the use of the term TQM in management discourse created a positive utility regardless of what managers meant by it (which showed a large variation), while in the late 1990s the usage of the term TQM in the implementation of reforms lost the positive utility attached to the mere fact of using the term, and sometimes associations with TQM even became negative. Nevertheless, management concepts such as TQM leave their traces, as their core ideas can be very valuable. For example, Dubois (2002) showed that the core ideas behind the two management fads Reengineering and TQM, without explicit usage of their names, can even work in a synergistic way.

Benefits of TQM

TQM as a slogan has been around since 1985. TQM provides a management system, using various combinations of tools that have been in existence for much longer. Each tool has its own use, with benefits that follow. TQM leads to a synergy of benefits.

• Through the application of TQM, senior management will empower all levels of management, including self-management at worker level, to manage quality systems.

• Outlined below are some advantages to be gained by a hotel from the use of TQM. These are split into the five key areas of TQM.

• Continuous Improvement. People wish to improve themselves and get a better lifestyle. If the desire for individual improvement is transferred to systems within the workplace, then these systems will improve.

• Management can, at times, be a restraint on innovation through relying on historical systems. This will result in “always do what you have always done and you will always get what you have always got”.

• A good chef will know how best to prepare and present food. If given the freedom to innovate, then the standard of food will improve.

• When mistakes are made by staff, it is rarely through a desire to make a mistake; the system used is at fault. With departments constantly striving for improvement, hotel systems will improve, leading to reduced internal costs and a better service for customers.

• Multifunctional Teams. Within the hotel, departments are customers and suppliers for each other. A waiter is the chef’s supplier, giving information on what has been ordered from the menu, including any special requests about how the food should be prepared (medium, well done, etc.). When the food is ready, the roles are reversed: the chef becomes the waiter’s supplier, providing the food.

• If the hotel’s organization is structured in such a way that people in different departments work with each other to solve problems as a team, traditional interdepartmental barriers will be removed. Interdepartmental communication on a day-to-day basis is essential for effective management.

• Multifunctional teamwork allows the problems and requirements of each department to be passed on at worker level, throughout the hotel. This will lead to a better understanding of how the hotel systems work, by all employees.

• Individuals will work with each other, identifying causes of problems rather than blaming each other for the results of a problem. This will remove the blame culture.

• Reduction in Variation. Systems are influenced by many elements which cause variation. For example, as new staff are employed, they will do their jobs in a different way to previous staff. This can lead to a change in the service provided by the hotel. If such influences can be minimized, then standards of service can be maintained.

• This can be achieved by documenting systems. Giving workers the opportunity to own their processes by letting them do this documentation boosts morale. Also, the documentation will accurately reflect what is actually done rather than what management thinks is done. Initially this will lead to guests receiving a known standard of service, which will result in repeat business.

• Once the standard is stabilized, changes made to improve the service can be measured and directly linked to the improvement made.

• Supplier Integration. By involving your suppliers directly with your staff, two-way communication can be established. Your chef can explain the standard of food required. Your supplier can explain the difficulties in supplying such food. Any problems that arise can be solved jointly, using your knowledge of hotel systems, the chef’s knowledge of food and the suppliers’ expertise in supply systems.

• This prevents waste, through returned goods for the supplier, and expense, through:

• Obtaining replacements from another supplier.

• A drop in service by not being able to provide what was intended.

• Education and Training. Through education, management and staff are given the tools to achieve all the above. Education provides for guided innovation from all levels. Training, which is a cost, shows a commitment by management to:

• Individual staff self-improvement, which is a motivator.

• TQM.

• Summary. Staff will collectively provide continual improvement of hotel systems. By working together, communication and departmental barriers will be broken down. The standard of service can be set, maintained and then improved. Suppliers will be working with, rather than for, the hotel. The standard of staff and management will improve through education. All of this follows from the adoption of a new attitude to work, with everyone embracing the ideas of TQM.

The primary, long-term benefits of TQM in the public sector include better services, reduced costs and satisfied customers. Progressive improvement in the management systems and the quality of services offered results in increasingly satisfied customers.

In addition, a number of other benefits are observable, including:

• Improved skills, morale and confidence among public service staff
• Enhanced relationships between governments and their constituents
• Increased government accountability and transparency
• Improved productivity and efficiency of public services
• A philosophy that improves business from top to bottom
• A focused, systematic and structured approach to enhancing customer satisfaction
• Process improvement methods that reduce or eliminate problems, i.e., nonconformance costs

• Tools and techniques for improvement - quality operating system
• Delivering what the customer wants in terms of service, product and the whole experience
• Intrinsic motivation and improved attitudes throughout the workforce
• A proactive, prevention-orientated workforce
• Enhanced communication
• Reduction in waste and re-work
• An increase in process ownership - employee involvement and empowerment
• Everyone from top to bottom educated
• Improved customer/supplier relationships (internally and externally)
• Market competitiveness
• A quality-based management system for ISO 9001:2000 certification

1.3 QUALITY – VISION, MISSION AND POLICY STATEMENTS

SAP: Vision

SAP’s vision for quality management is to consistently deliver high-quality solutions focused on improving customer satisfaction.

Mission

The mission of quality management at SAP is to:

• Research and develop new methods and standards

• Proactively communicate and share knowledge

• Apply the knowledge to enhance our products, processes, and services

• Continually monitor and improve our performance against set targets

• Strive for prevention of failure, defect reduction, and increased customer satisfaction

Policy

Quality is the basic requirement for the satisfaction of our customers and the resulting competitiveness and economic success of SAP. The Executive Board dedicates itself to implementing and monitoring the following global quality policy principles:

SAP strives to further intensify the close cooperation with its internal and external customers and partners, and the performance-oriented communication with its suppliers.

The continual improvement of our products, processes, and services combined with innovation is at the center of our endeavors. For this purpose, we strive to further optimize our organizational, operational, and technical processes. Quality management supports the business-oriented behavior of all parties involved.

Promoting employee satisfaction and quality awareness is a major managerial function throughout the entire company.

Commitment, professional competence, and personal responsibility are required from all SAP employees to achieve the goals based on the global quality policy principles. Employees know the input requirements to comply with quality in their area. Internal education is provided to help SAP employees fulfill their tasks.

The quality goals based on this policy are regularly defined, implemented, and monitored by the responsible parties within the framework of quality management at SAP.

Honeywell Technology Solutions Lab (HTSL)

HTSL Vision

Be the premier growth company delivering unsurpassed value to Honeywell customers by providing Innovative Total Solutions and Services, enhancing the safety, security, comfort, energy efficiency and productivity of the environments where they live, work and travel.

HTSL Mission

Maximize the value and impact on Honeywell businesses and customers by providing Technology, Product and Business Solutions and Services setting standards of world-class performance.

HTSL Quality Policy

To delight our customers by providing six sigma quality total solutions, demonstrating value and continuous improvement through competent and disciplined professionals.

K-Tron

Quality is the basis for the long-term profitability and growth of K-Tron. In the industries we serve, we strive to be every customer’s first-choice quality supplier.

Our quality policy is based on:

Customer Satisfaction

Our organization is focused on our customers. We are committed to satisfying the needs and expectations of our customers and other interested parties, including their economic, social and environmental concerns.

Continual Improvements

We are committed to implementing continual improvements to every K-Tron process, product, service and quality management system.

Quality Involvement

Top management reviews our quality policies and objectives to ensure their continual suitability and the framework of the quality management system. We believe that they complement our mission, vision and strategy. The quality policies, objectives, resource needs and effectiveness of the quality management system are communicated and understood within the organization.

Employee awareness, development, involvement and self-responsibility are essential to the effectiveness of our quality management system. All employees are challenged to “do it right the first time.”

Adherence to our quality management system is a permanent commitment of all employees, suppliers and partners of K-Tron. We obligate ourselves to maintain a quality management system in accordance with ISO 9001:2000.

DoD: Vision and Mission

Vision

• Lead Defense Acquisition to meet DoD’s needs with excellence every time

Mission

• Identify and develop best acquisition policies and practices to promote flexibility and take advantage of the global marketplace

• Integrate policy creation, training, and communication to quickly and effectively deliver new policies and practices to the community

• Provide timely and sound acquisition advice to Federal leadership and DoD personnel as the DoD Acquisition Ombudsman

• Assist the acquisition community to obtain the best quality weapons, equipment and services for war fighters

• Lead the DoD acquisition, technology, and logistics community in recruiting, retaining, and training: the right workforce with the right skills in the right place at the right time with the right pay

• Leverage the use of technology to provide the best possible tools to the acquisition workforce

• Continuously assess the results of our efforts and make improvements

1.4 CUSTOMER FOCUS – CUSTOMER PERCEPTION OF QUALITY, TRANSLATING NEEDS INTO REQUIREMENTS, CUSTOMER RETENTION

Customer perception of Quality

There are three key elements of quality: customer, process and employee. Everything we do to remain a world-class quality company focuses on these three essential elements.

…the Customer

Delighting Customers

Customers are the center of GE’s universe: they define quality. They expect performance, reliability, competitive prices, on-time delivery, service, clear and correct transaction processing and more. In every attribute that influences customer perception, we know that just being good is not enough. Delighting our customers is a necessity, because if we don’t do it, someone else will!

...the Process

Outside-In Thinking

Quality requires us to look at our business from the customer’s perspective, not ours. In other words, we must look at our processes from the outside in. By understanding the transaction lifecycle from the customer’s needs and processes, we can discover what they are seeing and feeling. With this knowledge, we can identify areas where we can add significant value or improvement from their perspective.

FIGURE 1.2

...the Employee

Leadership Commitment

People create results. Involving all employees is essential to GE’s quality approach. GE is committed to providing opportunities and incentives for employees to focus their talents and energies on satisfying customers. All GE employees are trained in the strategy, statistical tools and techniques of Six Sigma quality. Training courses are offered at various levels:

• Quality Overview Seminars: basic Six Sigma awareness.

• Team Training: basic tool introduction to equip employees to participate on Six Sigma teams.

• Master Black Belt, Black Belt and Green Belt Training: in-depth quality training that includes high-level statistical tools, basic quality control tools, Change Acceleration Process and Flow technology tools.

• Design for Six Sigma (DFSS) Training: prepares teams for the use of statistical tools to design it right the first time.

Quality is the responsibility of every employee. Every employee must be involved, motivated and knowledgeable if we are to succeed.

Our Customers Feel the Variance, Not the Mean

Often, our inside-out view of the business is based on average or mean-based measures of our recent past. Customers don’t judge us on averages; they feel the variance in each transaction, each product we ship. Six Sigma focuses first on reducing process variation and then on improving the process capability.

Customers value consistent, predictable business processes that deliver world-class levels of quality. This is what Six Sigma strives to produce.
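The point is easy to demonstrate numerically. In the minimal Python sketch below, with invented delivery-time data, two processes share the same mean, but only the high-variance one produces the late deliveries that customers actually experience:

```python
import statistics

# Two delivery processes with the same mean (5 days) but different variation.
consistent = [5, 5, 4, 6, 5, 5, 4, 6, 5, 5]
erratic = [1, 9, 2, 8, 5, 5, 1, 9, 2, 8]

for name, days in (("consistent", consistent), ("erratic", erratic)):
    mean = statistics.mean(days)
    spread = statistics.stdev(days)
    late = sum(1 for d in days if d > 7)  # deliveries the customer feels as late
    print(f"{name}: mean={mean:.1f} days, stdev={spread:.1f}, late={late}")

# Both processes report the same average, but only the erratic one
# generates late deliveries: customers feel the variance, not the mean.
```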

GE’s success with Six Sigma has exceeded our most optimistic predictions. Across the company, GE associates embrace Six Sigma’s customer-focused, data-driven philosophy and apply it to everything we do. We are building on these successes by sharing best practices across all of our businesses, putting the full power of GE behind our quest for better, faster customer solutions.

Defining Relationships

Why define Relationships between Lists?

Relationships between lists indicate how the two lists are related to each other. They are generally used to prioritize one list based upon the priorities of another list. Relationships can be defined by answering a particular question for each cell in a matrix. For example, the relationships between Customer Requirements and Design Measures might be defined by asking “To what degree does this Measure predict the Customer’s satisfaction with this Requirement?” By asking this same question consistently for each Measure and Requirement combination, a set of relationships will be defined in the matrix which will help to determine which Measures are most important to control in order to achieve a desired level of customer satisfaction. Another question which can be asked in order to define relationships is “What percent of this Requirement is handled by this Design Measure?” The relationships defined using this question would result in the highest priority being assigned to the Measures which control most of the functionality. These may not be the same as the Measures defined in order to predict customer satisfaction. Given these examples, you can see that it is critical that a team understands what question they are trying to answer before they start defining relationships. It is also critical that they use the same question consistently. By doing so, the team will be able to prioritize the Output List accurately.

How can Relationships be defined?

Relationships are defined within the Matrix Window of QFD/CAPTURE. There are two different methods of defining the relationships. The Spreadsheet View provides a familiar format for entering the relationship values. The Tree View provides a snapshot of the relationship values for each input row, and makes it easier to assess each relationship’s value relative to the other relationships. The Spreadsheet View presents the lists being related in a spreadsheet format. The entries in the Input Lists form the row headings and the entries in the Output Lists form the column headings. The cells within the spreadsheet contain the relationships between the Input and Output entries. The Tree View presents all of the Output List Entries which are related to the currently selected Input List Entry using a tree structure. This allows the team to consider the relative strength of the relationships. It is equivalent to defining an entire row of relationships across a matrix.

What scales should we use?

The scales used to define the relationships can have a significant impact on the prioritization of the Output List Entries. The main consideration is the tradeoff between the number of levels in a scale, the speed of relationship definition, and the relative accuracy of the resulting prioritization. In general, the more levels in the scale, the more accurate the relative prioritization. The different values of the scale allow the team to indicate the levels of relationships that the Output List Entries have with the Input List Entries. For example, the team may choose to use relationships with values of 1 through 10. Using this scale, a value of 6 would indicate that one Output List Entry is twice as important to the satisfaction of an Input List Entry as an Output List Entry with a value of 3. Relationship definition usually goes much faster if the team limits their choices of relationship values to just a few. Standard QFD practice usually supports the values 1, 3, and 9. This standard QFD scale accentuates the strong relationships (value of 9). Output List Entries with several strong relationships to Input List Entries will tend to be given a higher level of priority than Output List Entries related to many Input List Entries with either moderate or weak values. Other common scales include 1, 2, 3 and 1, 3, 5. With greater frequency, teams are defining relationships using advanced methods such as the Analytic Hierarchy Process to establish scales with an infinite number of levels. The resulting relationship values usually represent the percent contribution of each Output List Entry to the selected Input List Entry. QFD/CAPTURE supports all of these different scales. Each time a new matrix is created, the user is given the opportunity to specify which of the standard scales will be used for the relationships. You may define your own unique scales as well. The software will also allow the team to define relationship values as real numbers that represent percentages.
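As an illustration of how a 1, 3, 9 scale drives the prioritization of the Output List, here is a minimal Python sketch; the requirements, weights and measures are invented for the example, and QFD/CAPTURE itself is not involved:

```python
# Prioritize design measures from customer-requirement weights and a
# 1-3-9 relationship matrix (1 = weak, 3 = moderate, 9 = strong, 0 = none).
requirements = {"easy to clean": 5, "quiet operation": 3, "long battery life": 4}

relationships = {
    "easy to clean":     {"seam count": 9, "noise (dB)": 0, "runtime (h)": 0},
    "quiet operation":   {"seam count": 1, "noise (dB)": 9, "runtime (h)": 0},
    "long battery life": {"seam count": 0, "noise (dB)": 3, "runtime (h)": 9},
}

# A measure's priority is the sum of (requirement weight x relationship value)
# down its column, which is why a few strong (9) cells dominate many weak ones.
priorities: dict[str, int] = {}
for requirement, weight in requirements.items():
    for measure, value in relationships[requirement].items():
        priorities[measure] = priorities.get(measure, 0) + weight * value

for measure, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(measure, score)
```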

Identifying Tradeoffs

Why evaluate Tradeoffs?

The tradeoffs, located in the “roof” of the House of Quality, indicate the synergistic or detrimental impacts of changes in the Design Measures. They are used to identify critical compromises in the design. Since these compromises are likely to be encountered sooner or later, they may as well be examined as part of the QFD effort so that any required design changes are as inexpensive as possible.

How should we evaluate Tradeoffs?

As with other matrices, the team should agree upon the question that they will ask in order to define the relationships of this matrix. A common question used is “If we improve our performance against this Measure, what is the impact on this other Measure?” The team will determine if improving performance on one Measure helps or hurts the product’s performance against another Measure. Generally, positive and negative values are used to indicate the positive or negative impact. The Tradeoffs Scale provided by QFD/CAPTURE can be used as a scale for relationships defined within this matrix. QFD/CAPTURE allows a team to capture the tradeoffs it identifies. A matrix is created to capture the tradeoff information. One list forms both the input and the output of the matrix.

How should we document Actions?

If a trade-off is identified, there is usually some action which is required in order to reduce the impact or work around the potential compromise. These actions can be documented in several ways. One approach is to create a document within QFD/CAPTURE and record each action as a paragraph within the document. Another approach would be to define a list of actions. This would give the team the opportunity to relate actions with the Customer Requirements, Design Measures, or any other list defined in the project. This approach would support prioritization of the actions based upon their effect on the satisfaction of the related Input List.

1.5 DIMENSIONS OF PRODUCT AND SERVICE QUALITY

When it comes to measuring the quality of your services, it helps to understand the concepts of product and service dimensions. Users may want a keyboard that is durable and flexible for use on wireless carts. Customers may want a service desk assistant who is empathetic and resourceful when they report issues. Quality is multidimensional. Product and service quality comprise a number of dimensions, which determine how customer requirements are achieved. Therefore, it is essential that you consider every dimension that may be important to your customers.

Product quality has two dimensions:

• Physical dimension - A product’s physical dimension measures the tangible product itself and includes such things as length, weight, and temperature.

• Performance dimension - A product’s performance dimension measures how well a product works and includes such things as speed and capacity.

While performance dimensions are more difficult to measure and obtain than physical dimensions, the effort will provide more insight into how the product satisfies the customer.

Like product quality, service quality has several dimensions:

• Responsiveness - Responsiveness refers to the reaction time of the service.

• Assurance - Assurance refers to the level of certainty a customer has regarding the quality of the service provided.

• Tangibles - Tangibles refer to a service’s look or feel.

• Empathy - Empathy is when a service employee shows that she understands and sympathizes with the customer’s situation. The greater the level of this understanding, the better. Some situations require more empathy than others.

• Reliability - Reliability refers to the dependability of the service providers and their ability to keep their promises.

The quality of products and services can be measured by their dimensions. Evaluating all dimensions of a product or service helps to determine how well it stacks up against customer requirements.

Quality Framework

Garvin proposes eight critical dimensions or categories of quality that can serve as a framework for strategic analysis: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality.

1. Performance

Performance refers to a product’s primary operating characteristics. For an automobile, performance would include traits like acceleration, handling, cruising speed, and comfort. Because this dimension of quality involves measurable attributes, brands can usually be ranked objectively on individual aspects of performance. Overall performance rankings, however, are more difficult to develop, especially when they involve benefits that not every customer needs.

2. Features

Features are usually the secondary aspects of performance, the “bells and whistles” of products and services, those characteristics that supplement their basic functioning. The line separating primary performance characteristics from secondary features is often difficult to draw. What is crucial is that features involve objective and measurable attributes; objective individual needs, not prejudices, affect their translation into quality differences.

3. Reliability

This dimension reflects the probability of a product malfunctioning or failing within a specified time period. Among the most common measures of reliability are the mean time to first failure, the mean time between failures, and the failure rate per unit time. Because these measures require a product to be in use for a specified period, they are more relevant to durable goods than to products or services that are consumed instantly.
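For example, a minimal sketch of computing these reliability measures from observed failure data (all figures invented):

```python
# Hours to first failure observed for ten units of a durable product.
first_failure_hours = [820, 1100, 950, 1300, 700, 1500, 900, 1250, 1050, 980]
mttf = sum(first_failure_hours) / len(first_failure_hours)  # mean time to first failure

# For repairable units: total operating hours divided by failures observed.
total_operating_hours = 52_000
failures_observed = 40
mtbf = total_operating_hours / failures_observed            # mean time between failures
failure_rate = failures_observed / total_operating_hours    # failures per unit time

print(f"MTTF = {mttf:.0f} h, MTBF = {mtbf:.0f} h, rate = {failure_rate:.5f} failures/h")
```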

4. Conformance

Conformance is the degree to which a product’s design and operating characteristics meet established standards. The two most common measures of failure in conformance are defect rates in the factory and, once a product is in the hands of the customer, the incidence of service calls. These measures neglect other deviations from standard, like misspelled labels or shoddy construction, that do not lead to service or repair.

5. Durability

A measure of product life, durability has both economic and technical dimensions. Technically, durability can be defined as the amount of use one gets from a product before it deteriorates. Alternatively, it may be defined as the amount of use one gets from a product before it breaks down and replacement is preferable to continued repair.

6. Serviceability

Serviceability is the speed, courtesy, competence, and ease of repair. Consumers are concerned not only about a product breaking down but also about the time before service is restored, the timeliness with which service appointments are kept, the nature of dealings with service personnel, and the frequency with which service calls or repairs fail to correct outstanding problems. In those cases where problems are not immediately resolved and complaints are filed, a company’s complaint-handling procedures are also likely to affect customers’ ultimate evaluation of product and service quality.

7. Aesthetics

Aesthetics is a subjective dimension of quality. How a product looks, feels, sounds, tastes, or smells is a matter of personal judgment and a reflection of individual preference. On this dimension of quality, it may be difficult to please everyone.

8. Perceived Quality

Consumers do not always have complete information about a product’s or service’s attributes; indirect measures may be their only basis for comparing brands. A product’s durability, for example, can seldom be observed directly; it must usually be inferred from various tangible and intangible aspects of the product. In such circumstances, images, advertising, and brand names - inferences about quality rather than the reality itself - can be critical.

1.6 COST OF QUALITY

Whether called the price of nonconformance (Philip Crosby) or the cost of poor quality (Joseph Juran), the term “cost of quality” refers to the costs associated with providing a poor-quality product or service.

Why is it important?

Quality processes cannot be justified simply because “everyone else is doing them” - but return on quality (ROQ) has dramatic impacts as companies mature.

Research shows that the costs of poor quality can range from 15% to 40% of business costs (e.g., rework, returns or complaints, reduced service levels, lost revenue). Most businesses do not know what their quality costs are because they do not keep reliable statistics. Finding and correcting mistakes consumes an inordinately large portion of resources. Typically, the cost to eliminate a failure in the customer phase is five times greater than it is at the development or manufacturing phase. Effective quality management decreases production costs because the sooner an error is found and corrected, the less costly it will be.

When to use it?

Cost of quality comprises four parts:

• External Failure Cost: cost associated with defects found after the customer receives the product or service. Examples: processing customer complaints, customer returns, warranty claims, product recalls.

• Internal Failure Cost: cost associated with defects found before the customer receives the product or service. Examples: scrap, rework, re-inspection, re-testing, material review, material downgrades.

• Inspection (Appraisal) Cost: cost incurred to determine the degree of conformance to quality requirements (measuring, evaluating or auditing). Examples: inspection, testing, process or service audits, calibration of measuring and test equipment.

• Prevention Cost: cost incurred to prevent poor quality, keeping failure and appraisal costs to a minimum. Examples: new product review, quality planning, supplier surveys, process reviews, quality improvement teams, education and training.

How to use it?

Gather some basic information about the number of failures in the system; apply some basic assumptions to that data in order to quantify it; chart the data based on the four elements listed above and study it; allocate resources to combat the weak spots; and repeat this study on a regular basis to evaluate your performance.
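A minimal Python sketch of that procedure, with invented cost figures standing in for the gathered data:

```python
# Quantified cost-of-quality data, grouped by the four elements listed above.
costs = {
    "prevention":       {"quality planning": 12_000, "training": 18_000},
    "appraisal":        {"inspection": 25_000, "calibration": 5_000},
    "internal failure": {"scrap": 40_000, "rework": 30_000},
    "external failure": {"warranty claims": 60_000, "returns": 20_000},
}

total = sum(sum(items.values()) for items in costs.values())
print(f"total cost of quality: {total:,}")
for category, items in costs.items():
    subtotal = sum(items.values())
    print(f"  {category:<16} {subtotal:>7,} ({100 * subtotal / total:4.1f}%)")
# The largest shares mark the weak spots; here the failure categories dominate,
# which is the usual argument for shifting spend toward prevention.
```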

The cost of quality has another version too.

1. Like all things, there is a price to pay for quality. This total cost can be split into two fundamental areas:

a. Nonconformance. This area covers the price paid by not having quality systems or a quality product. Examples of this are:

(1) Rework. Doing the job over again because it wasn’t right the first time.
(2) Scrap. Throwing away the results of your work because it is not up to the required standard.
(3) Waiting. Time wasted whilst waiting for other people.
(4) Down Time. Not being able to do your job because a machine is broken.

b. Conformance. Conformance is an aim of quality assurance. This aim is achieved at a price. Examples of this are:

(1) Documentation. Writing work instructions, technical instructions and producing paperwork.
(2) Training. On-the-job training, quality training, etc.
(3) Auditing. Internal, external and extrinsic.
(4) Planning. Prevention, doing the right thing first time, and poka-yoke.
(5) Inspection. Vehicles, equipment, buildings and people.

2. These two main areas can be split further as shown below:

FIGURE 1.3

This shows the four segments of quality costs:

a. Prevention. This area covers avoiding defects (poka-yoke), planning, preparation, training, preventative maintenance and evaluation.

b. Appraisal. This area covers finding defects by inspection (poka-yoke), audit, calibration, test and measurement.

c. Internal Failure. This area covers the costs that are borne by the organization itself, such as scrap, rework, redesign, modifications, corrective action, down time, concessions and overtime.

d. External Failure. This area covers the costs that are borne by the customer, such as equipment failure, down time, warranty, administrative cost in dealing with failure and the loss of goodwill.

3. Whilst aiming to reduce failure through appraisal and prevention, it must be clear that these activities also have costs, as shown below:

FIGURE 1.4

4. The graph shows that there is a minimum total quality cost, which is a combination of prevention, appraisal and failure. Reducing any of these reduces the total. The key to minimum cost is striking the correct balance between the three.

5. Clearly, prevention reduces both appraisal and failure costs; however, eventually the cost of prevention itself starts to increase the total cost, so it must be controlled and set at an effective level.

6. The next graph shows that when Total Quality is initially introduced into an organisation, there are huge savings that can be made:

FIGURE 1.5

7. However, when Total Quality is introduced into a well-organised system that already uses inspection as a major standard-setter, the benefits are not so dramatic. The main benefits to be achieved are within management; the fat of the organisation can then be cut.

FIGURE 1.6

8. The graph below shows the four stages of Total Quality acceptance/implementation and what happens theoretically to the four segments of the cost of quality:

FIGURE 1.7

9. The minimum total cost is shown below as being achieved at 98% perfection. This percentage is also known as best practice: beyond it, the cost of achieving an improvement outweighs the benefits of that improvement.

FIGURE 1.8
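The shape of this argument can be checked with a toy numerical model; in the Python sketch below, both cost functions are invented purely for illustration, with failure cost falling and prevention/appraisal cost rising as perfection is approached:

```python
# Toy cost-of-quality model: q is the fraction of output made right (0..1).
def prevention_appraisal(q: float) -> float:
    return 5 / (1.001 - q)      # rises steeply as perfection is approached

def failure(q: float) -> float:
    return 2000 * (1 - q)       # falls as fewer defects escape

best_q, best_cost = min(
    ((q / 1000, prevention_appraisal(q / 1000) + failure(q / 1000))
     for q in range(1000)),
    key=lambda pair: pair[1],
)
print(f"minimum total cost {best_cost:.0f} at about {100 * best_q:.0f}% perfection")
```

With these made-up curves the minimum lands around 95% rather than 98%; the exact best-practice point depends entirely on the real cost functions, but the existence of an interior minimum is the point the graph makes.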

SUMMARY

The eight principles, which act as a base for defining quality management, are presented in detail. Total Quality Management is presented as a framework comprising three paradigms, each complementing the others. Process quality, the involvement of leaders and employees, and the culture of the organization all lead towards the establishment of TQM in an organization. The benefits are many, and they are emphasized for the benefit of readers using the example of a hotel. Vision, mission and policy statements on quality are presented using examples from SAP, Honeywell Technology Solutions Lab and the Department of Defense, USA, to give an idea. The indispensability of the customer and the focus required on the customer are presented in detail. The quality dimensions of products and services are deliberated in detail. The cost of quality, its importance and its application areas are elaborated for the benefit of the learners.

REVIEW QUESTIONS

1. Enumerate the various principles of quality management and explain their usefulness to managers.

2. “The TQM framework is dynamic and is fast changing” - critically examine.

3. Describe the benefits of TQM in streamlining the operations of an organization.

4. Discuss the importance of vision, mission and policy statements on quality in organizations. Describe the process of their evolution.

5. Explain the impact on quality due to non-retention of customers.

6. Explain the various cost elements in achieving quality in an organization. Demonstrate how they interact with each other.

UNIT II

PRINCIPLES AND PHILOSOPHIES OF QUALITY MANAGEMENT

INTRODUCTION

Different schools of thought on management dominate the minds of industrialists and practitioners. The Western school of thought accelerates the decision-making process, but is comparatively slower at the stages of implementation. On the other side, Japanese management spends far more time arriving at a particular decision, because of which implementation is faster. Many quality management principles and philosophies have emerged from Japanese soil. The notable contributions and the profiles of the contributors are presented in this part. This unit deals with overviews of the contributions of Walter Shewhart, Deming, Juran, Crosby, Masaaki Imai, Feigenbaum, Ishikawa, Taguchi and Shingeo, the concepts of the quality circle, the Japanese 5S principles, and the 8D methodology.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:

• Have an understanding of Western and Japanese thinking on quality
• Appreciate the evolution of the various dominant techniques in quality
• Understand the process of evolution of the various techniques
• Know the profiles of the contributors and their particular contributions

2.1 OVERVIEW OF THE CONTRIBUTIONS OF WALTER SHEWHART

Walter A. Shewhart

Walter Andrew Shewhart (pronounced like “Shoe-heart”; March 18, 1891 - March 11, 1967) was an American physicist, engineer and statistician, sometimes known as the father of statistical quality control.

W. Edwards Deming said of him:

As a statistician, he was, like so many of the rest of us, self-taught, on a good background of physics and mathematics.

Born in New Canton, Illinois to Anton and Esta Barney Shewhart, he attended the University of Illinois before being awarded his doctorate in physics from the University of California, Berkeley in 1917.

Work on industrial quality

Bell Telephone’s engineers had been working to improve the reliability of their transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. When Dr. Shewhart joined the Western Electric Company Inspection Engineering Department at the Hawthorne Works in 1918, industrial quality was limited to inspecting finished products and removing defective items. That all changed on May 16, 1924. Dr. Shewhart’s boss, George D. Edwards, recalled: “Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control.” Shewhart’s work pointed out the importance of reducing variation in a manufacturing process and the understanding that continual process adjustment in reaction to nonconformance actually increased variation and degraded quality.

Shewhart framed the problem in terms of assignable-cause and chance-cause variation and introduced the control chart as a tool for distinguishing between the two. Shewhart stressed that bringing a production process into a state of statistical control, where there is only chance-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.
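
The economics Shewhart describes can be made concrete with a short sketch. The code below is an illustration only, not Shewhart's own derivation: it computes rough three-sigma limits for subgroup means (in the spirit of an X-bar chart) from the spread of the means themselves rather than from the classical range-based constants, and all data and names are invented.

    import statistics

    def xbar_control_limits(subgroups):
        """Rough 3-sigma limits for subgroup means (X-bar chart spirit)."""
        means = [statistics.mean(g) for g in subgroups]
        grand_mean = statistics.mean(means)
        # Simplification: use the spread of the subgroup means directly,
        # rather than the classical range-based (A2 constant) estimate.
        sigma = statistics.stdev(means)
        return grand_mean - 3 * sigma, grand_mean + 3 * sigma

    # Invented measurements: five subgroups of four items each.
    data = [[10.1, 9.9, 10.0, 10.2], [9.8, 10.0, 10.1, 9.9],
            [10.0, 10.3, 9.9, 10.1], [9.7, 10.0, 10.2, 10.0],
            [10.1, 9.8, 10.0, 10.4]]

    lcl, ucl = xbar_control_limits(data)
    for i, g in enumerate(data, 1):
        m = statistics.mean(g)
        verdict = "chance cause" if lcl <= m <= ucl else "assignable cause?"
        print(f"subgroup {i}: mean = {m:.3f} -> {verdict}")

A mean inside the limits is left alone as chance-cause variation; a mean outside them flags a possible assignable cause worth investigating, which is exactly the economic distinction drawn above.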

Shewhart worked to advance the thinking at Bell Telephone Laboratories from their foundation in 1925 until his retirement in 1956, publishing a series of papers in the Bell System Technical Journal.

His work was summarised in his book Economic Control of Quality of Manufactured Product (1931). Shewhart's charts were adopted by the American Society for Testing and Materials (ASTM) in 1933 and advocated to improve production during World War II in American War Standards Z1.1-1941, Z1.2-1941 and Z1.3-1942.

Later work

From the late 1930s onwards, Shewhart's interests expanded from industrial quality to wider concerns in science and statistical inference. The title of his second book, Statistical Method from the Viewpoint of Quality Control (1939), asks the audacious question: What can statistical practice, and science in general, learn from the experience of industrial quality control?

Shewhart's approach to statistics was radically different from that of many of his contemporaries. He possessed a strong operationalist outlook, largely absorbed from the writings of the pragmatist philosopher C. I. Lewis, and this influenced his statistical practice. In particular, he had read Lewis's Mind and the World Order many times. Though he lectured in England in 1932 under the sponsorship of Karl Pearson (another committed operationalist), his ideas attracted little enthusiasm within the English statistical tradition. The British standards nominally based on his work, in fact, diverge on serious philosophical and methodological issues from his practice.

His more conventional work led him to formulate the statistical idea of tolerance intervals and to propose his data presentation rules, which are listed below:

1. Data has no meaning apart from its context.
2. Data contains both signal and noise. To be able to extract information, one must separate the signal from the noise within the data.

Walter Shewhart visited India in 1947-48 under the sponsorship of P. C. Mahalanobis of the Indian Statistical Institute. Shewhart toured the country, held conferences and stimulated interest in statistical quality control among Indian industrialists. He died at Troy Hills, New Jersey in 1967.

Walter Shewhart's invention of statistical control charts pioneered industrial quality control methods.

Pioneer of Modern Quality Control

• Recognized the need to separate variation into assignable and unassignable causes (defined "in control").
• "Founder of the control chart" (e.g., the X-bar and R chart).
• Originator of the plan-do-check-act cycle.
• Perhaps the first to successfully integrate statistics, engineering and economics.
• Defined quality in terms of objective and subjective quality:
  o Objective quality: the quality of a thing independent of people.
  o Subjective quality: quality relative to how people perceive it (value).


Influence

In 1938 his work came to the attention of physicists W. Edwards Deming and Raymond T. Birge. The two had been deeply intrigued by the issue of measurement error in science and had published a landmark paper in Reviews of Modern Physics in 1934. On reading of Shewhart's insights, they wrote to the journal to wholly recast their approach in the terms that Shewhart advocated.

The encounter began a long collaboration between Shewhart and Deming that involved work on productivity during World War II and Deming's championing of Shewhart's ideas in Japan from 1950 onwards. Deming developed some of Shewhart's methodological proposals around scientific inference and named his synthesis the Shewhart cycle.

Achievements and honours

In his obituary for the American Statistical Association, Deming wrote of Shewhart:

As a man, he was gentle, genteel, never ruffled, never off his dignity. He knew disappointment and frustration, through failure of many writers in mathematical statistics to understand his point of view.

He was the founding editor of the Wiley Series in Mathematical Statistics, a role that he maintained for twenty years, always championing freedom of speech and confident enough to publish views at variance with his own.

His honours included:

• Founding member, fellow and president of the Institute of Mathematical Statistics;
• Founding member, first honorary member and first Shewhart Medalist of the American Society for Quality Control;
• Fellow and president of the American Statistical Association;
• Fellow of the International Statistical Institute;
• Honorary fellow of the Royal Statistical Society;
• Holley Medal of the American Society of Mechanical Engineers;
• Honorary Doctor of Science, Indian Statistical Institute, Calcutta.

Shewhart's fundamental postulates on variation were:

1. All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past.
2. Constant systems of chance causes do exist in nature.
3. Assignable causes of variation may be found and eliminated.


Based upon evidence such as already presented, it appears feasible to set up criteria by which to determine when assignable causes of variation in quality have been eliminated, so that the product may then be considered to be controlled within limits. This state of control appears to be, in general, a kind of limit to which we may expect to go economically in finding and removing causes of variability without changing a major portion of the manufacturing process as, for example, would be involved in the substitution of new materials or designs.

The definition of random in terms of a physical operation is notoriously without effect on the mathematical operations of statistical theory because, so far as these mathematical operations are concerned, random is purely and simply an undefined term. The formal and abstract mathematical theory has an independent and sometimes lonely existence of its own. But when an undefined mathematical term such as random is given a definite operational meaning in physical terms, it takes on empirical and practical significance. Every mathematical theorem involving this mathematically undefined concept can then be given the following predictive form: If you do so and so, then such and such will happen.

Every sentence, in order to have definite scientific meaning, must be practically or at least theoretically verifiable as either true or false upon the basis of experimental measurements either practically or theoretically obtainable by carrying out a definite and previously specified operation in the future. The meaning of such a sentence is the method of its verification.

In other words, the fact that the criterion we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works.

Presentation of data depends on the intended actions

Rule 1. Original data should be presented in a way that will preserve the evidence in the original data for all the predictions assumed to be useful.

Rule 2. Any summary of a distribution of numbers in terms of symmetric functions should not give an objective degree of belief in any one of the inferences or predictions to be made therefrom that would cause human action significantly different from what this action would be if the original distributions had been taken as evidence.

The original notions of Total Quality Management and continuous improvement trace back to a former Bell Telephone employee named Walter Shewhart. One of W. Edwards Deming's teachers, he preached the importance of adapting management processes to create profitable situations for both businesses and consumers, promoting the utilization of his own creation: the SPC control chart.


Dr. Shewhart believed that lack of information greatly hampered the efforts of control and management processes in a production environment. In order to aid a manager in making scientific, efficient, economical decisions, he developed statistical process control methods. Many of the modern ideas regarding quality owe their inspiration to Dr. Shewhart.

He also developed the Shewhart cycle, a learning and improvement cycle combining creative management thinking with statistical analysis. This cycle contains four continuous steps: Plan, Do, Study and Act. These steps (commonly referred to as the PDSA cycle), Shewhart believed, ultimately lead to total quality improvement. The cycle draws its structure from the notion that constant evaluation of management practices, as well as the willingness of management to adopt and to disregard unsupported ideas, are keys to the evolution of a successful enterprise.

2.2 OVERVIEW OF THE CONTRIBUTIONS OF DEMING

Understanding the Deming Management Philosophy

FIGURE 2.1

W. Edwards Deming called it the Shewhart cycle, giving credit to its inventor, Walter A. Shewhart. The Japanese have always called it the Deming cycle in honor of the contributions Deming made to Japan's quality improvement efforts over many years. Some people simply call it the PDCA (plan, do, check, act) cycle. Regardless of its name, the idea is well known to process improvement engineers, quality professionals, quality improvement teams and others involved in continuous improvement efforts.

The model can be used for the ongoing improvement of almost anything, and it contains the following four continuous steps: Plan, Do, Study and Act. Students, facilitated by their teacher, should be able to complete the following steps using information from a classroom data center. Building staffs, facilitated by their principal or district quality facilitators, should be able to complete the steps of PDSA using information from a building data center. Similarly, district-level strategic planning committees can use the same process.

In the first step (PLAN), based on data, identify a problem worthy of study to effect improvement. Define the specific changes you want. Look at the data (numerical information) related to the current status. List a numerical measure for the future target.

In the second step (DO), the plan is carried out, preferably on a small scale. Identify the process owners who are on the team. Within the action plan and on the storyboard, explain why new steps are necessary. Collect and chart the baseline data. Form a hypothesis of possible causes that are related to the current performance results; a quality tool such as a fishbone diagram or an affinity diagram would be useful at this stage. Implement the strategy to bring about the change.

In the third step (STUDY), the effects of the plan are observed. Monitor the data. On the baseline data chart, continue with the next data points. Explain when and how data analysis with appropriate people takes place. Explain what is being learned through the improvement process. Identify trends, if any can be discerned. Conduct a gap analysis. Use comparisons and benchmark (best practice) data. If the data result is negative, undergo another cycle of PDSA.

In the last step (ACT), the results are studied to determine what was learned and what can be predicted. If a positive data result is obtained, standardize the process or strategy and keep the new process going. Repeat the cycle starting with PLAN to define a new change you want.
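
Because PDSA is an iterative loop, it can be caricatured in a few lines of code. The sketch below is purely illustrative: the baseline data, target and improvement strategy are invented, and in practice each stage involves human judgement rather than a function call.

    import statistics

    TARGET = 95.0                      # hypothetical goal set in PLAN
    baseline = [88, 90, 87, 91, 89]    # hypothetical baseline data

    def pdsa_cycle(data, strategy):
        before = statistics.mean(data)        # PLAN: current status
        trial = [strategy(x) for x in data]   # DO: small-scale change
        after = statistics.mean(trial)        # STUDY: observe the effects
        print(f"mean before = {before:.1f}, after = {after:.1f}, "
              f"target = {TARGET}")
        if after >= TARGET:                   # ACT: standardize or repeat
            print("ACT: standardize the new process and hold the gains")
        else:
            print("ACT: below target; plan another PDSA cycle")
        return trial

    # One cycle with an invented improvement strategy.
    improved = pdsa_cycle(baseline, strategy=lambda x: x + 4)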

William Edwards Deming was an American statistician, college professor, author, lecturer, and consultant. Deming is widely credited with improving production in the United States during World War II, although he is perhaps best known for his work in Japan. There, from 1950 onward, he taught top management how to improve design (and thus service), product quality, testing and sales (the latter through global markets). Deming made a significant contribution to Japan becoming renowned for producing innovative high-quality products. Deming is regarded as having had more impact upon Japanese manufacturing and business than any other individual not of Japanese heritage.

After World War II (1947), Deming was involved in early planning for the 1951 Japanese Census. He was asked by the Department of the Army to assist in this census. While he was there, his expertise in quality control techniques, combined with his involvement in Japanese society, led to his receiving an invitation from the Japanese Union of Scientists and Engineers (JUSE).

JUSE members had studied Shewhart's techniques, and as part of Japan's reconstruction efforts they sought an expert to teach statistical control. During June-August 1950, Deming trained hundreds of engineers, managers, and scholars in statistical process control (SPC) and concepts of quality. He also conducted at least one session for top management. Deming's message to Japan's chief executives: improving quality will reduce expenses while increasing productivity and market share. Perhaps the best known of these management lectures was delivered at the Mt. Hakone Conference Center in August of 1950.

A number of Japanese manufacturers applied his techniques widely, and experienced theretofore unheard-of levels of quality and productivity. The improved quality combined with the lowered cost created new international demand for Japanese products.

Deming declined to receive royalties from the transcripts of his 1950 lectures, so JUSE's board of directors established the Deming Prize (December 1950) to repay him for his friendship and kindness. The Deming Prize, especially the Deming Application Prize that is given to companies, has exerted an immeasurable influence, directly or indirectly, on the development of quality control and quality management in Japan.

In 1960, the Prime Minister of Japan (Nobusuke Kishi), acting on behalf of Emperor Hirohito, awarded Dr. Deming Japan's Order of the Sacred Treasure, Second Class. The citation on the medal recognizes Deming's contributions to Japan's industrial rebirth and its worldwide success.

The first section of the meritorious service record describes his work in Japan:

1. 1947: Member of the Rice Statistics Mission
2. 1950: Assistant to the Supreme Commander of the Allied Powers
3. Instructor in sample survey methods in government statistics

The second half of the record lists his service to private enterprise through the introduction of epochal ideas, such as quality control and market survey techniques.

Contributions of Deming

The recovery of Japan after World War II has many explanations: Japan was forbidden to be involved in military industries, so the Japanese concentrated on consumer products; powerful conglomerates of industry and banks (zaibatsus) poured money into selected companies; and the Japanese people consented to great sacrifices in order to support the recovery.

The Japanese themselves point to the American W. Edwards Deming as one factor in their success. During the war, Deming was one of many who helped apply statistical quality control methods, developed by Walter Shewhart at Bell Labs, to the industrial mobilization. After the war, Deming was disappointed by American industry's rejection of these methods. Deming visited Japan after the war as a representative of the US government, to help the Japanese set up a census. He met with Japanese engineers interested in applying Shewhart's methods. In 1950, the Japanese Union of Scientists and Engineers invited Deming to give a series of lectures on quality control, which were attended by top Japanese industrialists. Within months, they found an amazing increase in productivity, and statistical quality control took off in Japan. "The top people came to Deming with a desire to learn that bordered on obsession." The Japanese integrated the statistical methods into their companies, involving all the workers in the movement to improve quality.

American industry flourished in the postwar boom in the US, but found itself getting hints, and finally clear indications, of Japanese competition in the 1970s. Hal Sperlich, a Ford executive, visited a Japanese auto factory in the early seventies and was amazed to find that the factory had no area dedicated to repairing shoddy work; in fact the plant had no inspectors. "Sperlich left that factory somewhat shaken: In America, he thought, we have repair bins the size of football fields." William Ouchi wrote that when he began to study Japanese practices in 1973, there was little interest in the US in his findings. When his book, Theory Z, was published in 1981, interest had grown tremendously and the book was a best seller. However, even in 1981, a top officer in Motorola warned American manufacturers of computer chips that they were complacent and not paying enough attention to Japanese quality. In 1981, Ford engineers compared automatic transmissions, some built by Mazda for the Ford Escorts and some built by Ford. "The ones made in Japan were well liked by our customers; many of those from Ohio were not. Ours were more erratic; many shifted poorly through the gears, and customers said they didn't like the way they performed." The difference was due to the tighter tolerances in the Japanese-made transmissions.

NBC aired a documentary, 'If Japan Can, Why Can't We?' (1980). The documentary explained what Japan was doing and especially stressed the contributions of Deming. Donald Peterson, then president of Ford, was one of many CEOs motivated to call Deming. Deming said his phone rang off the hook.

Deming began with statistical quality control, but he recognized that success depended on involving everyone. His 14 points are a manifesto for worker involvement and worker pride. Peterson sent teams from Ford to visit Japanese companies: "Before those visits, many of the people at Ford believed that the Japanese were succeeding because they used highly sophisticated machinery. Others thought their industry was orchestrated by Japan's government. The value of our visits, however, lay in Ford people's discovery that the real secret was how the people worked together: how the Japanese companies organized their people into teams, trained their workers with the skills they needed, and gave them the power to do their jobs properly. Somehow or other, they had managed to hold on to a fundamental simplicity of human enterprise, while we built layers of bureaucracy."


In a return to using the brain, not just the brawn, of the worker, the Japanese methods, building on Deming, actually de-Taylorize work. "The classical Taylor model of scientific management, which favored the separation of mental from physical labor and the retention of all decision-making in the hands of management, is abandoned in favor of a cooperative team approach designed to harness the full mental capabilities and work experience of everyone involved in the process..."

While Deming's principles, as filtered through the Japanese methods, argue for reskilling work and reject Taylor's belief that workers should just do what they are told, Taylorism lives on; only now it is called McDonaldization. Taiichi Ohno developed the Toyota Production System between 1945 and 1970.

While the US had much to learn from Japanese methods, careful observers realized that the differences between Japanese and American societies were so great that not all ideas could be imported (some of the cooperation among Japanese companies would violate US antitrust laws), that the Japanese methods were not always what they seemed (for example, lifetime employment was limited to a minority), and that American companies, unheralded, were already using many of the new Japanese methods. Ouchi, in Theory Z, examined Japanese practices in their treatment of workers, distilled them to the central ideas which he called Theory Z, and discovered that the best examples of Theory Z management were American companies.

2.3 OVERVIEW OF THE CONTRIBUTIONS OF JURAN

Joseph Juran

Juran expressed his approach to quality in the form of the Quality Trilogy. Managing for quality involves three basic processes:

Quality Planning: This involves identifying the customer (both internal and external), determining their needs, and designing goods and services to meet those needs at the established quality and cost goals. Then design the process and transfer it to the operators.

Quality Control: Establish standards or critical elements of performance, identify measures and methods of measurement, compare actual performance to standard, and take action if necessary.

Quality Improvement: Identify appropriate improvement projects, organize the team, discover the causes and provide remedies, and finally develop mechanisms to control the new process and hold the gains.

The relationship among the three processes is shown in the Quality Trilogy figure below:


FIGURE 2.2

Errors made during the initial planning result in a higher ongoing cost, which Juran labeled chronic waste: at the beginning, the process stays within control limits; a quality improvement project is then initiated and succeeds in reducing the chronic waste.

Juran also created the concept of Cost of Quality. There are four elements comprising the cost of quality.

Prevention Costs: Initial design quality and actions during product creation (e.g., marketing research, establishing product specifications, determining consumer needs, training workers, vendor evaluation, quality audit, preventive maintenance, etc.)

Appraisal Costs: Inspection and testing of raw materials, work-in-progress and finished goods, procedures for testing, training manuals, laboratories.

External Costs: Returned merchandise, making warranty repairs or refunds, credibility loss, lawsuits.

Internal Costs: Scrap, rework, redesign, downtime, broken equipment, reduced yield, selling products at a discount, etc.

The graph in the following exhibit shows that the costs of conformance (appraisal and prevention) increase as the defect rate declines, while the costs of nonconformance (internal and external failures) decrease. The trade-off leads to an optimal conformance level.
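
The trade-off the exhibit describes can be made concrete with a small numerical model. The cost curves below are invented for illustration: the conformance cost rises steeply as the conformance level approaches perfection, the nonconformance cost falls, and the optimal level is simply the one minimizing their sum.

    # Illustrative cost-of-quality model; all numbers are hypothetical.
    # q is the conformance level, from 0.50 (poor) to 0.99 (near-perfect).

    def conformance_cost(q):      # prevention + appraisal: grows as q -> 1
        return 10.0 / (1.0 - q)

    def nonconformance_cost(q):   # internal + external failures: shrink
        return 2000.0 * (1.0 - q)

    levels = [0.50 + 0.01 * i for i in range(50)]
    total = {q: conformance_cost(q) + nonconformance_cost(q) for q in levels}
    best = min(total, key=total.get)
    print(f"optimal conformance level ~ {best:.2f} "
          f"(total cost ~ {total[best]:.0f})")

With these invented curves the optimum falls near 0.93: pushing conformance further costs more in prevention and appraisal than it saves in failures, which is the trade-off the exhibit summarizes.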


FIGURE 2.3 The Quality Trilogy

Quality Planning: Determine quality goals; implementation planning; resource planning; express goals in quality terms; create the quality plan.

Quality Control: Monitor performance; compare objectives with achievements; act to reduce the gap.

Quality Improvement: Reduce waste; enhance logistics; improve employee morale; improve profitability; satisfy customers.

Philosophy

• Management is largely responsible for quality
• Quality can only be improved through planning
• Plans and objectives must be specific and measurable
• Training is essential and starts at the top
• Three-step process of planning, control and action

The Quality Planning Roadmap

Step 1 : Identify who the customers are

Step 2 : Determine the needs of those customers

Step 3 : Translate those needs into our language (the language of the organization)

Step 4 : Develop a product that can respond to those needs


Step 5 : Optimize the product features so as to meet the company's needs as well as the customers' needs

Step 6 : Develop a process which is able to produce the product

Step 7 : Optimize the process

Step 8 : Prove that the process can produce the product under operating conditions

Step 9 : Transfer the process to operations

Ten Steps to Continuous Quality Improvement

Step 1 : Create awareness of the need and opportunity for quality improvement

Step 2 : Set goals for continuous improvement

Step 3 : Build an organization to achieve goals by establishing a quality council, identifying problems, selecting a project, appointing teams and choosing facilitators

Step 4 : Give everyone training

Step 5 : Carry out projects to solve problems

Step 6 : Report progress

Step 7 : Show recognition

Step 8 : Communicate results

Step 9 : Keep a record of successes.

Step 10 : Incorporate annual improvements into the company's regular systems and processes and thereby maintain momentum.

2.4 OVERVIEW OF THE CONTRIBUTIONS OF CROSBY

Philip Crosby

Crosby described quality as free and argued that zero defects was a desirable and achievable goal. He articulated his view of quality as the four absolutes of quality management:

1. Quality means conformance to requirements. Requirements need to be clearly specified so that everyone knows what is expected of them.

2. Quality comes from prevention, and prevention is a result of training, discipline, example, leadership, and more.

3. The quality performance standard is zero defects. Errors should not be tolerated.

4. The measurement of quality is the price of nonconformance.


In addition to the above, Crosby developed a quality management maturity grid in which he listed five stages of management's maturity with quality issues. These five are Uncertainty, Awakening, Enlightenment, Wisdom and Certainty. In the first stage, management fails to see quality as a tool; problems are handled by "firefighting" and are rarely resolved; there are no organized quality improvement activities. By the last stage, the company is convinced that quality is essential to the company's success; problems are generally prevented; and quality improvement activities are regular and continuing.

Five absolutes of quality management: Philip B. Crosby

Quality is defined as conformance to requirements, not as 'goodness' or 'elegance'.

There is no such thing as a quality problem.

It is always cheaper to do it right the first time.

The only performance measurement is the cost of quality.

The only performance standard is zero defects.

Crosby’s Quality Vaccine

• Integrity
• Dedication to communication and customer satisfaction
• Company-wide policies and operations which support the quality thrust

FIGURE 2.4

Fourteen Step Quality Programme: Philip B. Crosby

Step 1 Establish management commitment – it is seen as vital that the whole management team participates in the programme; a half-hearted effort will fail.

Step 2 Form quality improvement teams – the emphasis here is on multidisciplinary team effort. An initiative from the quality departments alone will not be successful. It is considered essential to build team work across arbitrary, and often artificial, organizational boundaries.

Step 3 Establish quality measurements – these must apply to every activity throughout the company. A way must be found to capture every aspect: design, manufacturing, delivery and so on. These measurements provide a platform for the next step.

Step 4 Evaluate the cost of quality – this evaluation must highlight, using the measures established in the previous step, where quality improvement will be profitable.

Step 5 Raise quality awareness – this is normally undertaken through the training of managers and supervisors, through communications such as videos and books, and by displays of posters etc.

Step 6 Take action to correct problems – this involves encouraging staff to identify and rectify defects, or pass them on to higher supervisory levels where they can be addressed.

Step 7 Zero defects planning – establish a committee or working group to develop ways to initiate and implement a zero defects programme.

Step 8 Train supervisors and managers – this step is focused on achieving understanding by all managers and supervisors of the steps in the quality improvement programme in order that they can explain it in turn.

Step 9 Hold a 'Zero Defects' day to establish the attitude and expectation within the company. Crosby sees this as being achieved in a celebratory atmosphere accompanied by badges, buttons and balloons.

Step 10 Encourage the setting of goals for improvement – goals are of course of no value unless they are related to appropriate time-scales for their achievement.

Step 11 Obstacle reporting – this is encouragement to employees to advise management of the factors which prevent them from achieving error-free work. This might cover defective or inadequate equipment, poor-quality components, etc.

Step 12 Recognition for contributors – Crosby considers that those who contribute to the programme should be rewarded through a formal, although non-monetary, reward scheme. Readers may be aware of the 'Gold Banana' award given by Foxboro for scientific achievement (Peters and Waterman, 1982).


Step 13 Establish quality councils – these are essentially forums composed of quality professionals and team leaders, allowing them to communicate and determine action plans for further quality improvement.

Step 14 Do it all over again – the message here is very simple: achievement of quality is an ongoing process. However far you have got, there is always further to go!

2.5 OVERVIEW OF THE CONTRIBUTIONS OF MASAAKI IMAI

Masaaki Imai, a quality management consultant, was born in Tokyo in 1930. In 1955, he received his bachelor's degree from the University of Tokyo, where he also did graduate work in international relations. In the 1950s he worked for five years in Washington, D.C. at the Japanese Productivity Center, where his principal duty was escorting groups of Japanese business people through major U.S. plants. In 1962, he founded Cambridge Corp., an international management and executive recruiting firm based in Tokyo. As a consultant, he assisted more than 200 foreign and joint-venture companies in Japan in fields including recruiting, executive development, personnel management and organizational studies. From 1976 to 1986, Imai served as president of the Japan Federation of Recruiting and Employment Agency Associations.

In 1986, Imai established the Kaizen Institute to help Western companies introduce kaizen concepts, systems and tools. That same year, he published his book on Japanese management, Kaizen: The Key to Japan's Competitive Success. This best-selling book has since been translated into 14 languages. Other books by Imai include 16 Ways To Avoid Saying No, Never Take Yes for an Answer and Gemba Kaizen, published in 1997. To date, the Kaizen Institute operates in over 22 countries and continues to act as an enabler for companies to accomplish their manufacturing, process, and service goals.

True total quality, according to Masaaki Imai, lies in recognising the importance of the common-sense approach of gemba (shop floor) kaizen to quality improvement, as against the technology-only approach to quality practised in the West.

The production system (batch production) employed by over 90% of all the companies in the world is one of the biggest obstacles to quality improvement. A conversion from a batch to a JIT (just-in-time)/lean production system should be the most urgent task for any manufacturing company today in order to survive in the next millennium.


2.6 OVERVIEW OF THE CONTRIBUTIONS OF FEIGENBAUM

Mitchell Jay Feigenbaum (born December 19, 1944, in Philadelphia, USA) is a mathematical physicist whose pioneering studies in chaos theory led to the discovery of the Feigenbaum constant.

The son of Polish and Ukrainian Jewish immigrants, Feigenbaum's education was not a happy one. Despite excelling in examinations, his early schooling at Tilden High School, Brooklyn, New York, and the City College of New York seemed unable to stimulate his appetite to learn. However, in 1964 he began his graduate studies at the Massachusetts Institute of Technology (MIT). Enrolling for graduate study in electrical engineering, he changed his area to physics. He completed his doctorate in 1970 for a thesis on dispersion relations, under the supervision of Professor Francis Low.

After short positions at Cornell University and the Virginia Polytechnic Institute, he was offered a long-term post at the Los Alamos National Laboratory in New Mexico to study turbulence in fluids. Although that group of researchers was ultimately unable to unravel the currently intractable theory of turbulent fluids, his research led him to study chaotic mappings.

Some mathematical mappings involving a single linear parameter exhibit apparently random behavior, known as chaos, when the parameter lies within certain ranges. As the parameter is increased towards this region, the mapping undergoes bifurcations at precise values of the parameter. At first there is one stable point, then a bifurcation to an oscillation between two values, then a bifurcation again to oscillate between four values, and so on. In 1975, Dr. Feigenbaum, using the small HP-65 computer he had been issued, discovered that the ratio of the differences between the values at which successive period-doubling bifurcations occur tends to a constant of around 4.6692... He was then able to provide a mathematical proof of that fact, and he then showed that the same behavior, with the same mathematical constant, would occur within a wide class of mathematical functions prior to the onset of chaos. For the first time, this universal result enabled mathematicians to take their first step towards unravelling the apparently intractable "random" behavior of chaotic systems. This "ratio of convergence" is now known as the Feigenbaum constant.

The logistic map is a prominent example of the mappings that Feigenbaum studied in his noted 1978 article Quantitative Universality for a Class of Nonlinear Transformations.
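
The cascade just described can be reproduced numerically on the logistic map x_{n+1} = r·x_n·(1 − x_n). A numerically friendly route is to locate the "superstable" parameter values, where x = 1/2 lies on the periodic cycle; the spacings between them shrink by the same ratio δ ≈ 4.669. The sketch below is a rough experiment, not Feigenbaum's renormalisation argument: the brackets come from a coarse scan of the bifurcation diagram, and with only a few cascade levels the ratios converge slowly.

    def iterate(r, x, n):
        # Apply the logistic map x -> r*x*(1-x) n times.
        for _ in range(n):
            x = r * x * (1.0 - x)
        return x

    def superstable_r(period, lo, hi, steps=50):
        # Bisect for the r with f_r^period(1/2) = 1/2, i.e. a superstable
        # cycle. The bracket [lo, hi] must isolate exactly one such root.
        g = lambda r: iterate(r, 0.5, period) - 0.5
        g_lo = g(lo)
        for _ in range(steps):
            mid = 0.5 * (lo + hi)
            g_mid = g(mid)
            if (g_mid > 0) == (g_lo > 0):
                lo, g_lo = mid, g_mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    s = [2.0,                               # period 1 (exact)
         superstable_r(2, 3.10, 3.30),      # period 2,  ~3.2361
         superstable_r(4, 3.45, 3.53),      # period 4,  ~3.4986
         superstable_r(8, 3.55, 3.565),     # period 8,  ~3.5546
         superstable_r(16, 3.560, 3.5695)]  # period 16, ~3.5667

    # Ratios of successive spacings slowly approach delta = 4.6692...
    for a, b, c in zip(s, s[1:], s[2:]):
        print(f"{(b - a) / (c - b):.3f}")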

During Dr. Feigenbaum's time at the Los Alamos Lab, he acquired a unique position which led to many scientists being sorry to see him leave. When anyone in any of the many fields of work going on at the Lab was stuck on a problem, it eventually became common practice to seek out Feigenbaum and go for a walk to discuss it. Dr. Feigenbaum frequently helped others to understand better the problem they were dealing with, and he often turned out to have read a paper that would help them; he was usually able to tell them the title, authors, and publication date straight off the top of his head. The amount of reading he was doing must have been formidable, and would have left many others without time to do any of their assigned research; yet his appetite for work was such that he continued to make a significant contribution to the work he was assigned to do. The people who found him helpful in this manner were working in a very wide range of different kinds of scientific work. Few would have stood a chance of understanding all these fields in enough depth to help out; "not my field" would have been the response of most, had they discussed these matters with one another instead.

Feigenbaum's other contributions include important new fractal methods in cartography, starting when he was hired by Hammond to develop techniques to allow computers to assist in drawing maps. The introduction to the Hammond Atlas (1992) states: "Using fractal geometry to describe natural forms such as coastlines, mathematical physicist Mitchell Feigenbaum developed software capable of reconfiguring coastlines, borders, and mountain ranges to fit a multitude of map scales and projections. Dr. Feigenbaum also created a new computerized type placement program which places thousands of map labels in minutes, a task which previously required days of tedious labor."

In 1983 he was awarded a MacArthur Fellowship, and in 1986 he was awarded the Wolf Prize in Physics. He has been Toyota Professor at Rockefeller University since 1986.

2.7 OVERVIEW OF THE CONTRIBUTIONS OF ISHIKAWA

ISHIKAWA DIAGRAM

An Ishikawa diagram, also known as a fishbone diagram (because of its shape) or a cause-and-effect diagram, shows the causes of a certain event. It was first used by Kaoru Ishikawa in the 1960s and is counted among the seven basic tools of quality management: the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.

A common use of the Ishikawa diagram is in product design, to identify desirable factors leading to an overall effect. Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai" or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking", with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.


FIGURE 2.5

A generic Ishikawa diagram showing general and more refined causes for an event.

People sometimes call Ishikawa diagrams "fishbone diagrams" because of their fish-like appearance. Most Ishikawa diagrams have a box at the right-hand side in which is written the effect that is to be examined. The main body of the diagram is a horizontal line from which stem the general causes, represented as "bones". These are drawn towards the left-hand corners of the paper, and they are each labeled with the causes to be investigated. Off each of the large bones there may be smaller bones highlighting more specific aspects of a certain cause. When the most probable causes have been identified, they are written in the box along with the original effect.

Definition: A graphic tool used to explore and display opinion about sources of variation in a process. (Also called a Cause-and-Effect or Fishbone diagram.)

Purpose: To arrive at a few key sources that contribute most significantly to the problem being examined. These sources are then targeted for improvement. The diagram also illustrates the relationships among the wide variety of possible contributors to the effect.

The figure below shows a simple Ishikawa diagram. Note that this tool is referred to by several different names: Ishikawa diagram, Cause-and-Effect diagram, Fishbone diagram, and Root Cause Analysis. The first name is after the inventor of the tool, Kaoru Ishikawa, who first used the technique in the 1960s.

The basic concept in the Cause-and-Effect diagram is that the name of a basic problem of interest is entered at the right of the diagram, at the end of the main "bone". The main possible causes of the problem (the effect) are drawn as bones off the main backbone. The "Four-M" categories are typically used as a starting point: "Materials", "Machines", "Manpower", and "Methods". Different names can be chosen to suit the problem at hand, or these general categories can be revised. The key is to have three to six main categories that encompass all possible influences. Brainstorming is typically done to add possible causes to the main "bones" and more specific causes to the "bones" on the main "bones". This subdivision into ever-increasing specificity continues as long as the problem areas can be further subdivided. The practical maximum depth of this tree is usually about four or five levels. When the fishbone is complete, one has a rather complete picture of all the possibilities about what could be the root cause for the designated problem.

FIGURE 2.6

The Cause-and-Effect diagram can be used by individuals or teams, but probably most effectively by a group. A typical utilization is the drawing of a diagram on a blackboard by a team leader, who first presents the main problem and asks for assistance from the group to determine the main causes, which are subsequently drawn on the board as the main bones of the diagram. The team assists by making suggestions and, eventually, the entire cause-and-effect diagram is filled out. Once the entire fishbone is complete, team discussion takes place to decide what the most likely root causes of the problem are. These causes are circled to indicate items that should be acted upon, and the use of the tool is complete.

The Ishikawa diagram, like most quality tools, is a visualization and knowledge-organization tool. Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate diagnosis of the problem. Several computer tools have been created to assist in creating Ishikawa diagrams. One created by the Japanese Union of Scientists and Engineers (JUSE) is rather rigid, with a limited number of bones; similar diagrams can also be produced using various commercial tools.


Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991) reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the beliefs about the various causes that contribute to the main problem. Based on the idea that the main problem has a total belief of one, each remaining bone has a belief assigned to it based on several factors; these include the history of problems of a given bone, events and their causal relationship to the bone, and the belief of the user of the tool about the likelihood that any particular bone is the cause of the problem.

How to Construct:

1. Place the main problem under investigation in a box on the right.

2. Have the team generate and clarify all the potential sources of variation.

3. Use an affinity diagram to sort the process variables into naturally related groups. The labels of these groups are the names for the major bones on the Ishikawa diagram.

4. Place the process variables on the appropriate bones of the Ishikawa diagram.

5. Combine each bone in turn, ensuring that the process variables are specific, measurable, and controllable. If they are not, branch or "explode" the process variables until the ends of the branches are specific, measurable, and controllable.
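
Since a fishbone is simply a tree rooted at the effect, the construction steps above can be mirrored in a tiny data structure. The sketch below is illustrative only; the effect, categories and causes are invented examples.

    # A fishbone as a nested structure: effect -> major bones -> causes.
    fishbone = {
        "effect": "High defect rate",   # the box at the right of the diagram
        "bones": {
            "Materials": ["Supplier variation", "Storage humidity"],
            "Machines":  ["Worn tooling", "Calibration drift"],
            "Manpower":  ["Training gaps"],
            "Methods":   ["Ambiguous work instructions"],
        },
    }

    def print_fishbone(diagram):
        # Render the tree as indented text, one major bone per line.
        print(f"EFFECT: {diagram['effect']}")
        for bone, causes in diagram["bones"].items():
            print(f"  {bone}")
            for cause in causes:
                print(f"    - {cause}")

    print_fishbone(fishbone)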

Tip:

Take care to identify causes rather than symptoms.

Post diagrams to stimulate thinking and get input from other staff.

Self-adhesive notes can be used to construct Ishikawa diagrams. Sources of variation can then be rearranged to reflect appropriate categories with minimal rework.

Ensure that the ideas placed on the Ishikawa diagram are process variables, not special causes, other problems, tampering, etc.

Review the quick fixes and rephrase them, if possible, so that they are process variables.

2.8 OVERVIEW OF THE CONTRIBUTIONS OF TAGUCHI

Taguchi Methods: Introduction

Dr. Genichi Taguchi has played an important role in popularising Design of Experiments (DOE). However, it would be wrong to think that the Taguchi methods are just another way of performing DOE. He has developed a complete philosophy and the associated methods for "quality engineering". His most important ideas are:

• A quality product is a product that causes a minimal loss (expressed in money!) to society during its entire life. The relation between this loss and the technical characteristics is expressed by the loss function.

• Quality must be built into products and processes. There has to be much more attention to off-line quality control in order to prevent problems from occurring in production.

• Different types of noise (variation within tolerance, external conditions, dissipation from neighbouring systems, …) have an influence on our system and lead to deviations from the optimal condition. To avoid the influence of these noises we need to develop robust products and processes. The robustness of a system is its ability to function optimally even under changing noise conditions.

FIGURE 2.7

FIGURE 2.8


Taguchi methods

FIGURE 2.9

Taguchi methods are statistical methods developed by Genichi Taguchi to improve the quality of manufactured goods and, more recently, applied to biotechnology, marketing and advertising. Taguchi methods are controversial among many conventional Western statisticians unfamiliar with the Taguchi methodology.

Taguchi's principal contributions to statistics are:

1. Taguchi loss-function;

2. The philosophy of off-line quality control; and

3. Innovations in the design of experiments.

Loss functions

Taguchi's reaction to the classical design-of-experiments methodology of R. A. Fisher was that it was perfectly adapted to seeking to improve the mean outcome of a process. As Fisher's work had been largely motivated by programmes to increase agricultural production, this was hardly surprising. However, Taguchi realised that in much industrial production there is a need to produce an outcome on target: for example, to machine a hole to a specified diameter or to manufacture a cell to produce a given voltage. He also realised, as had Walter A. Shewhart and others before him, that excessive variation lay at the root of poor manufactured quality, and that reacting to individual items inside and outside specification was counter-productive.

He therefore argued that quality engineering should start with an understanding of the cost of poor quality in various situations. In much conventional industrial engineering, the cost of poor quality is simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of nonconformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers. In the wider economy, the Coase theorem predicts that they prevent markets from operating efficiently. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons) and that, by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.

Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterised the region within specification limits as where we deny that losses exist. As we diverge from nominal, losses grow until the point where they are too great to deny and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them within statistics.


Taguchi specified three situations:

1. Larger the better (for example, agricultural yield);

2. Smaller the better (for example, carbon dioxide emissions); and

3. On-target, minimum-variation (for example, a mating part in an assembly).

The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function on the grounds that:

• It is the first symmetric term in the Taylor series expansion of any reasonable, real-life loss function, and so is a "first-order" approximation;

• Total loss is measured by the variance. As variance is additive, it is an attractive model of cost; and

• There was an established body of statistical theory around the use of the least-squares principle.

The squared-error loss function had been used by John von Neumann and Oskar Morgenstern in the 1930s.

Though much of this thinking is endorsed by statisticians and economists in general, Taguchi extended the argument to insist that industrial experiments seek to maximize an appropriate signal-to-noise ratio representing the magnitude of the mean of a process compared to its variation. Most statisticians believe Taguchi's signal-to-noise ratios to be effective over too narrow a range of applications, and they are generally deprecated.
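
For the on-target case, the quadratic loss is commonly written L(y) = k(y − T)², where T is the target and k converts squared deviation into money; averaged over production it becomes k[(μ − T)² + σ²], so both being off-target and having variation cost money. The sketch below evaluates this expected loss together with Taguchi's nominal-the-best signal-to-noise ratio, 10·log₁₀(ȳ²/s²); the measurements, target and k are invented.

    import math
    import statistics

    def expected_taguchi_loss(values, target, k):
        # k * ((mean - target)^2 + variance): the average of k*(y - T)^2.
        mu = statistics.mean(values)
        return k * ((mu - target) ** 2 + statistics.pvariance(values))

    def sn_nominal_the_best(values):
        # Taguchi's nominal-the-best signal-to-noise ratio, in decibels.
        mu = statistics.mean(values)
        return 10 * math.log10(mu ** 2 / statistics.variance(values))

    # Hypothetical shaft diameters (mm) against a 10.0 mm target;
    # k = 500 currency units per squared millimetre of deviation.
    y = [10.05, 9.97, 10.02, 9.99, 10.04]
    print(f"expected loss per item ~ {expected_taguchi_loss(y, 10.0, 500.0):.2f}")
    print(f"S/N (nominal-the-best) ~ {sn_nominal_the_best(y):.1f} dB")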

Off-line quality control

Taguchi realised that the best opportunity to eliminate variation is during the design of a product and its manufacturing process (Taguchi's rule for manufacturing). Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages:

1. System design;

2. Parameter design; and

3. Tolerance design.

System design

This is the design at the conceptual level involving creativity and innovation.

Parameter design

Once the concept is established, the nominal values of the various dimensions and design parameters need to be set: the detailed design phase of conventional engineering. In 1802, the philosopher William Paley had observed that the inverse-square law of gravitation was the only law that resulted in stable orbits if the planets were perturbed in their motion. Paley's understanding that engineering should aim at designs robust against variation led him to use the phenomenon of gravitation as an argument for the existence of God. William Sealy Gosset, in his work at the Guinness brewery, suggested as early as the beginning of the 20th century that the company might breed strains of barley that not only yielded and malted well but whose characteristics were robust against variation in the different soils and climates in which they were grown. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment and cumulative damage. This approach is often known as robust design or robustification.

Tolerance design

With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.

Design of experiments

Taguchi developed much of his thinking in isolation from the school of R. A. Fisher, only coming into direct contact in 1954. His framework for design of experiments is idiosyncratic and often flawed, but contains much that is of enormous value. He made a number of innovations.

Outer arrays

In his later work, R. A. Fisher started to consider the prospect of using design of experiments to understand variation on a wider inductive basis. Taguchi sought to understand the influence that parameters had on variation, not just on the mean. He contended, as had W. Edwards Deming in his discussion of analytic studies, that conventional sampling is inadequate here, as there is no way of obtaining a random sample of future conditions. In conventional design of experiments, variation between experimental replications is a nuisance that the experimenter would like to eliminate, whereas in Taguchi's thinking it is a central object of investigation.

Taguchi's innovation was to replicate each experiment by means of an outer array, itself an orthogonal array that seeks deliberately to emulate the sources of variation that a product would encounter in reality. This is an example of judgement sampling. Though statisticians following in the Shewhart-Deming tradition have embraced outer arrays, many academics are still skeptical. An alternative approach, proposed by Ellis R. Ott, is to use a chunk variable.

Management of interactions

Many of the orthogonal arrays that Taguchi has advocated are saturated, allowing no scope for estimation of interactions between control factors, or inner array factors. This is a continuing topic of controversy. However, by combining orthogonal arrays with an outer array consisting of noise factors, Taguchi's method provides complete information on interactions between control factors and noise factors. The strategy is that these are the interactions of most interest in creating a system that is least sensitive to noise factor variation.

• Followers of Taguchi argue that the designs offer rapid results and that control factor interactions can be eliminated by proper choice of quality characteristic (ideal function) and by transforming the data. Notwithstanding, a confirmation experiment offers protection against any residual interactions. In his later teachings, Taguchi emphasizes the need to use an ideal function that is related to the energy transformation in the system; this is an effective way to minimize control factor interactions.

• Western statisticians argue that interactions are part of the real world and that Taguchi's arrays have complicated alias structures that leave interactions difficult to disentangle. George Box, and others, have argued that a more effective and efficient approach is to use sequential assembly.

Analysis of experiments

Taguchi introduced many methods for analysing experimental results, including novel applications of the analysis of variance and minute analysis. Little of this work has been validated by Western statisticians.

Assessment

Genichi Taguchi has made seminal and valuable methodological innovations in statistics and engineering, within the Shewhart-Deming tradition. His emphasis on loss to society, his techniques for investigating variation in experiments, and his overall strategy of system, parameter and tolerance design have been massively influential in improving manufactured quality worldwide.

Cost of Quality

I. Assessing the Cost of Quality

The quality of a product is one of the most important factors that determine a company's sales and profit. Quality is measured in relation to the characteristics of the product that customers expect to find in it, so the quality level of a product is ultimately determined by the customers. The customers' expectations about a product's performance, reliability and attributes are translated into Critical-To-Quality (CTQ) characteristics and integrated into the product's design by the design engineers. While designing the products, the engineers must also take into account the capabilities of the resources (machines, people, materials…), i.e., their ability to produce products that meet the customers' expectations. They specify with exactitude the quality targets for every aspect of the products.

But quality comes with a cost. The definition of the cost of quality is contentious. Some authors define it as the cost of nonconformance, i.e., how much producing nonconforming products would cost a company. This is a one-sided approach, since it does not consider the cost incurred to prevent nonconformance, nor, above all in a competitive market, the cost of improving the quality targets.

For instance, in the case of an LCD (Liquid Crystal Display) manufacturer, ifthe market standard for a 15” LCD with a resolution of 1024x768 is 786,432 pixelsand a higher resolution requires more pixels, improving the quality of the 15” LCDs,pushing the company’s specifications beyond the market standards would require theengineering of LCDs with more pixels which would require extra cost.

The cost of quality is traditionally measured in terms of the cost of conformance and the cost of nonconformance, to which we will add the cost of innovation. The cost of conformance includes appraisal and preventive costs, while the cost of nonconformance includes the costs of internal and external defects.

Cost of conformance

Preventive Costs

The costs incurred by the company to prevent nonconformance. These include the costs of:

o Process capability assessment and improvement

o The planning of new quality initiatives (process changes, quality improvement projects, …)

o Employee training …

Appraisal Costs

The cost incurred while assessing, auditing and inspecting products and procedures to conform products and services to specifications. It is intended to detect quality-related failures. It includes:

o Cost of process audits

o Inspection of products received from suppliers

o Final inspection audit

o Design review

o Pre-release testing

Cost of nonconformance

The cost of nonconformance is in fact the cost of having to rework products and the loss of customers that results from selling poor-quality products.

Internal Failure

o Cost of reworking products that failed audit

o Cost of bad marketing


o Scrap

External Failure

o Cost of customer support

o Cost of shipping returned products

o Cost of reworking products returned from customers

o Cost of refunds

o Loss of customer goodwill

o Cost of discounts to recapture customers

In the short term, there is a positive correlation between quality improvement and the cost of conformance, and a negative correlation between quality improvement and the cost of nonconformance. In other words, an improvement in the quality of the products requires an increase in the cost of conformance that generates it. This is because an improvement in the quality level of a product might require extra investment in R&D, more spending on appraisal, more investment in failure prevention, and so on.

But a quality improvement will lead to a decrease in the cost of nonconformance, because fewer products will be returned by customers, so there will be less operating cost for customer support and less internal rework.

For instance, one of the CTQs (Critical-To-Quality characteristics) of an LCD (Liquid Crystal Display) is the number of pixels it contains. The brightness of each pixel is controlled by individual transistors that switch the backlights on and off. The manufacturing of LCDs is very complex and very expensive, and it is very hard to determine the number of dead pixels on an LCD before the end of the manufacturing process. So, in order to reduce the number of scrapped units, if the number of dead pixels is infinitesimal or the dead pixels are almost invisible, the manufacturer would consider the LCDs "good enough" to be sold. Otherwise, the cost of scrap or internal rework would be so prohibitive that it would jeopardize the cost of production. Improving the quality level of the LCDs to zero dead pixels would therefore increase the cost of conformance.

On the other hand, not improving the quality level of the LCDs will lead to an increase in the probability of having products returned by customers and of internal rework, therefore increasing the cost of nonconformance.

The following graph plots the relationship between quality improvement and the cost of conformance on one hand and the cost of nonconformance on the other hand.


FIGURE 2.9

If the manufacturer sets the quality level at Q2, the cost of conformance would be low (C1), but the cost of nonconformance would be high (C2), because the probability of customer dissatisfaction will be high and more products will be returned for rework, increasing the cost of rework, the cost of customer services, and shipping and handling.

The total cost of quality would be the sum of the cost of conformance and the cost of nonconformance; that cost would be C3 for a quality level of Q2.

C3 = C1 + C2.

FIGURE 2.10

Should the manufacturer decide that the quality level should be at Q1, the cost of conformance (C2) would be higher than the cost of nonconformance (C1), and the total cost of quality would again be at C3.


The total cost of quality is minimized only when the cost of conformance and the cost of nonconformance are equal.
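To make the U-curve idea concrete, here is a small illustrative sketch in Python. The cost curves and constants below are invented for the illustration (they are not taken from the figures); for these symmetric hypothetical curves, the total cost bottoms out exactly where the cost of conformance equals the cost of nonconformance.

    def cost_of_conformance(q):
        # Hypothetical form: rises steeply as the quality level q (0..1) approaches perfection.
        return 50 / (1.001 - q)

    def cost_of_nonconformance(q):
        # Hypothetical form: falls as the quality level q rises.
        return 50 / (0.001 + q)

    def total_cost(q):
        # C3 = C1 + C2, as in the text.
        return cost_of_conformance(q) + cost_of_nonconformance(q)

    levels = [i / 1000 for i in range(1, 1000)]
    best = min(levels, key=total_cost)
    print(best, round(cost_of_conformance(best), 1), round(cost_of_nonconformance(best), 1))
    # best is 0.5 here: at the minimum the two costs are equal (about 99.8 each).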

It is worth noting that, currently, the graph most frequently used to represent throughput yield in manufacturing is the normal curve. For a given target and specified limits, the normal curve helps estimate the volume of defects that should be expected. So while the normal curve estimates the volume of defects, the U curve estimates the cost incurred as a result of producing parts that do not match the target.

The following graph represents both the volume of expected conforming and nonconforming parts and the costs associated with them at every level.

FIGURE 2.11

II. Taguchi's Loss Function

In the now-traditional quality management approach, engineers integrate all the CTQs into the design of their new products, and they clearly specify the target for their production processes as they define the characteristics of the products to be sent to the customers. But because of unavoidable common causes of variation (variations that are inherent to the production process and that are hard to eliminate) and the high costs of conformance, they are obliged to allow some variation, or tolerance, around the target. Any product that falls within the specified tolerance is considered to meet the customers' expectations, and any product outside the specified limits is considered nonconforming.

But according to Taguchi, products that do not match the target, even if they are within the specified limits, do not operate as intended, and any deviation from the target, whether within the specified limits or not, generates financial loss to the customers, the company and society, a loss that grows with the deviation from the target.

Suppose that a design engineer specifies the length and diameter of a certain bolt that needs to fit a given part of a machine. Even if the customers do not notice it, any deviation from the specified target will cause the machine to wear out faster, causing the company financial loss in the form of repairs of products under warranty, or a loss of customers if the warranty has expired.


Taguchi constructed a loss function equation to determine how much society loses every time the parts produced do not match the specified target. The loss function gives the financial loss that occurs every time a CTQ of a product deviates from its target: it is the square of the deviation multiplied by a constant k, with k being the ratio of the cost of a defective product to the square of the tolerance:

L(y) = k * (y - T)^2, with k = ∆ / m^2

where ∆ = the cost of a defective product, T = the target, and m = T - LSL or m = USL - T (the distance from the target to the specification limit).

The loss function quantifies the deviation from the target and assigns a financial value to the deviation.

According to Taguchi, the cost of quality in relation to the deviation from the target is not linear, because the customers' frustration increases at a faster rate as more defects are found on a product. That is why the loss function is quadratic.

FIGURE 2.12

The graph that depicts the financial loss to society resulting from a deviation from the target resembles the total cost of quality U graph that we built earlier, but the premises that helped build them are not the same. While the total cost curve was built from the costs of conformance and nonconformance, Taguchi's loss function is primarily based on the deviation from the target and measures the loss from the perspective of the customers' expectations.

Example:

Suppose a machine manufacturer specifies the target for the diameter of a given bolt to be 6 inches, with upper and lower limits of 6.02 and 5.98 inches respectively. A bolt measuring 5.99 inches is inserted into its intended hole in a machine. Five months after the machine was sold, it breaks down as a result of loose parts. The cost of repair is estimated at $95. Find the loss to society incurred as a result of the part not matching its target.

Solution:

We must first determine the value of the constant k:

T = 6, USL = 6.02, so m = USL - T = 6.02 - 6 = 0.02 and m^2 = 0.0004
∆ = 95, so k = ∆ / m^2 = 95 / 0.0004 = 237,500

Therefore, for the bolt measuring y = 5.99:

L(5.99) = 237,500 * (5.99 - 6)^2 = 237,500 * 0.0001 = 23.75

Producing a bolt that does not match the target thus results in a financial loss to society of $23.75.
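A minimal sketch of the same calculation in Python (the function and argument names are ours, chosen for the example, not from the text):

    def taguchi_loss(y, target, m, cost_of_defective):
        # L(y) = k * (y - target)^2, with k = cost_of_defective / m^2,
        # m being the distance from the target to the specification limit.
        k = cost_of_defective / m ** 2
        return k * (y - target) ** 2

    # Bolt example: T = 6, USL = 6.02 so m = 0.02, repair cost = $95.
    print(round(taguchi_loss(y=5.99, target=6.0, m=0.02, cost_of_defective=95.0), 2))  # 23.75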

Taguchi Method: Variability Reduction

Since the deviation from the target is the source of financial loss to society, what needs to be done in order to prevent any deviation from the set target?

The first thought might be to reduce the specification range and improve the on-line quality control: bring the specified limits closer to the target and inspect more samples during the production process in order to find the defective products before they reach the customers. But this would not be a good option, since it would only address the symptoms and not the root causes of the problem. It would be an expensive alternative, because it would require more inspection, which would at best help detect nonconforming parts early enough to prevent them from reaching the customers.

The root of the problem is in fact the variation within the production process, i.e., the value of sigma, the standard deviation from the mean.

Let us illustrate this assertion with an example. Suppose that the length of a screw is a Critical-To-Quality (CTQ) characteristic and the target is determined to be 15", with an LCL of 14.96 and a UCL of 15.04. The following sample was taken for testing:


15.02   14.99   14.96   15.03   14.98   14.99   15.03   15.01   14.99

All the observed items in this sample fall within the control limits, even though not all of them match the target. The mean is 15 and the standard deviation is 0.023979. Should the manufacturer decide to improve the quality of the output by reducing the range of the control limits to 14.98 and 15.02, three of the items in the sample would have failed audit and would have to be reworked or discarded.

Let us suppose that the manufacturer decides instead to reduce the variability (the standard deviation) around the target and leave the control limits untouched. After process improvement, the following sample is taken:

15.01   15.00   14.99   15.01   14.99   14.99   15.00   15.01   15.00

The mean is still 15, but the standard deviation has been reduced to 0.00866, and all the observed items are closer to the target. Reducing the variability around the target has improved quality in the production process at a lower cost.
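The two samples can be checked quickly with Python's statistics module (sample standard deviation, as used in the text):

    from statistics import mean, stdev

    before = [15.02, 14.99, 14.96, 15.03, 14.98, 14.99, 15.03, 15.01, 14.99]
    after = [15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00]

    print(round(mean(before), 2), round(stdev(before), 6))  # 15.0 0.023979
    print(round(mean(after), 2), round(stdev(after), 6))    # 15.0 0.00866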

This is not to suggest that the tolerance around the target should never be reduced; addressing the tolerance limits should be done under specific conditions and only after the variability around the target has been reduced.


Since variability is a source of financial loss to producers, customers and society at large, it is necessary to determine what the sources of variation are so that actions can be taken to reduce them. According to Taguchi, these sources of variation, which he calls noise factors, can be reduced to three:

• The Inner Noise: deterioration due to time. Product wear, metal rust, fading colors, material shrinkage and product waning are among the inner noise factors.

• The Outer Noises: environmental effects on the products, such as heat, humidity, operating conditions or pressure. These factors have negative effects on products or processes. In the case of my notebook, at first the LCD would not display until it heated up, so humidity was the noise factor that was preventing it from operating properly. The manufacturer has no control over these factors.

• The Product Noise, or manufacturing imperfections: product noises are due to production malfunctions; they can come from bad materials, inexperienced operators or bad machine settings.

But if on-line quality control is not the appropriate way to reduce production variations, what needs to be done to prevent deviations from the target?

According to Taguchi, a pre-emptive approach must be taken to thwart variation in the production processes. That pre-emptive approach, which he calls off-line quality control, consists in creating a robust design, in other words designing products that are insensitive to the noise factors.

Concept Design

The production of a product starts with the concept design, which consists in choosing the product or service to be produced and defining its structural design and the production process that will be used to generate it. These choices are contingent upon, among other factors, the cost of production, the company's strategy, the current technology and the market demand. So the concept design will consist in:

• Determining the intended use of the product and its basic functions

• Determining the materials needed to produce the selected product

• Determining the production process needed to produce it

Parameter Design

The next step in the production process is the parameter design. After the design architecture has been selected, the producer will need to set the parameter design. The parameter design consists in selecting the best combination of control factors, the one that optimizes the quality level of the product by reducing the product's sensitivity to noise factors. Control factors are parameters over which the designer has control. When an engineer designs a computer, he has control over factors such as the CPU, system board, LCD, memory, LCD cables, etc. He determines what CPU best fits a motherboard, what memory stick and what wireless network card to use, and how to design the system board so that the parts fit in easily. The way he combines those factors will impact the quality level of the computer.

The producer wants to design products at the lowest possible cost and, at the same time, obtain the best quality result under current technology. To do so, the combination of the control factors must be optimal, while the effect of the noise factors must be so minimal that they have no negative impact on the functionality of the products. So the experiment that leads to the optimal result will require the identification of the noise factors, because they are part of the process and their effects need to be controlled.

Signal to Noise Ratio

One of the first steps the designer will take is to determine what the optimal quality level is. He will need to determine what the functional requirements are, assess the Critical-To-Quality characteristics of the product, and specify their targets. The determination of the CTQs and their targets depends, among other criteria, on the customer requirements, the cost of production and the current technology. The engineer is seeking to produce the optimal design: a product that is insensitive to noise factors.

The quality level of the CTQ characteristics of the product under optimal conditions depends on whether the response experiment is static or dynamic. The response experiment (or output of the experiment) is said to be dynamic when the product has a signal factor that steers the output. For instance, when I switch on the power button of my computer, I am sending a signal to the computer to load my operating system. It should power up and display within 5 seconds, and it should do so exactly the same way every time I switch it on. If, as in the case of my computer, it fails to display because of humidity, I conclude that the computer is sensitive to humidity and that humidity is a noise factor that negatively impacts the performance of my computer.

FIGURE 2.13

The response experiment is said to be static when the quality level of the CTQ characteristic is fixed. In that case, the optimization process will seek to determine the optimal combination of factors that enables the targeted value to be reached. This happens in the absence of a signal factor; the only input factors are the control factors and the noise factors. When we build a table, we determine all the CTQ targets and we want to produce a balanced table with all the parts matching the targets.

The optimal quality level of a product depends on the nature of the product itself. In some cases, the more of a CTQ characteristic is found on a product, the happier the customers are; in other cases, the less the CTQ is present, the better; and some products require the CTQs to match their specified targets exactly.

According to Taguchi, to optimize the quality level of his products, the producer must seek to minimize the noise factors and maximize the Signal-to-Noise (S/N) ratio. Taguchi uses log functions to determine the Signal-to-Noise ratios that optimize the desired output.

The Bigger-The-Better

If the number of minutes per dollar customers get from their cellular phone service provider is critical to quality, the customers will want to get the maximum number of minutes they can for every dollar they spend on their phone bills.

If the lifetime of a battery is critical to quality, the customers will want their batteries to last forever. The longer the battery lasts, the better it is.

The Signal-To-Noise ratio for the bigger-the-better is:

S/N = -10 * log10(mean square of the inverse of the response) = -10 * log10((1/n) * Σ(1/yi^2))

The Smaller-The-Better

Impurity in drinking water is critical to quality. The fewer impurities customers find in their drinking water, the better it is.

Vibration is critical to quality for a car; the less vibration the customers feel while driving their cars, the better and the more attractive the cars are.

The Signal-To-Noise ratio for the Smaller-The-Better is:

S/N = -10 * log10(mean square of the response) = -10 * log10((1/n) * Σ(yi^2))


The Nominal-The-Best.

When a manufacturer is building matching parts, he wants every part to match the predetermined target. For instance, when he is creating pistons that need to be anchored on a given part of a machine, failure to make the length of a piston match the predetermined size will result in it being either too short or too long, lowering the quality of the machine. In that case, the manufacturer wants all the parts to match their target.

When a customer buys ceramic tiles to decorate his bathroom, the size of the tiles is critical to quality; tiles that do not match the predetermined target will not line up correctly against the bathroom walls.

The S/N equation for the Nominal-The-Best is:

S/N = 10 * log10(the square of the mean divided by the variance) = 10 * log10(ȳ^2 / s^2)
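The three ratios can be written directly from these definitions. A short Python sketch follows; the sample data are invented for the illustration and are not from the text:

    import math
    from statistics import mean, variance

    def sn_bigger_the_better(ys):
        # -10 * log10 of the mean square of the inverse of the response.
        return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

    def sn_smaller_the_better(ys):
        # -10 * log10 of the mean square of the response.
        return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

    def sn_nominal_the_best(ys):
        # 10 * log10 of the squared mean divided by the variance.
        return 10 * math.log10(mean(ys) ** 2 / variance(ys))

    minutes_per_dollar = [32.0, 30.5, 31.2]    # bigger is better
    vibration = [0.12, 0.09, 0.11]             # smaller is better
    screw_lengths = [15.01, 14.99, 15.00]      # nominal (15") is best

    print(sn_bigger_the_better(minutes_per_dollar))  # about 29.9 dB
    print(sn_smaller_the_better(vibration))          # about 19.4 dB
    print(sn_nominal_the_best(screw_lengths))        # about 63.5 dB

In all three cases, a larger S/N ratio indicates a design less sensitive to noise.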

Tolerance Design.

Parameter design may not completely eliminate variations from the target. That is why tolerance design must be used for all parts of a product to limit the possibility of producing defective products. The tolerance around the target is usually set by the design engineers; it is defined as the range within which variation may take place. The tolerance limits are set after testing and experimentation, and must be determined by criteria such as the set target, the safety factor, the functional limits, the expected quality level and the financial cost of any deviation from the target.

The safety factor measures the loss incurred when products outside the specified limits are produced. With A0 being the loss incurred when the functional limits are exceeded and A being the loss when the tolerance limits are exceeded, the safety factor is

φ = sqrt(A0 / A)

and the tolerance specification for the response factor will be

∆ = ∆0 / φ

with ∆0 being the functional limit.

Example:

The functional limits of a conveyor motor are +/- 0.05 of the response RPM. The adjustment made at the audit station before a motor leaves the company costs $2.5, and the cost associated with a defective motor once it has been sold is on average $180. Find the tolerance specification for a 2500 RPM motor.

Solution:

We first need to find the economical safety factor, which is determined by the losses incurred when the functional limits or the tolerance limits are exceeded:

φ = sqrt(A0 / A) = sqrt(180 / 2.5) ≈ 8.49

Now we can determine the tolerance specification. It will be the value of the response factor plus or minus the allowed variation from the target.

Tolerance for the response factor:

∆ = ∆0 / φ = 0.05 / 8.49 ≈ 0.00589

The variation from the target:

2500 * 0.00589 ≈ 14.73

The tolerance specification will be 2500 +/- 14.73.
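The same computation as a Python sketch (the variable names are ours, chosen for the example):

    import math

    A0 = 180.0               # average loss when a sold motor fails ($)
    A = 2.5                  # cost of an adjustment at the audit station ($)
    functional_limit = 0.05  # +/- fraction of the response RPM
    response = 2500.0        # target RPM

    safety_factor = math.sqrt(A0 / A)             # about 8.49
    tolerance = functional_limit / safety_factor  # about 0.00589
    print(round(response * tolerance, 2))         # 14.73 RPM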

2.9 OVERVIEW OF THE CONTRIBUTIONS OF SHIGEO SHINGO

Shigeo Shingo

Mistake proofing, or Poka-Yoke, was pioneered by Shigeo Shingo and detailed in his 'Zero Defects Model'. A Poka-Yoke is defined as a simple, inexpensive device that is not operator-dependent, built into the production process at the source of the operation, for the purpose of preventing safety hazards and quality defects 100% of the time.

It has many applications, from a production environment to a lean office or paper-trail process, and is used as a method for introducing a mistake-proofing idea into a process to eliminate defects in that process.


Everyday examples include:

• Bathroom sinks have a mistake-proofing device. It is the little hole near the top of the sink that helps prevent overflows

• An iron turns off automatically when it is left unattended or when it is returned to its holder

• The window in an envelope is not only a labour-saving device. It prevents the contents of an envelope intended for one person being inserted in an envelope addressed to another

Shigeo Shingo's life-long work has contributed to the well-being of everyone in the world. Shigeo Shingo, along with Taiichi Ohno, Kaoru Ishikawa and others, has helped to revolutionise the way we manufacture goods. His improvement principles vastly reduce the cost of manufacturing, which means more products for more people. They make the manufacturing process more responsive while opening the way to new and innovative products with fewer defects and better quality.

He was among the first to create strategies for the continuous and total involvement of all employees. Shingo's never-ending spirit of inquiry challenged the status quo at every level; he proposed that everything could be improved. Shingo believed that inventory is not just a necessary evil: all inventory is an absolute evil. He is one of the pioneers of change management. He brought about many new concepts, such as ZD (Zero Defects); shifting the use of statistics for acceptance or rejection from SQC (Statistical Quality Control) to SPC (Statistical Process Control); SMED (Single Minute Exchange of Dies); Poka-Yoke (mistake proofing); and defining processes and operations in the two dimensions of VA (Value Addition) and non-VA.

2.10 CONCEPTS OF QUALITY CIRCLE

Quality circle

Quality is conformance to the claims made. A quality circle is a volunteer group composed of workers who meet together to discuss workplace improvement and make presentations to management with their ideas. Typical topics are improving safety, improving product design, and improvement of the manufacturing process. Quality circles have the advantage of continuity: the circle remains intact from project to project.

Quality circles were started in Japan in 1962 (Kaoru Ishikawa has been credited with creating them) as another method of improving quality. The movement in Japan was coordinated by the Japanese Union of Scientists and Engineers (JUSE). Prof. Ishikawa, who believed in tapping the creative potential of workers, innovated the quality circle movement to give Japanese industry that extra creative edge. A quality circle is a small group of employees from the same work area who voluntarily meet at regular intervals to identify, analyze and resolve work-related problems. This can not only improve the performance of any organization, but also motivate and enrich the work life of employees.

Quality circles have proven useful in many highly innovative companies in the Scandinavian countries, and the practice is recommended by many economics and business scholars.

The dictionary meaning of quality circle is: a group of employees who perform similar duties and meet at periodic intervals, often with management, to discuss work-related issues and to offer suggestions and ideas for improvements, as in production methods or quality control.

The Business Dictionary defines quality circles as: small groups of employees meeting on a regular basis within an organization for the purpose of discussing and developing management issues and procedures. Quality circles are established with management approval and can be important in implementing new procedures. While results can be mixed, on the whole, management has accepted quality circles as an important organizational methodology.

As per the Small Business Encyclopedia, a quality circle is a participatory management technique that enlists the help of employees in solving problems related to their own jobs. In their volume Japanese Quality Circles and Productivity, Joel E. Ross and William C. Ross define a quality circle as "a small group of employees doing similar or related work who meet regularly to identify, analyze, and solve product-quality and production problems and to improve general operations. The circle is a relatively autonomous unit (ideally about ten workers), usually led by a supervisor or a senior worker and organized as a work unit." Employees who participate in quality circles usually receive training in formal problem-solving methods, such as brainstorming, Pareto analysis, and cause-and-effect diagrams, and are then encouraged to apply these methods to either specific or general company problems. After completing an analysis, they often present their findings to management and then handle implementation of approved solutions.

Although most commonly found in manufacturing environments, quality circles are applicable to a wide variety of business situations and problems. They are based on two ideas: that employees can often make better suggestions for improving work processes than management, and that employees are motivated by their participation in making such improvements. Thus, implemented correctly, quality circles can help a small business reduce costs, increase productivity, and improve employee morale. Other potential benefits that may be realized by a small business include greater operational efficiency, reduced absenteeism, improved employee health and safety, and an overall better working climate. In their book Production and Operations Management, Howard J. Weiss and Mark E. Gershon called quality circles "the best means today for meeting the goal of designing quality into a product."

The interest of U.S. manufacturers in quality circles was sparked by dramatic improvements in the quality and economic competitiveness of Japanese goods in the post-World War II years. The emphasis of Japanese quality circles was on preventing defects from occurring rather than inspecting products for defects after a manufacturing process. Japanese quality circles also attempted to minimize the scrap and downtime that resulted from part and product defects. In the United States, the quality circle movement evolved to encompass the broader goals of cost reduction, productivity improvement, employee involvement, and problem-solving activities.

Background

Quality circles were originally associated with Japanese management and manufacturing techniques. The introduction of quality circles in Japan in the postwar years was inspired by the lectures of W. Edwards Deming (1900-1993), a statistician for the U.S. government. Deming based his proposals on the experience of U.S. firms operating under wartime industrial standards. Noting that American management had typically given line managers and engineers about 85 percent of the responsibility for quality control and line workers only about 15 percent, Deming argued that these shares should be reversed. He suggested redesigning production processes to more fully account for quality control, and continuously educating all employees in a firm, from the top down, in quality control techniques and statistical control technologies. Quality circles were the means by which this continuous education was to take place for production workers.

Deming predicted that if Japanese firms adopted the system of quality controls he advocated, nations around the world would be imposing import quotas on Japanese products within five years. His prediction was vindicated. Deming's ideas became very influential in Japan, and he received several prestigious awards for his contributions to the Japanese economy.

The principles of Deming's quality circles simply moved quality control to an earlier position in the production process. Rather than relying upon post-production inspections to catch errors and defects, quality circles attempted to prevent defects from occurring in the first place. As an added bonus, machine downtime and scrap materials that formerly occurred due to product defects were minimized. Deming's idea that improving quality could increase productivity led to the development in Japan of the Total Quality Control (TQC) concept, in which quality and productivity are viewed as two sides of the same coin. TQC also required that a manufacturer's suppliers make use of quality circles.

Quality circles in Japan were part of a system of relatively cooperative labor-management relations, involving company unions and lifetime employment guarantees for many full-time permanent employees. Consistent with this decentralized, enterprise-oriented system, quality circles provided a means by which production workers were encouraged to participate in company matters and by which management could benefit from production workers' intimate knowledge of the production process. In 1980 alone, changes resulting from employee suggestions resulted in savings of $10 billion for Japanese firms and bonuses of $4 billion for Japanese employees.

Active American interest in Japanese quality control began in the early 1970s, when the U.S. aerospace manufacturer Lockheed organized a tour of Japanese industrial plants. This trip marked a turning point in the previously established pattern, in which Japanese managers had made educational tours of industrial plants in the United States. Lockheed's visit resulted in the gradual establishment of quality circles in its factories, beginning in 1974. Within two years, Lockheed estimated that its fifteen quality circles had saved nearly $3 million, with a ratio of savings to cost of six to one. As Lockheed's successes became known, other firms in the aerospace industry began adopting quality circles. Thereafter, quality circles spread rapidly throughout the U.S. economy; by 1980, over one-half of the firms in the Fortune 500 had implemented or were planning on implementing quality circles.

In the early 1990s, the U.S. National Labor Relations Board (NLRB) made several important rulings regarding the legality of certain forms of quality circles. These rulings were based on the 1935 Wagner Act, which prohibited company unions and management-dominated labor organizations. One NLRB ruling found unlawful those quality programs that were established by the firm, featured agendas dominated by the firm, and addressed the conditions of employment within the firm. Another ruling held that a company's labor-management committees were in effect labor organizations used to bypass negotiations with a labor union. As a result of these rulings, a number of employer representatives expressed concern that quality circles, as well as other kinds of labor-management cooperation programs, would be hindered. However, the NLRB stated that these rulings were not general indictments against quality circles and labor-management cooperation programs, but were aimed specifically at the practices of the companies in question.


Requirements for Successful Quality Circles

In his book Productivity Improvement: A Guide for Small Business, Ira B. Gregerman outlined a number of requirements for a small business contemplating the use of quality circles. First, the small business owner should be comfortable with a participative management approach. It is also important that the small business have good, cooperative labor-management relations, as well as the support of middle managers for the quality circle program. The small business owner must be willing and able to commit the time and resources needed to train the employees who will participate in the program, particularly the quality circle leaders and facilitators. It may even be necessary to hire outside facilitators if the time and expertise do not exist in-house. Some small businesses may find it helpful to establish a steering committee to provide direction and guidance for quality circle activities. Even if all these requirements are met, the small business will only benefit from quality circles if employee participation is voluntary, and if employees are allowed some input into the selection of problems to be addressed. Finally, the small business owner must allow time for the quality circles to begin achieving desired results; in some cases, it can take more than a year for expectations to be met.

But successful quality circles offer a wide variety of benefits for small businesses. For example, they serve to increase management's awareness of employee ideas, as well as employee awareness of the need for innovation within the company. Quality circles also serve to facilitate communication and increase commitment among both labor and management. In enhancing employee satisfaction through participation in decision-making, such initiatives may also improve a small business's ability to recruit and retain qualified employees. In addition, many companies find that quality circles further teamwork and reduce employee resistance to change. Finally, quality circles can improve a small business's overall competitiveness by reducing costs, improving quality, and promoting innovation.

2.11 JAPANESE 5S PRINCIPLES

5S

The 5Ses referred to in Lean are:

• Sort
• Straighten
• Shine
• Standardize
• Sustain

These 5S principles are actually loose translations of five Japanese words:

• Seiri - Put things in order (remove what is not needed and keep what is needed)


• Seiton - Proper arrangement (place things in such a way that they can be easily reached whenever they are needed)

• Seiso - Clean (keep things clean and polished; no trash or dirt in the workplace)

• Seiketsu - Purity (maintain cleanliness after cleaning; perpetual cleaning)

• Shitsuke - Commitment (a teaching attitude towards any undertaking, to inspire pride and adherence to the standards established for the four preceding components)

Another way to summarize 5S is:

A place for everything (the first three Ss) and everything in its place (the last two Ss).

2.12 8D METHODOLOGY

8 Disciplines

The "8D (8 Disciplines)" process is another problem-solving method, one that is often required specifically in the automotive industry. One of the distinguishing characteristics of the 8D methodology is its emphasis on teams.

The steps to 8D analysis are:

1. Use Team Approach

2. Describe the Problem

3. Implement and Verify Interim Actions (Containment)

4. Identify Potential Causes

5. Choose/Verify Corrective Actions

6. Implement Permanent Corrective Actions

7. Prevent Recurrence

8. Congratulate Your Team

SUMMARY

The principles and philosophies behind the evolution of various quality management techniques are elaborated in this unit. Walter A. Shewhart focused his work on ensuring control in industrial quality processes, laying the foundation for evolutionary thinking on quality and its management. William Edwards Deming, a contemporary of Shewhart, developed the PDSA cycle and named it the Shewhart cycle; later it became the PDCA cycle. Deming attempted to balance standardized changes and the continuous improvement of things in the organization. Joseph Juran's contribution is the Quality Trilogy. While emphasizing quality planning during the design of a product, he laid more stress on quality control during operations. The famous cost-of-quality curve for identifying the optimum conformance level was developed by Juran, and the road map for quality planning and the steps to continuous quality improvement are among his contributions. Philip B. Crosby identified five absolutes of quality management and prescribed a quality vaccine. To achieve zero defects in an organization, he spelt out a fourteen-step quality programme, firmly believing that zero defects is an achievable goal. Masaaki Imai introduced Kaizen to the world and established the "Kaizen Institute", which propagates his ideas throughout the world. Mitchell Jay Feigenbaum pioneered studies in chaos theory; through his publications, he was able to disseminate the logistic maps he developed. Kaoru Ishikawa suggested the diagram used to identify the root cause of a problem: the fishbone diagram, also known as the cause-and-effect diagram, is predominantly used in fixing quality-related problems, and its construction methodology is also presented. Dr. Genichi Taguchi popularized the design of experiments; his complete philosophy of off-line quality control and his innovations in the design of experiments are deliberated in detail. Shigeo Shingo introduced the "Zero Defects Model" to the production community and advocated Poka-Yoke. The concept of the quality circle, inspired by Deming's lectures and credited to Ishikawa, is presented, along with the requirements for successful quality circles and their evolution. The Japanese 5S principles, namely Seiri, Seiton, Seiso, Seiketsu and Shitsuke, and the 8 disciplines to be focused on in the new era are deliberated.

REVIEW QUESTIONS

1. Explain the influence of Walter A. Shewhart on ensuring quality in organizations.

2. Explain the Shewhart cycle and elaborate on Deming's contributions to it.

3. Illustrate the Juran Trilogy and demonstrate how quality is ensured through it.

4. Enumerate the 14-step quality programme advocated by Crosby.

5. Explain how the logistic map was developed by Feigenbaum.

6. Illustrate the Ishikawa diagram and demonstrate its usefulness in problem solving.

7. What is the Taguchi loss function? Explain its principles of operation.

8. Highlight the concept of the quality circle and explain the requirements for successfully carrying it out.

9. Explain the Japanese 5S principles.

10. Discuss the 8D methodology with examples.


STATISTICAL PROCESS CONTROL AND PROCESS CAPABILITY

INTRODUCTION

The application of the principles and philosophies of quality management calls for capability assessment. This has to be undertaken by professionals through the process of re-engineering. There are specific control tools, like SPC, to be adopted in various industries at various stages. To achieve TQM, every aspect, whether it is reliability, maintenance or micro-level technology assimilation, has to be re-examined. This comprehensive exercise will bring out the best of organizational capabilities. This unit deals with Meaning and Significance of Statistical Process Control (SPC), Construction of Control Charts for Variables and Attributes, Process Capability – Meaning, Significance and Measurement, Six Sigma Concepts of Process Capability, Reliability Concepts – Definitions, Reliability in Series and Parallel, Product Life Characteristics Curve, Total Productive Maintenance (TPM), Relevance to TQM, Terotechnology, Business Process Re-engineering (BPR) – Principles, Applications, Re-engineering Process, Benefits and Limitations.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:

• Assess the importance and need for process control.
• Develop and use various charts and techniques for SPC.
• Analyze the evolution of Six Sigma.
• Understand the reliability variants and their applications.
• Get a scenario about BPR and its usage.
• Apply various process control techniques in real-life situations.

UNIT-III


3.1 MEANING AND SIGNIFICANCE OF STATISTICAL PROCESS CONTROL (SPC)

Statistical process control (SPC) is a method for achieving quality control in manufacturing processes. It employs control charts to detect whether the observed process is under control.

Classical quality control was achieved by inspecting 100% of the finished product and accepting or rejecting each item based on how well the item met specifications. In contrast, statistical process control uses statistical tools to observe the performance of the production line to predict significant deviations that may result in rejected products.

The underlying assumption is that there is variability in any production process: the process produces products whose properties vary slightly from their designed values, even when the production line is running normally, and these variances can be analyzed statistically to control the process. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacturing begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.

By using statistical tools, the quality engineer responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem.
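A tiny simulation makes the point; all numbers below (fill setting, sigma, drift, and the 505 g limit) are invented for the illustration. As parts wear, the whole distribution of net weights shifts, and the share of boxes beyond tolerance grows:

    import random

    random.seed(0)

    def box_weights(n, mu, sigma=2.0):
        # Net weights vary around the fill setting even on a healthy line.
        return [random.gauss(mu, sigma) for _ in range(n)]

    normal_run = box_weights(1000, mu=500.0)  # line running normally
    worn_run = box_weights(1000, mu=503.0)    # cams and pulleys worn: fill drifts up

    for name, run in (("normal", normal_run), ("worn", worn_run)):
        share_over = sum(w > 505.0 for w in run) / len(run)  # beyond the assumed limit
        print(name, round(sum(run) / len(run), 1), f"{share_over:.1%}")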

Focus Area: Quality - Measurement

Definition and Summary: Applying statistical process control (use of control charts) to the management of software development efforts, to effect software process improvement.

Statistical Process Control (SPC) can be applied to software development processes. A process has one or more outputs, as depicted in the figure below. These outputs, in turn, have measurable attributes. SPC is based on the idea that these attributes have two sources of variation: natural (also known as common) and assignable (also known as special) causes. If the observed variability of the attributes of a process is within the range of variability from natural causes, the process is said to be under statistical control. The practitioner of SPC tracks the variability of the process to be controlled. When that variability exceeds the range to be expected from natural causes, the practitioner then identifies and corrects assignable causes.

FIGURE 3.1


The key steps for implementing Statistical Process Control are:

o Identify defined processes
o Identify measurable attributes of the process
o Characterize natural variation of attributes
o Track process variation
o If the process is in control, continue to track
o If the process is not in control:
  - Identify assignable cause
  - Remove assignable cause
  - Return to "Track process variation"
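This tracking loop can be sketched as code. The callables here (observe, in_control, remove_assignable_cause) are hypothetical stand-ins for real measurement, control-chart evaluation and corrective action, not part of any standard library:

    def run_spc(observe, in_control, remove_assignable_cause, cycles=100):
        # Track the process; when it leaves control, correct it and resume tracking.
        history = []
        for _ in range(cycles):
            history.append(observe())          # measure an attribute of the process
            if not in_control(history):        # e.g., a control-chart rule fired
                remove_assignable_cause()      # investigate and remove the assignable cause
        return history

    # Example wiring with trivial stand-ins:
    import random
    history = run_spc(
        observe=lambda: random.gauss(10, 1),
        in_control=lambda h: abs(h[-1] - 10) <= 3,     # a three-sigma individuals rule
        remove_assignable_cause=lambda: None,
    )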

FIGURE 3.2 Statistical Process Control

How To Perform SPC

In practice, reports of SPC in software development and maintenance tend to concentrate on a few software processes. Specifically, SPC has been used to control software (formal) inspections, testing, maintenance, and personal process improvement. Control charts are the most common tools for determining whether a software process is under statistical control. A variety of types of control charts are used in SPC. Table 3.1, based on a survey [Radice 2000] of SPC usage in organizations attaining Level 4 or higher on the SEI CMM metric of process maturity, shows which types are most commonly used in applying SPC to software. The combination of an Upper Control Limit (UCL) and a Lower Control Limit (LCL) specifies, on control charts, the variability due to natural causes. Table 3.2 shows the levels commonly used in setting control limits for software SPC. Table 3.3 shows the most common statistical techniques, other than control charts, used in software SPC. Some of these techniques are used in trial applications of SPC to explore the natural variability of processes. Some are used in techniques for eliminating assignable causes. Analysis of defects is the most common technique for eliminating assignable causes. Causal-analysis-related techniques, such as Pareto analysis, Ishikawa diagrams, the Nominal Group Technique (NGT), and brainstorming, are also frequently used for eliminating assignable causes.

Table 3.1: Usage of Control Charts

Type of Control/Attribute Chart    Percentage

Xbar-mR                            33.3%
u-Chart                            23.3%
Xbar                               13.3%
c-Chart                             6.7%
z-Chart                             6.7%
Not clearly stated                 16.7%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 3.2: Location of UCL-LCL in Control Charts

Location          Percentage

Three-sigma       16%
Two-sigma          4%
One-sigma          8%
Combination       16%
None/Not clear    24%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 3.3: Usage of Other Statistical Techniques

Statistical Technique Percentage

Run Charts             22.8%
Histograms             21.1%
Pareto Analysis        21.1%
Scatter Diagrams       10.5%
Regression Analysis     7.0%
Pie Charts              3.5%
Radar/Kiviat Charts     3.5%
Other                  10.5%


From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Control charts are a central technology for SPC. Figure 3.3 shows a sample control chart constructed from simulated data. It is an X-chart, in which the value of the attribute is graphed along with the control limits. In this case, the control limits are based on a priori knowledge of the distribution of the attribute when the process is under control, and are set at three sigma. For a normal distribution, only about 0.3% of samples would fall outside the limits by chance. This control chart indicates that the process is out of control. If this control chart were for real data, the next step would be to investigate the process to identify assignable causes and to correct them, thereby bringing the process under control.

FIGURE 3.3
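A sketch of such a chart in code, using simulated data as the text describes. The in-control mean, sigma, and the size of the injected shift are invented for the example:

    import random

    random.seed(1)
    mu, sigma = 10.0, 1.0                 # assumed a priori in-control distribution
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

    # 25 in-control observations, then 5 shifted upward, as an assignable cause might do.
    data = [random.gauss(mu, sigma) for _ in range(25)]
    data += [random.gauss(mu + 4 * sigma, sigma) for _ in range(5)]

    for i, x in enumerate(data, start=1):
        note = "OUT OF CONTROL" if not (lcl <= x <= ucl) else ""
        print(f"{i:2d} {x:6.2f} {note}")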

Some have extended the focus of SPC in applying it to software processes. In manufacturing, the primary focus of control charts is to bring the process back into control. In software, the product is also a focus. When a software process exceeds the control limits, rework is typically performed on the product. In manufacturing, the cost of stopping a process is high. In software, the cost of stopping is lower, and few shutdown and startup activities are needed [Jalote and Saxena 2002].

SPC is one way of applying statistics to software engineering; other opportunities exist as well. Table 3.4 shows, by lifecycle phase, some of these uses of statistics. The National Research Council recently sponsored the Panel on Statistical Methods in Software Engineering [NRC 1996]. The panel recommended a wide range of areas for applying statistics, from visualizing test and metric data to conducting controlled experiments to demonstrate new methodologies.


Table 3.4: Some Applications of Statistics in Software Engineering

Phase          Use of Statistics

Requirements   Specify performance goals that can be measured statistically, e.g., no more than 50 total field faults and zero critical faults with 90% confidence.

Design         Pareto analysis to identify fault-prone modules. Use of design of experiments in making design decisions empirically.

Coding         Statistical control charts applied to inspections.

Testing        Coverage metrics provide attributes. Design of experiments is useful in creating test suites. Statistical usage testing is based on a specified operational profile. Reliability models can be applied.

Based on [Dalal et al. 1993]

Those applying SPC to industrial organizations have, in general, built process improvements on top of SPC. The focus of SPC is on removing variation caused by assignable causes. As defined here, SPC is not intended to lower process variation resulting from natural causes. Many corporations, however, have extended their SPC efforts with Six Sigma programs. Six Sigma provides continuous process improvement and attempts to reduce the natural variation in processes. Typically, Six Sigma programs use the "Seven Tools of Quality" (Table 3.5). The Shewhart cycle (Figure 3.4) is a fundamental idea for continuous process improvement.

Table 3.5: The Seven Tools of Quality

Tool Example of Use

Check Sheet To count occurrences of problems.

Histogram To identify central tendencies and any skewing to one side or the other.

Pareto Chart To identify the 20% of the modules which yield 80% of the issues.

Cause and Effect Diagram For identifying assignable causes.

Scatter Diagram For identifying correlation and suggesting causation.

Control Chart For identifying processes that are out of control.

Graph For visually displaying data, e.g., in a pie chart.


FIGURE 3.4

Anticipated Benefits of Implementation

SPC is a powerful tool to optimize the amount of information needed for use in making management decisions [Eickelmann and Anant 2003]. Statistical techniques provide an understanding of the business baselines, insights for process improvements, communication of the value and results of processes, and active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; learn, set, and dynamically improve process capabilities; and focus the business on areas needing improvement. SPC moves away from opinion-based decision making [Radice 2000].

These benefits of SPC cannot be obtained immediately by all organizations. SPC requires defined processes and a discipline of following them. It requires a climate in which personnel are not punished when problems are detected. It requires management commitment [Demmy 1989].

Detailed Characteristics

The processes controllable by SPC are unlimited in application domain, lifecycle phase, and design methodology. Processes need to exhibit certain characteristics to be suitable for SPC (as shown in the list below). In addition, a process to which SPC is applied should be homogeneous. For example, applications of SPC to software inspections have found that inspections must often be decomposed to apply SPC effectively. Florence [1999] found, for instance, that the inspection of human-machine interface specifications should be treated as a process separate from the inspection of application specifications in the project he examined. Weller [2000] found that inspections of new and revised code should be treated as separate processes in the project he examined. A trial application of SPC is useful in identifying homogeneous subprocesses. Issues other than identifying defined homogeneous processes arise in implementing SPC; Table 3.6 presents such implementation issues.

Criteria Of Processes Suitable for SPC

o Well-defined
o Have attributes with observable measures
o Repetitive
o Sufficiently critical to justify monitoring effort

(Based on [Demmy 1989])

SPC Implementation Issues

TABLE 3.6

Define Process: Consistent measurements cannot be expected from software processes that are not documented and generally followed.

Choose Appropriate Measures: Measures need not be exhaustive. One or two measures that provide insight into the performance of a process or activity are adequate, especially if the measures are related to the process or activity goal. Measures that can be tracked inexpensively are preferable.

Focus on Process Trends: Control charts should be constructed so as to detect process trends, not individual nonconforming events.

Calculate Control Limits Correctly: Straightforward formulas exist for calculating control limits and analyzing distributions. Introductory college courses in statistics usually do not address process-control techniques in detail.

Investigate and Act: SPC only signals the possible existence of a problem. Without detailed investigation, as in an audit, and the institution of corrective action, SPC will not provide any benefit.

Provide Training: Problems in following the above recommendations for implementing SPC can be decreased with effective training. SPC training based on examples of software processes is to be preferred.

(Based on [Card 1994])

Page 82: DBA1656

DBA 1656 QUALITY MANAGEMENT

82

NOTES

Anna University Chennai

Relationships to Other Practices:

The figure below represents a high-level process architecture for the subject practice, depicting relationships among this practice and the nature of the influences on the practice (describing how other practices might relate to this practice). These relationship statements are based on definitions of specific "best practices" found in the literature and the notion that the successful implementation of practices may influence (or be influenced by) the ability to successfully implement other practices. A brief description of these influences is included in the table below.

FIGURE 3.5

Process Architecture for the “Statistical Process Control” Gold Practice


TABLE 3.7

Summary of Relationship Factors

INPUTS TO THE PRACTICE

Determine which attributes, at what levels, should be controlled: Statistical Process Control can only be effective if the most critical processes are identified and addressed using the technique. Practices that help establish clear goals and decision points, and are based on meaningful metrics and attributes tied to specific program or technical goals, stand to gain the most payback from using SPC. SPC techniques need not be restricted to the present; i.e., planning for the insertion of new technology later in the life cycle should also plan for the use of SPC to ensure that processes are controlled and the reliability of the resulting software artifacts is optimized.

Define whether the environment is appropriate for process control: Practices such as Performance-Based Specifications and Commercial Specifications/Open Systems can imply the generation and collection of data. Such data may serve as appropriate input to SPC techniques such as control charts. Therefore, a data-rich environment provides an excellent opportunity to leverage the benefits of statistical process control techniques.

Provide data upon which decisions can be based: An initial step in applying SPC is often to discover controllable and homogeneous processes. Past performance data can be used for this purpose. Formal inspection processes and processes for leveraging COTS/NDI explicitly call for metrics to be collected. These metrics can be used as the basis for SPC.

OUTPUTS FROM THE PRACTICE

Assess progress towards process control: SPC can be used not only to control processes, but also to determine if quantitative requirements on software processes are being met. The results of SPC, then, provide valuable data and information that can be used to manage progress towards achievement of software requirements. Part of this ability to manage progress is supported by the quantitative progress measurements that are inherent in the SPC process, primarily in the form of defect tracking and correction against specific, quantitative quality targets.

Communicate progress towards controlling processes: Management go/no-go decisions can be based on whether development processes are under control. SPC presents graphical displays supporting such decisions. The number and types of available graphical formats that can be used as part of the SPC process provide accessible visibility into the progress being made to all program stakeholders. Demonstration-based reviews provide an excellent vehicle for communicating the progress being made in controlling processes through the use of SPC.

Improve Testing Efficiency and Effectiveness: By providing control over software development processes, SPC will result in more predictable and more reliable software. Rigorous testing that is guided by specifications and supported by well-documented and accurate operational and usage-based models will be much more effective under the controlled processes resulting from SPC.

Definitions

An application of statistics to controlling industrial processes, including processes in software development and maintenance. Statistical Process Control (SPC) is used to identify and remove variations in processes that exceed the variation to be expected from natural causes. The purpose of process control is to detect any abnormality in the process.

- [Ishikawa 1982]

Sources (Origins of the Practice)

Walter Shewhart developed Statistical Process Control (SPC) in the 1920s. Shewhart sought methods for applying statistics to industrial practice; acceptance testing and SPC grew out of this work. Shewhart proposed the use of control charts, a core technique of SPC, in a historic internal memorandum of 16 May 1924 at Bell Telephone Laboratories.

For a long time, SPC was most widely adopted in Japan, not the United States. Shewhart mentored W. Edwards Deming, and Deming went on to introduce quality technologies into Japanese industry. The "Guide to Quality Control" [Ishikawa 1982], first published in 1968 in Japanese, is a guide to quality control techniques that became prevalent in Japan after World War II. Corporations in the United States began adopting quality technology, including SPC, more widely in the 1980s. Recently, many United States corporations have instituted Six Sigma programs. These programs, through continual process improvement, attempt to reduce the natural variation in industrial processes.

Some began applying SPC techniques to software in the 1980s; Gardiner and Montgomery [1987] report an example. Software inspections seem to provide the processes that are most commonly monitored with SPC in software. Some recently proposed software process models include opportunities for SPC. The spiral lifecycle provides a natural time for tuning software processes, namely before the start of the development of each increment. SPC yields analyzed data that managers can use in selecting processes to tune. Cleanroom software engineering combines incremental development and software inspections with other technologies, such as reliability modeling. The Software Engineering Institute (SEI) Capability Maturity Model (CMM) mandates that SPC be used in Level 4 organizations.

3.2 CONSTRUCTION OF CONTROL CHARTS FOR VARIABLES AND ATTRIBUTES

Common Types of Charts

The types of charts are often classified according to the type of quality characteristic that they are supposed to monitor: there are quality control charts for variables and control charts for attributes. Specifically, the following charts are commonly constructed for controlling variables:

• X-bar chart. In this chart the sample means are plotted in order to control the mean value of a variable (e.g., size of piston rings, strength of materials, etc.).

• R chart. In this chart, the sample ranges are plotted in order to control the variability of a variable.

• S chart. In this chart, the sample standard deviations are plotted in order to control the variability of a variable.

• S² chart. In this chart, the sample variances are plotted in order to control the variability of a variable.

For controlling quality characteristics that represent attributes of the product, the following charts are commonly constructed:


• C chart. In this chart (see example below), we plot the number of defectives (per batch, per day, per machine, per 100 feet of pipe, etc.). This chart assumes that defects of the quality attribute are rare, and the control limits in this chart are computed based on the Poisson distribution (the distribution of rare events).

FIGURE 3.6

• U chart. In this chart we plot the rate of defectives, that is, the number of defectives divided by the number of units inspected (the n; e.g., feet of pipe, number of batches). Unlike the C chart, this chart does not require a constant number of units, and it can be used, for example, when the batches (samples) are of different sizes.

• Np chart. In this chart, we plot the number of defectives (per batch, per day, per machine) as in the C chart. However, the control limits in this chart are not based on the distribution of rare events, but rather on the binomial distribution. Therefore, this chart should be used if the occurrence of defectives is not rare (e.g., they occur in more than 5% of the units inspected). For example, we may use this chart to control the number of units produced with minor flaws.

• P chart. In this chart, we plot the percent of defectives (per batch, per day, per machine, etc.) as in the U chart. However, the control limits in this chart are not based on the distribution of rare events but rather on the binomial distribution (of proportions). Therefore, this chart is most applicable to situations where the occurrence of defectives is not rare (e.g., we expect the percent of defectives to be more than 5% of the total number of units produced).
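As a rough illustration of how limits for two of these attribute charts are computed (the counts below are invented): the C chart uses Poisson-based 3-sigma limits, while the P chart's binomial-based limits vary with each sample size.

```python
import math

# C chart: defect counts per inspection unit (invented data).
counts = [3, 5, 2, 6, 4, 3, 7, 4]
c_bar = sum(counts) / len(counts)
print(f"C chart: centre={c_bar:.2f}  "
      f"UCL={c_bar + 3 * math.sqrt(c_bar):.2f}  "
      f"LCL={max(0.0, c_bar - 3 * math.sqrt(c_bar)):.2f}")

# P chart: defectives out of unequal sample sizes, so limits vary per sample.
defectives = [8, 12, 9, 15, 11]
sizes = [200, 240, 210, 260, 220]
p_bar = sum(defectives) / sum(sizes)
for n in sizes:
    half = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    print(f"n={n}: UCL={p_bar + half:.4f}  LCL={max(0.0, p_bar - half):.4f}")
```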


Control Charts for Variables vs. Charts for Attributes

Sometimes, the quality control engineer has a choice between variable control charts and attribute control charts.

Advantages of attribute control charts. Attribute control charts have the advantage of allowing for quick summaries of various aspects of the quality of a product; that is, the engineer may simply classify products as acceptable or unacceptable, based on various quality criteria. Thus, attribute charts sometimes bypass the need for expensive, precise devices and time-consuming measurement procedures. Also, this type of chart tends to be more easily understood by managers unfamiliar with quality control procedures; therefore, it may provide more persuasive (to management) evidence of quality problems.

Advantages of variable control charts. Variable control charts are more sensitive than attribute control charts (see Montgomery, 1985, p. 203). Therefore, variable control charts may alert us to quality problems before any actual "unacceptables" (as detected by the attribute chart) occur. Montgomery (1985) calls the variable control charts leading indicators of trouble that will sound an alarm before the number of rejects (scrap) increases in the production process.
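To make the comparison concrete, here is a minimal sketch of the variables charts discussed above (X-bar and R), with invented subgroup measurements; A2, D3 and D4 are the standard control-chart factors for subgroups of size five:

```python
# X-bar and R chart limits for subgroups of size 5 (measurements invented).
subgroups = [
    [10.1, 9.8, 10.3, 10.0, 9.9],
    [10.2, 10.0, 9.7, 10.1, 10.4],
    [9.9, 10.1, 10.0, 9.8, 10.2],
    [10.0, 10.3, 9.9, 10.1, 10.0],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # standard factors for n = 5

x_bars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
grand_mean = sum(x_bars) / len(x_bars)
r_bar = sum(ranges) / len(ranges)

print(f"X-bar chart: centre={grand_mean:.3f}  "
      f"UCL={grand_mean + A2 * r_bar:.3f}  LCL={grand_mean - A2 * r_bar:.3f}")
print(f"R chart:     centre={r_bar:.3f}  UCL={D4 * r_bar:.3f}  LCL={D3 * r_bar:.3f}")
```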

3.3 PROCESS CAPABILITY – MEANING, SIGNIFICANCE AND MEASUREMENT

Process Capability

1. Select a candidate for the study. This step should be institutionalized. Ongoing process improvement should be a goal of any organization. However, because a company has only a limited resource base and can't solve all problems simultaneously, it must set priorities for its efforts. The tools for this include Pareto analysis and fishbone diagrams.

2. Define the process. It is all too easy to slip into the trap of solving the wrong problem. Once the candidate area has been selected in step 1, define the scope of the study. A process is a unique combination of machines, tools, methods, and personnel engaged in adding value by providing a product or service. Each element of the process should be identified at this stage. This is not a trivial exercise. The input of many people may be required. There are likely to be a number of conflicting opinions about what the process actually involves.

3. Procure resources for the study. Process capability studies disrupt normal operations and require significant expenditures of both material and human resources. Since it is a project of major importance, it should be managed as such. All of the usual project management techniques should be brought to bear. This includes planning, scheduling, and management status reporting.


4. Evaluate the measurement system. Using the techniques described in Chapter V, evaluate the measurement system's ability to do the job. Again, be prepared to spend the time necessary to get a valid means of measuring the process before going ahead.

5. Prepare a control plan. The purpose of the control plan is twofold: 1) to isolate and control as many important variables as possible, and 2) to provide a mechanism for tracking variables that cannot be completely controlled. The object of the capability analysis is to determine what the process can do if it is operated the way it is designed to be operated. This means that such obvious sources of potential variation as operators and vendors will be controlled while the study is conducted. In other words, a single well-trained operator will be used and the material will be from a single vendor. There are usually some variables that are important but not controllable. One example is the ambient environment, including temperature, barometric pressure, or humidity. Certain process variables may degrade as part of the normal operation; for example, tools wear and chemicals are used up. These variables should still be tracked using logsheets and similar tools.

6. Select a method for the analysis. The SPC method will depend on the decisions made up to this point. If the performance measure is an attribute, one of the attribute charts will be used. Variables charts will be used for process performance measures assessed on a continuous scale. Also considered will be the skill level of the personnel involved, the need for sensitivity, and the other resources required to collect, record, and analyze the data.

7. Gather and analyze the data. Use one of the control charts described in this chapter, plus common sense. It is usually advisable to have at least two people go over the data analysis to catch inadvertent errors in transcribing data or performing the analysis.

8. Track down and remove special causes. A special cause of variation may be obvious, or it may take months of investigation to find it. The effect of the special cause may be good or bad. Removing a special cause that has a bad effect usually involves eliminating the cause itself. For example, if poorly trained operators are causing variability, the special cause is the training system (not the operator), and it is eliminated by developing an improved training system or a process that requires less training. However, the removal of a beneficial special cause may actually involve incorporating the special cause into the normal operating procedure. For example, if it is discovered that materials with a particular chemistry produce better product, the special cause is the newly discovered material, and it can be made a common cause simply by changing the specification to assure that the new chemistry is always used.

9. Estimate the process capability. One point cannot be overemphasized: the process capability cannot be estimated until a state of statistical control has been achieved! After this stage has been reached, the methods described later in this chapter may be used (see the sketch after this list). After the numerical estimate of process capability has been arrived at, it must be compared to management's goals for the process, or it can be used as an input into economic models. Deming's all-or-none rules provide a simple model that can be used to determine if the output from a process should be sorted 100% or shipped as-is.

10. Establish a plan for continuous process improvement. Once a stable process state has been attained, steps should be taken to maintain it and improve upon it. SPC is just one means of doing this. Far more important than the particular approach taken is a company environment that makes continuous improvement a normal part of the daily routine of everyone.
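Step 9's capability estimate can be sketched in a few lines. The following minimal Python example uses the standard Cp and Cpk formulas, with invented measurements and specification limits; it is a sketch, not a substitute for the full study above:

```python
import statistics

# Hypothetical measurements from a process already in statistical control.
data = [10.02, 9.97, 10.05, 10.01, 9.95, 10.03, 9.99, 10.04]
usl, lsl = 10.15, 9.85   # invented specification limits

mean = statistics.mean(data)
s = statistics.stdev(data)

cp = (usl - lsl) / (6 * s)                   # potential (centred) capability
cpk = min(usl - mean, mean - lsl) / (3 * s)  # capability allowing for off-centring
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")
```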

3.4 SIX SIGMA CONCEPTS OF PROCESS CAPABILITY

Six Sigma


Six Sigma is a system of practices originally developed by Motorola to systematically improve processes by eliminating defects. Defects are defined as units that are not members of the intended population. Since it was originally developed, Six Sigma has become an element of many Total Quality Management (TQM) initiatives.

The process was pioneered by Bill Smith at Motorola in 1986 and was originally defined as a metric for measuring defects and improving quality, and a methodology to reduce defect levels below 3.4 Defects Per (one) Million Opportunities (DPMO).

Six Sigma is a registered service mark and trademark of Motorola, Inc. Motorola has reported over US$17 billion in savings from Six Sigma as of 2006.

In addition to Motorola, companies which adopted Six Sigma methodologies early on and continue to practice them today include Bank of America, Caterpillar, Honeywell International (previously known as Allied Signal), Raytheon and General Electric (introduced by Jack Welch).


Recently, Six Sigma has been integrated with the TRIZ methodology for problem solving and product design.

Key concepts of Six Sigma

At its core, Six Sigma revolves around a few key concepts.

Critical to Quality: Attributes most important to the customer

Defect: Failing to deliver what the customer wants

Process Capability: What your process can deliver

Variation: What the customer sees and feels

Stable Operations: Ensuring consistent, predictable processes to improve what the customer sees and feels

Design for Six Sigma: Designing to meet customer needs and process capability

Methodology

Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to improve an existing business process. DMADV is used to create new product designs or process designs in such a way that the result is more predictable, mature and defect-free performance.

DMAIC

Basic methodology consists of the following five steps:

Define the process improvement goals that are consistent with customer demands and enterprise strategy.

Measure the current process and collect relevant data for future comparison.

Analyze to verify relationships and causality of factors. Determine what the relationship is, and attempt to ensure that all factors have been considered.

Improve or optimize the process based upon the analysis, using techniques like Design of Experiments.

Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production, and thereafter continuously measure the process and institute control mechanisms.


DMADV

Basic methodology consists of the following five steps:

Define the goals of the design activity that are consistent with customer demands and enterprise strategy.

Measure and identify CTQs (critical-to-quality characteristics), product capabilities, production process capability, and risk assessments.

Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.

Design details, optimize the design, and plan for design verification. This phase may require simulations.

Verify the design, set up pilot runs, implement the production process and hand it over to the process owners.

Some people have used DMAICR (Realize). Others contend that focusing on the financial gains realized through Six Sigma is counter-productive and that such financial gains are simply byproducts of a good process improvement.

Another flavor of Design for Six Sigma is the DMEDI method. This process is almost exactly like the DMADV process, utilizing the same toolkit but with a different acronym. DMEDI stands for Define, Measure, Explore, Develop, Implement.

Quality approaches and models

DFSS (Design for Six Sigma) - A systematic methodology utilizing tools, training and measurements to enable us to design products and processes that meet customer expectations and can be produced at Six Sigma quality levels.

DMAIC (Define, Measure, Analyze, Improve and Control) - A process for continued improvement. It is systematic, scientific and fact based. This closed-loop process eliminates unproductive steps, often focuses on new measurements, and applies technology for improvement.

Six Sigma - A vision of quality which equates with only 3.4 defects per million opportunities for each product or service transaction. Strives for perfection.

Quality Tools

Associates are exposed to various tools and terms related to quality. Below are just a few of them.

Control Chart - Monitors variance in a process over time and alerts the business to unexpected variance which may cause defects.


Defect Measurement - Accounting for the number or frequency of defects that cause lapses in product or service quality.

Pareto Diagram - Focuses our efforts on the problems that have the greatest potential for improvement by showing relative frequency and/or size in a descending bar graph. Based on the proven Pareto principle: 20% of the sources cause 80% of any problem.

Process Mapping - An illustrated description of how things get done, which enables participants to visualize an entire process and identify areas of strength and weakness. It helps reduce cycle time and defects while recognizing the value of individual contributions.

Root Cause Analysis - Study of the original reason for nonconformance with a process. When the root cause is removed or corrected, the nonconformance will be eliminated.

Statistical Process Control - The application of statistical methods to analyze data, and to study and monitor process capability and performance.

Tree Diagram - Graphically shows any broad goal broken into different levels of detailed actions. It encourages team members to expand their thinking when creating solutions.

Quality Terms

Black Belt - Leaders of teams responsible for measuring, analyzing, improving and controlling key processes that influence customer satisfaction and/or productivity growth. Black Belts are full-time positions.

Control - The state of stability, normal variation and predictability; the process of regulating and guiding operations and processes using quantitative data.

CTQ: Critical to Quality (Critical "Y") - An element of a process or practice which has a direct impact on its perceived quality.

Customer Needs, Expectations - Needs, as defined by customers, which meet their basic requirements and standards.

Defects - Sources of customer irritation. Defects are costly to both customers and to manufacturers or service providers. Eliminating defects provides cost benefits.

Green Belt - Similar to Black Belt but not a full-time position.

Master Black Belt - First and foremost teachers. They also review and mentor Black Belts. Selection criteria for Master Black Belts are quantitative skills and the ability to teach and mentor. Master Black Belts are full-time positions.


Variance - A change in a process or business practice that may alter its expected outcome.

Statistics and robustness

The core of the Six Sigma methodology is a data-driven, systematic approach to problem solving, with a focus on customer impact. Statistical tools and analysis are often useful in the process. However, it is a mistake to view the core of the Six Sigma methodology as statistics; an acceptable Six Sigma project can be started with only rudimentary statistical tools.

Still, some professional statisticians criticize Six Sigma because practitioners have highly varied levels of understanding of the statistics involved.

Six Sigma as a problem-solving approach has traditionally been used in fields such as business, engineering, and production processes.

Roles required for implementation

Six Sigma identifies five key roles for its successful implementation.

Executive Leadership includes the CEO and other key top management team members. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.

Champions are responsible for the Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from the upper management. Champions also act as mentors to Black Belts. At GE this level of certification is now called "Quality Leader".

Master Black Belts, identified by Champions, act as in-house expert coaches for the organization on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from the usual rigor of statistics, their time is spent on ensuring integrated deployment of Six Sigma across various functions and departments.

Experts: this level of skill is used primarily within the aerospace and defense business sectors. Experts work across company boundaries, improving services, processes, and products for their suppliers, their entire campuses, and their customers. Raytheon Incorporated was one of the first companies to introduce Experts to their organizations. At Raytheon, Experts work not only across multiple sites, but across business divisions, incorporating lessons learned throughout the company.


Black Belts operate under Master Black Belts to apply the Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects and functions for Six Sigma.

Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities. They operate under the guidance of Black Belts and support them in achieving the overall results.

In many successful modern programs, Green Belts and Black Belts are empowered to initiate, expand, and lead projects in their area of responsibility. The roles as defined above, therefore, conform to the antiquated Mikel Harry/Richard Schroeder model, which is far from being universally accepted. The terms black belt and green belt are borrowed from the ranking systems in various martial arts.

The term Six Sigma

Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population; the lower-case Latin letter s denotes an estimate based on a sample. The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, one will make practically no items that exceed the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way that the term was derived from the tool, often sparks criticism of Six Sigma.

The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). A process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). This implies that 3.4 DPMO corresponds to 4.5 sigmas, not six as the process name would imply. This can be confirmed by running a Capability Study in QuikSigma or Minitab on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. The 1.5 sigmas added to the name Six Sigma are arbitrary and are called the "1.5 sigma shift" (SBTI Black Belt material, ca. 1998). Dr. Donald Wheeler dismisses the 1.5 sigma shift as "goofy".
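The same confirmation can be sketched without Minitab or QuikSigma. A minimal Python example, assuming scipy is available, computes the normal tail area beyond a given number of standard deviations:

```python
from scipy.stats import norm

def dpmo(z):
    """One-sided tail area beyond z standard deviations, per million."""
    return norm.sf(z) * 1_000_000   # sf(z) = 1 - cdf(z)

print(f"{dpmo(4.5):.1f}")    # ~3.4 DPMO: 6 sigma short term minus the 1.5 shift
print(f"{dpmo(6.0):.4f}")    # ~0.001 DPMO for a literal, unshifted 6 sigma
```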

In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than the standard deviation of the process, which is also measured in "sigmas". As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, the Process Capability sigma number goes down, because fewer standard deviations will then fit between the mean and the nearest specification limit (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct. That requires that the Process Capability sigma based on long-term data be less than or equal to an estimate based on short-term sigma. However, the original use of the 1.5 sigma shift is as shown above, and implicitly assumes the opposite.

As sample size increases, the error in the estimate of the standard deviation converges much more slowly than the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of the standard deviation often drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of the standard deviation, and that the defective parts per million estimates produced by Capability Studies often ought not to be taken too literally.

Estimates of the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there are no means for proving that data belong to any particular distribution. One can only assume normality, based on finding no evidence to the contrary. Estimating defective parts per million down into the 100s or 10s of units based on such an assumption is wishful thinking, since actual defects are often deviations from normality, which have been assumed not to exist.

The ±1.5 Sigma Drift

The ±1.5 sigma drift is the drift of a process mean, which occurs in all processes in a six sigma program. If a product being manufactured measures 100 ± 3 cm (97 - 103 cm), over time the ±1.5 sigma drift may cause the average to range up to 98.5 - 104.5 cm or down to 95.5 - 101.5 cm. This could be of significance to customers.

The ±1.5 shift was introduced by Mikel Harry. Harry referred to a paper about tolerancing (how the overall error in an assembly is affected by the errors in its components) written in 1975 by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts". Evans in turn refers to a paper by Bender from 1962, "Benderizing Tolerances – A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", Bender suggested taking 1.5 times the root-sum-square combination of the individual tolerances.


Harry then took this a step further. Supposing that there is a process in which 5 samples are taken every half hour and plotted on a control chart, Harry considered the "instantaneous" initial 5 samples as being "short term" (Harry's n=5) and the samples throughout the day as being "long term" (Harry's g=50 points). Due to the random variation in the first 5 points, the mean of the initial sample is different from the overall mean. Harry derived a relationship between the short-term and long-term capability, using Bender's relationship above, to produce a capability shift or "Z shift" of 1.5. Over time, the original meanings of "short term" and "long term" have been changed to result in "long term" drifting means.

Harry has clung tenaciously to the "1.5", but over the years its derivation has been modified. In a recent note from Harry: "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than a theoretical value. A further softening from Harry: "... the 1.5 constant would not be needed as an approximation".

Despite this, industry has fixed on the idea that it is impossible to keep processes on target. No matter what is done, process means will drift by ±1.5 sigma. In other words, if a process has a target value of 10.0, and control limits work out to be 13.0 and 7.0, over the long term the mean will drift to 11.5 (or 8.5), with control limits changing to 14.5 and 8.5.

In truth, any process where the mean changes by 1.5 sigma, or any other amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where the specification limits have been set.
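One common way such a trend is detected is with a run rule on the control chart. The sketch below flags a sustained shift when eight consecutive points fall on the same side of the centre line; the eight-point threshold is one widely used convention (not the only one), and the data are invented:

```python
def sustained_shift(points, centre, run=8):
    """Flag `run` consecutive points on the same side of the centre line."""
    above = [p > centre for p in points]
    return any(all(above[i:i + run]) or not any(above[i:i + run])
               for i in range(len(above) - run + 1))

# Eight invented points, all above a centre line of 10.0: a shift is flagged.
print(sustained_shift([10.2, 10.4, 10.3, 10.5, 10.6, 10.2, 10.4, 10.3], 10.0))
```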

Digital Six Sigma

In an effort to permanently minimize variation, Motorola has evolved the Six Sigma methodology to use information systems tools to make business improvements absolutely permanent. Motorola calls this effort Digital Six Sigma.

Criticism

Some companies that have embraced it have done poorly

The cartoonist Scott Adams featured Six Sigma in a Dilbert cartoon published on November 26, 2006. When the process is introduced to his company, Dilbert asks, "Why don't we jump on a fad that hasn't already been widely discredited?" The Dilbert character states, "Fortune magazine says... blah blah... most companies that used Six Sigma have trailed the S&P 500."


Dilbert was referring to an article in Fortune which stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since." The statement is attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)." The gist of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies."

Based on arbitrary standards

While 3.4 defects per million might work well for certain products and processes, it might not be ideal for others. A pacemaker might need higher standards, for example, whereas a direct mail advertising campaign might need lower ones. The basis and justification for choosing 6 as the number of standard deviations is not clearly explained.

What is Six Sigma?

Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects" in manufacturing and service-related processes. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy.

The goal of Six Sigma is to increase profits by eliminating the variability, defects and waste that undermine customer loyalty. Six Sigma can be understood/perceived at three levels:

1. Metric: 3.4 Defects Per Million Opportunities. DPMO allows you to take the complexity of the product/process into account. A rule of thumb is to consider at least three opportunities for a physical part/component (one for form, one for fit and one for function) in the absence of better considerations. Also, you want to be Six Sigma in the Critical to Quality characteristics and not the whole unit/characteristics. (A worked example follows this list.)

2. Methodology: DMAIC/DFSS structured problem solving roadmap and tools.

3. Philosophy: Reduce variation in your business and take customer-focused, data-driven decisions.
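A worked example of the metric level (item 1 above), using the three-opportunities rule of thumb and invented counts:

```python
# 50,000 parts, three opportunities each (form, fit, function), 27 defects.
units, opportunities_per_unit, defects = 50_000, 3, 27

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"{dpmo:.0f} DPMO")   # 180 DPMO for these invented counts
```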

Six Sigma is a methodology that provides businesses with the tools to improve the capability of their business processes. This increase in performance and decrease in process variation leads to defect reduction and vast improvement in profits, employee morale and quality of product.


The History of Six Sigma

The roots of Six Sigma as a measurement standard can be traced back to Carl Friedrich Gauss (1777-1855), who introduced the concept of the normal curve. Six Sigma as a measurement standard in product variation can be traced back to the 1920s, when Walter Shewhart showed that three sigma from the mean is the point where a process requires correction. Many measurement standards (Cpk, Zero Defects, etc.) later came on the scene, but credit for coining the term "Six Sigma" goes to a Motorola engineer named Bill Smith. (Incidentally, "Six Sigma" is a federally registered trademark of Motorola.)

In the early and mid-1980s, with Chairman Bob Galvin at the helm, Motorola engineers decided that the traditional quality levels, measuring defects in thousands of opportunities, didn't provide enough granularity. Instead, they wanted to measure the defects per million opportunities. Motorola developed this new standard and created the methodology and the cultural change associated with it. Six Sigma helped Motorola realize powerful bottom-line results; in fact, they documented more than $16 billion in savings as a result of their Six Sigma efforts.

Since then, hundreds of companies around the world have adopted Six Sigma as a way of doing business. This is a direct result of many of America's business leaders openly praising the benefits of Six Sigma, leaders such as Larry Bossidy of Allied Signal (now Honeywell) and Jack Welch of General Electric Company. Rumor has it that Larry and Jack were playing golf one day and Jack bet Larry that he could implement Six Sigma faster and with greater results at GE than Larry did at Allied Signal. The results speak for themselves.

Six Sigma has evolved over time. It is more than just a quality system like TQM or ISO; it is a way of doing business. As Geoff Tennant describes in his book Six Sigma: SPC and TQM in Manufacturing and Services: "Six Sigma is many things, and it would perhaps be easier to list all the things that Six Sigma quality is not. Six Sigma can be seen as: a vision; a philosophy; a symbol; a metric; a goal; a methodology."


Software used for Six Sigma

There are generally two classes of software used to support Six Sigma: analysis tools, which are used to perform statistical or process analysis, and program management tools, used to manage and track a corporation's entire Six Sigma program. Analysis tools include statistical software such as Minitab, JMP, SigmaXL, RapAnalyst or Statgraphics, as well as process analysis tools such as iGrafx. Some alternatives include Microsoft Visio, Telelogic System Architect, IBM WebSphere Business Modeler, and Proforma Corp. ProVision. For program management, tracking and reporting, the most popular tools are Instantis, PowerSteering, iNexus and SixNet. Other Six Sigma tools for IT management include Proxima Technology Centauri, HP Mercury, and BMC Remedy.

1. Six Sigma was industry-specific.

2. The average was very subjective in nature; it was very difficult to define the average.

3. There were problems in determining whether Six Sigma had been achieved or not.

Where did the name “Six Sigma” come from?

In my recollection, two recurring questions have dominated the field of Six Sigma. The first inquiry can be described by the global question: "Why 6σ and not some other level of capability?" The second inquiry is more molecular. It can be summarized by the question: "Where does the 1.5σ shift factor come from, and why 1.5 versus some other magnitude?" For details on this subject, refer to Harry, M. J., "Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale," First Edition 2003, Palladyne Publishing, Phoenix, Arizona. (Note: this particular publication will be available by October 2003.) But until then, we will consider the following thumbnail sketch.

At the onset of Six Sigma in 1985, this writer was working as an engineer for the Government Electronics Group of Motorola. By chance connection, I linked up with another engineer by the name of Bill Smith (originator of the Six Sigma concept in 1984). At that time, he suggested Motorola should require 50 percent design margins for all of its key product performance specifications. Statistically speaking, such a "safety margin" is equivalent to a 6 sigma level of capability.

When considering the performance tolerance of a critical design feature, he believed a 25 percent "cushion" was not sufficient for absorbing a sudden shift in process centering. Bill believed the typical shift was on the order of 1.5σ (relative to the target value). In other words, a four sigma level of capability would normally be considered sufficient, if centered. However, if the process center was somehow knocked off its central location (on the order of 1.5σ), the initial capability of 4σ would be degraded to 4.0σ - 1.5σ = 2.5σ. Of course, this would have a consequential impact on defects. In turn, a sudden increase in defects would have an adverse effect on reliability. As should be apparent, such a domino effect would continue straight up the value chain.

Regardless of the shift magnitude, those of us working this issue fully recognized that the initial estimate of capability will often erode over time in a "very natural way", thereby increasing the expected rate of product defects (when considering a protracted period of production). Extending beyond this, we concluded that the product defect rate was highly correlated to the long-term process capability, not the short-term capability. Of course, such conclusions were predicated on the statistical analysis of empirical data gathered on a wide array of electronic devices.

Thus, we come to understand three things. First, we recognized that the instantaneous reproducibility of a critical-to-quality characteristic is fully dependent on the "goodness of fit" between the operating bandwidth of the process and the corresponding bandwidth of the performance specification. Second, the quality of that interface can be substantively and consequentially disturbed by process centering error. Of course, both of these factors profoundly impact long-term capability. Third, we must seek to qualify our critical processes at a 6σ level of short-term capability if we are to enjoy a long-term capability of 4.5σ.

By further developing these insights through applied research, we were able to greatly extend our understanding of the many statistical connections between such things as design margin, process capability, defects, field reliability, customer satisfaction, and economic success.

Statistical Six Sigma Definition

Six Sigma at many organizations simply means a measure of quality that strives for near perfection. But the statistical implications of a Six Sigma program go well beyond the qualitative eradication of customer-perceptible defects. It is a methodology that is well rooted in mathematics and statistics.

The objective of Six Sigma quality is to reduce process output variation so that, on a long-term basis (the customer's aggregate experience with our process over time), this will result in no more than 3.4 defective Parts Per Million (PPM) opportunities (or 3.4 Defects Per Million Opportunities, DPMO). For a process with only one specification limit (upper or lower), this results in six process standard deviations between the mean of the process and the customer's specification limit (hence, 6 Sigma). For a process with two specification limits (upper and lower), this translates to slightly more than six process standard deviations between the mean and each specification limit, such that the total defect rate corresponds to the equivalent of six process standard deviations.
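That last statement can be checked numerically. With the mean shifted 1.5 sigma toward one limit of a +/-6 sigma specification, the far tail is negligible and the total defect rate is still about 3.4 DPMO; a minimal sketch, assuming scipy is available:

```python
from scipy.stats import norm

# Two-sided spec at +/-6 sigma; process mean shifted 1.5 sigma toward the USL.
near_tail = norm.sf(6.0 - 1.5)   # ~3.4e-6
far_tail = norm.sf(6.0 + 1.5)    # ~3.2e-14, vanishingly small
print(f"total defect rate = {(near_tail + far_tail) * 1e6:.2f} DPMO")
```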


FIGURE 3.7

Many processes are prone to being influenced by special and/or assignable causes that impact the overall performance of the process relative to the customer's specification. That is, the overall performance of our process as the customer views it might be 3.4 DPMO (corresponding to long-term performance of 4.5 sigma). However, our process could indeed be capable of producing a near perfect output (short-term capability, also known as process entitlement, of 6 sigma). The difference between the "best" a process can be, measured by short-term process capability, and the customer's aggregate experience (long-term capability) is known as Shift, depicted as Zshift or σshift. For a "typical" process, the value of the shift is 1.5; therefore, when one hears about "6 Sigma", inherent in that statement is that the short-term capability of the process is 6, the long-term capability is 4.5 (3.4 DPMO, what the customer sees), with an assumed shift of 1.5. Typically, when a reference is given using DPMO, it denotes the long-term capability of the process, which is the customer's experience. The role of the Six Sigma professional is to quantify the process performance (short-term and long-term capability) and, based on the true process entitlement and process shift, establish the right strategy to reach the established performance objective.

As the process sigma value increases from zero to six, the variation of the process around the mean value decreases. With a high enough value of process sigma, the process approaches zero variation and is known as 'zero defects.'


Remembering Bill Smith, Father of Six Sigma

Born in Brooklyn, New York, in 1929, Bill Smith graduated from the U.S. Naval Academy in 1952 and studied at the University of Minnesota School of Business. In 1987, after working for nearly 35 years in engineering and quality assurance, he joined Motorola, serving as vice president and senior quality assurance manager for the Land Mobile Products Sector.

In honor of Smith's talents and dedication, Northwestern University's Kellogg Graduate School of Management established an endowed scholarship in Smith's name. Dean Donald P. Jacobs of the Kellogg School notified Motorola's Robert Galvin of the school's intention less than a month after Smith died. "Bill was an extremely effective and inspiring communicator," Jacobs wrote in his July 27, 1993, letter. "He never failed to impress his audience by the depth of his knowledge, the extent of his personal commitment, and the level of his intellectual powers." The school created the scholarship fund in recognition of Smith's "contributions to Kellogg and his dedication to the teaching and practice of quality."

It was a fitting tribute to a man who influenced business students and corporate leaders worldwide with his innovative Six Sigma strategy.

As the one who followed most closely in his footsteps, Marjorie Hook is well positioned to speculate about Bill Smith's take on the 2003 version of Six Sigma. "Today I think people sometimes try to make Six Sigma seem complicated and overly technical," she said. "His approach was, 'If you want to improve something, involve the people who are doing the job.' He always wanted to make it simple so people would use it."

Six Sigma Costs and Savings

The financial benefits of implementing Six Sigma at your company can be significant.

Many people say that it takes money to make money. In the world of Six Sigma quality, the saying also holds true: it takes money to save money using the Six Sigma quality methodology. You can't expect to significantly reduce costs and increase sales using Six Sigma without investing in training, organizational infrastructure and culture evolution.

Sure, you can reduce costs and increase sales in a localized area of a business using the Six Sigma quality methodology, and you can probably do it inexpensively by hiring an ex-Motorola or GE Black Belt. I like to think of that scenario as a "get rich quick" application of Six Sigma.


"Companies of all types and sizes are in the midst of a quality revolution. GE saved $12 billion over five years and added $1 to its earnings per share. Honeywell (AlliedSignal) recorded more than $800 million in savings."

"GE produces annual benefits of over $2.5 billion across the organization from Six Sigma."

"Motorola reduced manufacturing costs by $1.4 billion from 1987-1994."

"Six Sigma reportedly saved Motorola $15 billion over the last 11 years."

The above quotations may in fact be true, but pulling the numbers out of the context of the organization's revenues does nothing to help a company figure out if Six Sigma is right for it. For example, how much can a $10 million or $100 million company expect to save?

I investigated what the companies themselves had to say about their Six Sigma costs and savings; I didn't believe anything that was written on third-party websites, estimated by "experts," or written in books on the topic. I reviewed the literature and only captured facts found in annual reports, website pages and presentations found on company websites.

I investigated Motorola, Allied Signal, GE and Honeywell. I chose these four companies because they are the companies that invented and refined Six Sigma: they are the most mature in their deployments and culture changes. As the Motorola website says, they invented it in 1986. Allied Signal deployed Six Sigma in 1994, GE in 1995. Honeywell was included because Allied Signal merged with Honeywell in 1999 (they launched their own initiative in 1998). Many companies have deployed Six Sigma between the years of GE and Honeywell; we'll leave those companies for another article.

Table 3.8: Companies And The Year They Implemented Six Sigma

Company Name                                     Year Began Six Sigma
Motorola (NYSE:MOT)                              1986
Allied Signal (merged with Honeywell in 1999)    1994
GE (NYSE:GE)                                     1995
Honeywell (NYSE:HON)                             1998
Ford (NYSE:F)                                    2000

Table 3.9 identifies, by company, the yearly revenues, the Six Sigma costs (investment) per year, where available, and the financial benefits (savings). There are many blanks, especially where the investment is concerned. I've presented as much information as the companies have publicly disclosed.

Table 3.9: Six Sigma Cost And Savings By Company

Company / Year        Revenue ($B)  Invested ($B)  % Revenue Invested  Savings ($B)  % Revenue Savings
Motorola 1986-2001    356.9 (e)     ND             -                   16            4.5
Allied Signal 1998    15.1          ND             -                   0.5           3.3
GE 1996               79.2          0.2            0.3                 0.2           0.2
GE 1997               90.8          0.4            0.4                 1             1.1
GE 1998               100.5         0.5            0.4                 1.3           1.2
GE 1999               111.6         0.6            0.5                 2             1.8
GE 1996-1999          382.1         1.6            0.4                 4.4           1.2
Honeywell 1998        23.6          ND             -                   0.5           2.2
Honeywell 1999        23.7          ND             -                   0.6           2.5
Honeywell 2000        25.0          ND             -                   0.7           2.6
Honeywell 1998-2000   72.3          ND             -                   1.8           2.4
Ford 2000-2002        43.9          ND             -                   1             2.3

Key: $B = $ billions, United States; (e) = estimated (yearly revenue for 1986-1992 could not be found); ND = not disclosed. Numbers are rounded to the nearest tenth.

Although the complete picture of investment and savings by year is not present, Six Sigma savings can clearly be significant to a company. The savings as a percentage of revenue vary from 1.2% to 4.5%. And what we can see from the GE deployment is that a company shouldn't expect more than a breakeven in the first year of implementation. Six Sigma is not a "get rich quick" methodology. I like to think of it like my retirement savings plan: Six Sigma is a get-rich-slow methodology, the take-away point being that you will get rich if you plan properly and execute consistently.

As GE's 1996 annual report states, "It has been estimated that less than Six Sigma quality, i.e., the three-to-four Sigma levels that are average for most U.S. companies, can cost a company as much as 10-15% of its revenues. For GE, that would mean $8-12 billion." With GE's 2001 revenue of $111.6 billion, this would translate into $11.2-16.7 billion of savings. Although $2 billion worth of savings in 1999 is impressive, it appears that even GE hasn't yet been able to capture the losses due to poor quality; or maybe they're above the three-to-four Sigma levels that are the average for most U.S. companies?

In either case, 1.2-4.5% of revenue is significant and should catch the eye of any CEO or CFO. For a $30 million a year company, that can translate into between $360,000 and $1,350,000 in bottom-line-impacting savings per year. It takes money to make money.

Complementary Technologies

It is difficult to concisely describe the ways in which Six Sigma may be interwoven with other initiatives (or vice versa). The following paragraphs broadly capture some of the possible interrelationships between initiatives.

Six Sigma and improvement approaches such as CMM, CMMI, and PSP/TSP are complementary and mutually supportive. Depending on current organizational, project or individual circumstances, Six Sigma could be an enabler to launch CMM, CMMI, PSP, or TSP. Or, it could be a refinement toolkit/methodology within these initiatives. For instance, it might be used to select the highest priority Process Areas within CMMI or to select the highest leverage metrics within PSP.

Examination of the Goal-Question-Metric (GQM), Initiating-Diagnosing-Establishing-Acting-Leveraging (IDEAL), and Practical Software Measurement (PSM) paradigms likewise shows compatibility and consistency with Six Sigma. GQ(I)M meshes well with the Define-Measure steps of Six Sigma. IDEAL and Six Sigma share many common features, with IDEAL being slightly more focused on change management and organizational issues and Six Sigma being more focused on tactical, data-driven analysis and decision making. PSM provides a software-tailored approach to measurement that may well serve the Six Sigma improvement framework.


Six Sigma Process’ Capability

So how do you know whether your processes cut the mustard? With Six Sigma, it all depends on the process's capability. Process capability is a measure of how much variation or deviation occurs between what normally happens and what is expected to happen. For example, normally after you upgrade a system, you have a working system. If you have one piece working and a couple of other pieces broken, you now have a variation from what was expected to happen. To elaborate a little further, there are three main characteristics of process capability:

• The requirements are frequently a range of acceptable values.

• The process is capable when its variation consistently falls within that range.

• The process's sigma level is an indicator of its capability and likelihood of meeting expectations.

Expanding on this concept a little, to ensure we all have a clearer understanding of how this relates to us: the first characteristic states that the requirements are frequently a range of acceptable values. For example, the billing system is always to be available on weekdays between the hours of 6:00 am and 6:00 pm, so our customers will be satisfied when the system is available during these timeframes. The problem really arises with clinical systems, where the system availability needs to be 100% of the time. That is a difficult range of acceptable values.

The next characteristic of process capability is that a process is considered capable when its variation consistently falls within that range of acceptable values. Keep in mind, variation means the process doesn't behave as anticipated. Consider the above: if the billing system is down at 1:00 pm weekly, this is outside the range of acceptable times. It is a variation, and it means the process is not considered capable.

The final characteristic of process capability is the process's sigma level. Remember, the lower the sigma level, the greater the variation. A 1.0 sigma level indicates that the process has a good deal of variation and is not meeting requirements. In Six Sigma, a 6.0 sigma level is the goal.

What Is Six Sigma and the 1.5 shift?

To quote a Motorola handout from about 1987:

'The performance of a product is determined by how much margin exists between the design requirement of its characteristics (and those of its parts/steps) and the actual value of those characteristics. These characteristics are produced by processes in the factory and at the suppliers.


Each process attempts to reproduce its characteristics identically from unit tounit, but within each process some variation occurs. For more processes, such as thosewhich use real time feedback to control outcome, the variation is quite small, and forothers it may be quite large.

The variation of the process is measured in standard deviations (Sigma) from the mean. The normal variation, defined as the process width, is +/- 3 Sigma about the mean.

Approximately 2700 parts per million parts/steps will fall outside the normal variation of +/- 3 Sigma. This, by itself, does not appear disconcerting. However, when we build a product containing 1200 parts/steps, we can expect 3.24 defects per unit (1200 x 0.0027), on average. This would result in a rolled yield of less than 4%, which means fewer than 4 units out of every 100 would go through the entire manufacturing process without a defect. Thus, we can see that for a product to be built virtually defect-free, it must be designed to accept characteristics which are significantly more than +/- 3 Sigma away from the mean.

It can be shown that a design which can accept twice the normal variation of the process, or +/- 6 Sigma, can be expected to have no more than 3.4 parts per million defective for each characteristic, even if the process mean were to shift by as much as +/- 1.5 Sigma. In the same case of a product containing 1200 parts/steps, we would now expect only 0.0041 defects per unit (1200 x 0.0000034). This would mean that 996 units out of 1000 would go through the entire manufacturing process without a defect. To quantify this, the Capability Index (Cp) is used, where:

Capability Index Cp = Design Specification Width / Process Width

A design specification width of +/- 6 Sigma and a process width of +/- 3 Sigma yields a Cp of 12/6 = 2. However, the process mean can shift. When the process mean is shifted with respect to the design mean, the Capability Index is adjusted with a factor k and becomes Cpk. Cpk = Cp(1 - k), where:

k Factor = Process Shift / (Design Specification Width / 2)


The k factor for a +/- 6 Sigma design with a 1.5 Sigma process shift is

k = 1.5 / (12/2) = 1.5/6 = 0.25

and the

Cpk = 2(1 - 0.25) = 1.5
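The arithmetic above can be checked with a short Python sketch. This is an illustration added for this discussion, not part of the Motorola handout; it uses the handout's own figures (a +/- 6 Sigma design, a +/- 3 Sigma process, a 1.5 Sigma shift, and a 1200 part/step product):

import math

def upper_tail(z):
    # P(Z > z) for a standard normal variable
    return 0.5 * math.erfc(z / math.sqrt(2))

spec_width = 12.0   # +/- 6 Sigma design specification width
proc_width = 6.0    # +/- 3 Sigma normal process width
shift = 1.5         # assumed long-term process mean shift, in Sigma

cp = spec_width / proc_width    # Cp = 12/6 = 2.0
k = shift / (spec_width / 2)    # k = 1.5/6 = 0.25
cpk = cp * (1 - k)              # Cpk = 2(1 - 0.25) = 1.5

# Defective fraction per characteristic with the mean shifted 1.5 Sigma:
p_6sigma = upper_tail(6 - shift) + upper_tail(6 + shift)   # ~3.4e-6 (3.4 ppm)

# Rolled yields for a product of 1200 parts/steps:
p_3sigma = 2 * upper_tail(3)                               # ~0.0027
print(cp, k, cpk)                  # 2.0 0.25 1.5
print((1 - p_3sigma) ** 1200)      # ~0.039: fewer than 4 good units in 100
print((1 - p_6sigma) ** 1200)      # ~0.996: about 996 good units in 1000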

Six Sigma (6σ) is a business-driven, multi-faceted approach to process improvement, reduced costs, and increased profits. With a fundamental principle to improve customer satisfaction by reducing defects, its ultimate performance target is virtually defect-free processes and products (3.4 or fewer defective parts per million (ppm)). The Six Sigma methodology, consisting of the steps "Define - Measure - Analyze - Improve - Control," is the roadmap to achieving this goal. Within this improvement framework, it is the responsibility of the improvement team to identify the process, the definition of defect, and the corresponding measurements. This degree of flexibility enables the Six Sigma method, along with its toolkit, to easily integrate with existing models of software process implementation.

Six Sigma originated at Motorola in the early 1980s in response to a CEO-driven challenge to achieve a tenfold reduction in product-failure levels in five years. Meeting this challenge required swift and accurate root-cause analysis and correction. In the mid-1990s, Motorola divulged the details of their quality improvement framework, which has since been adopted by several large manufacturing companies.

Technical Detail

The primary goal of Six Sigma is to improve customer satisfaction, and thereby profitability, by reducing and eliminating defects. Defects may be related to any aspect of customer satisfaction: high product quality, schedule adherence, cost minimization. Underlying this goal is the Taguchi Loss Function, which shows that increasing defects leads to increased customer dissatisfaction and financial loss. Common Six Sigma metrics include defect rate (parts per million or ppm), sigma level, process capability indices, defects per unit, and yield. Many Six Sigma metrics can be mathematically related to the others.

The Six Sigma drive for defect reduction, process improvement and customer satisfaction is based on the "statistical thinking" paradigm [ASQ 00], [ASA 01]:

• Everything is a process

• All processes have inherent variability

• Data is used to understand the variability and drive process improvement decisions


As the roadmap for actualizing the statistical thinking paradigm, the key steps in the Six Sigma improvement framework are Define - Measure - Analyze - Improve - Control. Six Sigma distinguishes itself from other quality improvement programs immediately in the "Define" step. When a specific Six Sigma project is launched, the customer satisfaction goals have likely been established and decomposed into subgoals such as cycle time reduction, cost reduction, or defect reduction. (This may have been done using the Six Sigma methodology at a business/organizational level.) The Define stage for the specific project calls for baselining and benchmarking the process to be improved, decomposing the process into manageable sub-processes, further specifying goals/sub-goals and establishing infrastructure to accomplish the goals. It also includes an assessment of the cultural/organizational change that might be needed for success.

Once an effort or project is defined, the team methodically proceeds through the Measurement, Analysis, Improvement, and Control steps. A Six Sigma improvement team is responsible for identifying relevant metrics based on engineering principles and models. With data/information in hand, the team then proceeds to evaluate the data/information for trends, patterns, causal relationships, "root cause," etc. If needed, special experiments and modeling may be done to confirm hypothesized relationships or to understand the extent of leverage of factors; but many improvement projects may be accomplished with the most basic statistical and non-statistical tools. It is often necessary to iterate through the Measure-Analyze-Improve steps. When the target level of performance is achieved, control measures are then established to sustain performance. A partial list of specific tools to support each of these steps is shown in Figure 3.8.

FIGURE 3.8 Tools supporting the Define - Measure - Analyze - Improve - Control steps


An important consideration throughout all the Six Sigma steps is to distinguish which process substeps significantly contribute to the end result. The defect rate of the process, service or final product is likely more sensitive to some factors than others. The analysis phase of Six Sigma can help identify the extent of improvement needed in each substep in order to achieve the target in the final product. It is important to remain mindful that six sigma performance (in terms of the ppm metric) is not required for every aspect of every process, product and service. It is the goal only where it quantitatively drives (i.e., is a significant "control knob" for) the end result of customer satisfaction and profitability.

The current industry average runs at four sigma, which corresponds to 6210 defects per million opportunities. Depending on the exact definition of "defect" in payroll processing, for example, this sigma level could be interpreted as 6 out of every 1000 paychecks having an error. As "four sigma" is the average current performance, there are industry sectors running above and below this value. Internal Revenue Service (IRS) phone-in tax advice, for instance, runs at roughly two sigma, which corresponds to 308,537 errors per million opportunities. Again, depending on the exact definition of defect, this could be interpreted as 30 out of 100 phone calls resulting in erroneous tax advice. ("Two sigma" performance is where many noncompetitive companies run.) At the other extreme, domestic (U.S.) airline flight fatality rates run at better than six sigma, which could be interpreted as fewer than 3.4 fatalities per million passengers - that is, fewer than 0.00034 fatalities per 100 passengers.

As just noted, flight fatality rates are "better than six sigma," where "six sigma" denotes the actual performance level rather than a reference to the overall combination of philosophy, metric, and improvement framework. Because customer demands will likely drive different performance expectations, it is useful to understand the mathematical origin of the measure and the term "six-sigma process." Conceptually, the sigma level of a process or product is where its customer-driven specifications intersect with its distribution. A centered six-sigma process has a normal distribution with mean = target and specifications placed 6 standard deviations to either side of the mean. At this point, the portions of the distribution that are beyond the specifications contain 0.002 ppm of the data (0.001 on each side). Practice has shown that most manufacturing processes experience a shift (due to drift over time) of 1.5 standard deviations, so that the mean no longer equals the target. When this happens in a six-sigma process, a larger portion of the distribution now extends beyond the specification limits: 3.4 ppm.
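The sigma levels and ppm figures quoted in the last two paragraphs can be reproduced with a brief Python sketch (an added illustration, not part of the source notes; it assumes the conventional 1.5 sigma shift):

import math

def dpmo(sigma_level, shift=1.5):
    # Defects per million opportunities, counting only the tail beyond
    # the specification limit nearer to the shifted mean (the usual
    # convention behind published sigma-level tables)
    z = sigma_level - shift
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

for level in (2, 4, 6):
    print(level, round(dpmo(level), 1))
# 2 -> 308537.5   (the IRS phone-in tax advice example)
# 4 -> 6209.7     (the current industry average)
# 6 -> 3.4        (the Six Sigma target)

# A perfectly centered six-sigma process, for comparison:
print(math.erfc(6 / math.sqrt(2)) * 1e6)   # ~0.002 ppm (0.001 on each side)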

The figure below depicts a 1.5σ-shifted distribution with "6σ" annotations. In manufacturing, this shift results from things such as mechanical wear over time and causes the six-sigma defect rate to become 3.4 ppm. The magnitude of the shift may


vary, but empirical evidence indicates that 1.5σ is about average. Does this shift exist in the software process? While it will take time to build sufficient data repositories to verify this assumption within the software and systems sector, it is reasonable to presume that there are factors that would contribute to such a shift. Possible examples are declining procedural adherence over time, learning curve, and constantly changing tools and technologies (hardware and software).

FIGURE 3.9

Assumptions:

• Normal Distribution

• Process Mean Shift of 1.5σ from Nominal is Likely

• Process Mean and Standard Deviation are known

• Defects are randomly distributed throughout units

• Parts and Process Steps are Independent

• For this discussion, original nominal value = target

Key:

σ = standard deviation
µ = center of the distribution (shifted 1.5σ from its original, on-target location)
+/- 3σ and +/- 6σ show the specifications relative to the original target

Figure: Six Sigma Process with Mean Shifted from Nominal by 1.5σ

Usage Considerations

In the software and systems field, Six Sigma may be leveraged differently based on the state of the business. In an organization needing process consistency, Six Sigma


can help promote the establishment of a process. For an organization striving to streamline its existing processes, Six Sigma can be used as a refinement mechanism.

In organizations at CMM levels 1-3, "defect free" may seem an overwhelming stretch. Accordingly, an effective approach would be to use the improvement framework ("Define-Measure-Analyze-Improve-Control") as a roadmap toward intermediate defect reduction goals. Level 1 and 2 organizations may find that adopting the Six Sigma philosophy and framework reinforces their efforts to launch measurement practices, whereas Level 3 organizations may be able to begin immediate use of the framework. As organizations mature to Levels 4 and 5, which implies an ability to leverage established measurement practices, accomplishment of true "six sigma" performance (as defined by ppm defect rates) becomes a relevant goal.

Many techniques in the Six Sigma toolkit are directly applicable to software and are already in use in the software industry. For instance, "Voice of the Client" and "Quality Function Deployment" are useful for developing customer requirements (and are relevant measures). There are numerous charting/calculation techniques that can be used to scrutinize cost, schedule, and quality (project-level and personal-level) data as a project proceeds. And, for technical development, there are quantitative methods for risk analysis and concept/design selection. The strength of Six Sigma comes from consciously and methodically deploying these tools in a way that achieves (directly or indirectly) customer satisfaction.

As with manufacturing, it is likely that Six Sigma applications in software will reach beyond "improvement of current processes/products" and extend to "design of new processes/products." Named "Design for Six Sigma" (DFSS), this extension heavily utilizes tools for customer requirements, risk analysis, design decision-making and inventive problem solving. In the software world, it would also heavily leverage re-use libraries that consist of robustly designed software.

Maturity

Six Sigma is rooted in fundamental statistical and business theory; consequently, the concepts and philosophy are very mature. Applications of Six Sigma methods in manufacturing, following on the heels of many quality improvement programs, are likewise mature. Applications of Six Sigma methods in software development and other "upstream" (from manufacturing) processes are emerging.


Costs and Limitations

Institutionalizing Six Sigma into the fabric of a corporate culture can require significant investment in training and infrastructure. There are typically three different levels of expertise cited by companies: Green Belt, Black Belt Practitioner, and Master Black Belt. Each level has increasingly greater mastery of the skill set. Roles and responsibilities also grow from each level to the next, with Black Belt Practitioners often in team/project leadership roles and Master Black Belts often in mentoring/teaching roles. The infrastructure needed to support the Six Sigma environment varies. Some companies organize their trained Green/Black Belts into a central support organization. Others deploy Green/Black Belts into organizations based on project needs and rely on communities of practice to maintain cohesion.

Alternatives

Over the years, there have been many instances and evolutions of quality improvement programs. Scrutiny of these programs will show much similarity and also clear distinctions between such programs and Six Sigma. Similarities include common tools and methods, concepts of continuous improvement, and even analogous steps in the improvement framework. Differences have been articulated as follows:

• Six Sigma speaks the language of business. It specifically addresses the concept of making the business as profitable as possible.

• In Six Sigma, quality is not pursued independently from business goals. Time and resources are not spent improving something that is not a lever for improving customer satisfaction.

• Six Sigma focuses on achieving tangible results.

• Six Sigma does not include specific integration of ISO 9000 or Malcolm Baldrige National Quality Award criteria.

• Six Sigma uses an infrastructure of highly trained employees from many sectors of the company (not just the Quality Department). These employees are typically viewed as internal change agents.

• Six Sigma raises the expectation from 3-sigma performance to 6-sigma. Yet, it does not promote "Zero Defects," which many people dismiss as "impossible."


3.5 RELIABILITY CONCEPTS – DEFINITIONS, RELIABILITY IN SERIES AND PARALLEL

Definition

In general, reliability (systemic definition) is the ability of a system to perform and maintain its functions in routine circumstances, as well as in hostile or unexpected circumstances.

The IEEE defines it as ". . . the ability of a system or component to perform its required functions under stated conditions for a specified period of time."

In natural language it may also denote persons who act efficiently at the proper moments/circumstances (infallible).

Importance of Reliability

What is Reliability?

Reliability is a broad term that focuses on the ability of a product to perform its intended function. Mathematically speaking, assuming that an item is performing its intended function at time equals zero, reliability can be defined as the probability that an item will continue to perform its intended function without failure for a specified period of time under stated conditions. Please note that the product defined here could be an electronic or mechanical hardware product, a software product, a manufacturing process, or even a service.
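Since the section heading also covers reliability in series and parallel, the standard textbook formulas can be sketched in Python (an added illustration, assuming independent components: a series system works only if every component works, while a parallel system fails only if every component fails; the component values are hypothetical):

from math import prod

def reliability_series(rs):
    # R_series = R1 * R2 * ... * Rn
    return prod(rs)

def reliability_parallel(rs):
    # R_parallel = 1 - (1 - R1)(1 - R2)...(1 - Rn)
    return 1 - prod(1 - r for r in rs)

rs = [0.95, 0.90, 0.99]            # hypothetical component reliabilities
print(reliability_series(rs))      # ~0.846: worse than the weakest component
print(reliability_parallel(rs))    # ~0.99995: redundancy raises reliability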

Why is Reliability Important?

There are a number of reasons why product reliability is an important product attribute, including:

• Reputation. A company's reputation is very closely related to the reliability of its products. The more reliable a product is, the more likely the company is to have a favorable reputation.

• Customer Satisfaction. While a reliable product may not dramatically affect customer satisfaction in a positive manner, an unreliable product will negatively affect customer satisfaction severely. Thus high reliability is a mandatory requirement for customer satisfaction.

• Warranty Costs. If a product fails to perform its function within the warranty period, the replacement and repair costs will negatively affect profits, as well as gain unwanted negative attention. Introducing reliability analyses is an important step in taking corrective action, ultimately leading to a product that is more reliable.


• Repeat Business. A concentrated effort towards improved reliability shows existing customers that a manufacturer is serious about its product and committed to customer satisfaction. This type of attitude has a positive impact on future business.

• Cost Analysis. Manufacturers may take reliability data and combine it with other cost information to illustrate the cost-effectiveness of their products. This life cycle cost analysis can prove that although the initial cost of their product might be higher, the overall lifetime cost is lower than a competitor's because their product requires fewer repairs or less maintenance.

• Customer Requirements. Many customers in today's market demand that their suppliers have an effective reliability program. These customers have learned the benefits of reliability analysis from experience.

• Competitive Advantage. Many companies will publish their predicted reliability numbers to help gain an advantage over competitors who either do not publish their numbers or have lower numbers.

What is the Difference Between Quality and Reliability?

Even though a product has a reliable design, when the product is manufactured and used in the field, its reliability may be unsatisfactory. The reason for this low reliability may be that the product was poorly manufactured. So, even though the product has a reliable design, it is effectively unreliable when fielded, which is actually the result of a substandard manufacturing process. As an example, cold solder joints could pass initial testing at the manufacturer, but fail in the field as the result of thermal cycling or vibration. This type of failure did not occur because of an improper design; rather, it is the result of an inferior manufacturing process. So while this product may have a reliable design, its quality is unacceptable because of the manufacturing process.

Just as a chain is only as strong as its weakest link, a highly reliable product is only as good as the inherent reliability of the product and the quality of the manufacturing process.

Improving Product Reliability

Evaluating and finding ways to attain high product reliability are all aspects of reliability engineering. There are a number of types of reliability analyses typically performed as part of this discipline.


Quality and Reliability Requirements

Quality is associated with the degree of conformance of the product to customer requirements, and thus, in a sense, with the degree of customer satisfaction. Implicit in Japanese quality products is an acceptable amount of reliability; that is, the product performs its intended function over its intended life under normal environmental and operating conditions. Reliability assessments are incorporated through simulation and qualification functions at the design and prototyping stages. With basic reliability designed in, quality control functions are then incorporated into the production line using in-line process controls and reduced human intervention through automation. Since the mid-1980s, Japanese firms have found that automation leads to improved quality in manufacturing. They have high reliability because they control their manufacturing lines.

Reliability assurance tasks such as qualification are conducted (1) during the product design phase using analytical simulation methods and design-for-assembly software, and (2) during development using prototype or pilot hardware. Once again, it is the role of quality assurance to ensure reliability. Qualification includes activities to ensure that the nominal design and manufacturing specifications meet the reliability targets. In some cases, such as the Yamagata Fujitsu hard disk drive plant, qualification of the manufacturing processes and of the pilot lots are conducted together.

Quality conformance for qualified products is accomplished through monitoring and control of critical parameters within the acceptable variations already established, perhaps during qualification. Quality conformance, therefore, helps to increase product yield and consequently to lower product cost.

Quality Assurance in Electronic Packaging

Japan has a long history of taking lower-yield technology and improving it. In the United States, companies change technology if yields are considered too low. The continuous improvement of quad flat packs (QFPs), in contrast to the introduction of ball-grid arrays (BGAs), is an example of this difference. Both countries are concerned with the quality and reliability limits of fine-pitch surface mount products. The Japanese continue to excel at surface mount technologies (SMT) as they push fine-pitch development to its limits. Many Japanese companies are now producing QFPs with 0.5 mm pitch and expect to introduce 0.3 mm pitch packages within the next several years. As long as current QFP technology can be utilized in the latest product introductions, the cost of manufacturing is kept low and current SMT production lines can be utilized with minimal investment and with predictable quality and reliability results.


Japan's leaders in SMT have introduced equipment for placing very small and fine-pitch devices, for accurate screen printing, and for soldering. They have developed highly automated manufacturing and assembly lines with a high degree of in-line quality assurance. Thus, in terms of high-volume, rapid-turn-around, low-cost products, it is in their best interests to push the limits of surface mount devices. Furthermore, QFPs do not require new assembly methods and are inspectable, a factor critical to ensuring quality products.

The United States is aggressively pursuing BGA technology; Hitachi, however, appears to be applying an on-demand approach. It has introduced BGA in its recent supercomputer without any quality problems and feels comfortable moving to new technology when it becomes necessary. Since Hitachi's U.S. customers are demanding BGA for computer applications, Hitachi plans to provide BGA products. However, Dr. Otsuka of Meisei University, formerly with Hitachi, believes that for Japanese customers that are still cost driven, QFP packages will reach 0.15 mm pin pitch to be competitive with BGA in high-pin-count, low-cost applications. Dr. Otsuka believes that Japan's ability to continue using QFP will allow Japan to remain the low-cost electronic packaging leader for the remainder of this decade. Like the United States, Japan is pursuing BGA, but unlike the United States, Japan is continuing to improve SMT with confidence that it will meet most cost and functional requirements through the year 2000. Matsushita and Fujitsu are also developing bumped bare-chip technologies to provide for continued miniaturization demands.

Similar differences in technical concerns exist for wire bonding and known good die (KGD) technologies. The U.S. Semiconductor Industry Association's roadmap suggests a technology capability limit to wire bonding that is not shared by the Japanese component industry. The Japanese industry continues to develop new wire materials and attachment techniques that push the limits of wire bonding technologies. The Japanese consider concerns with KGD to be a U.S. problem caused by the lack of known good assembly (KGA); that is, U.S. firms lack the handling and assembly capability to assemble multiple-die packages in an automated, and thus high-quality, manner.

With productivity and cost reduction being the primary manufacturing goals, increased factory automation and reduced testing are essential strategies. As TDK officials explained to the JTEC panelists during their visit, inspection is a critical cost issue:


"It is TDK's QA goal to produce only quality products which need no inspection. At TDK, it is our goal to have no inspection at all, either human or machine. Our lowest labor cost in TDK is 32 yen per minute, or one yen every two seconds. If one multilayer semi-capacitor takes roughly one second to produce, then it costs about 0.6 yen in direct cost. If someone inspects it for two seconds, then we add 1.2 yen in inspection cost. That means we have to eliminate inspection to stay competitive. If we can reduce human and machine inspection, we can improve profits. Inspection does not add any value to the product."

Quality control is implemented in the manufacturing lines to ensure that the processes stay within specified tolerances. Monitoring, verification, and assessment of assembly and process parameters are integral parts of the manufacturing line. Quality control ensures that all process variabilities beyond a specified tolerance are identified and controlled.

The key focus of parameter variability appears to be on manufacturing process parameters and human errors and inadequacies, rather than on materials or parts. Incoming inspection is negligible because of the view that the quality of suppliers' products can be trusted, and perhaps more importantly because the inspection process is not considered cost-effective. The global move to ISO 9000 certification helps guarantee supplier quality to further reduce inspection costs.

Selection of specific quality control methods is dictated by product cost. Hidden costs associated with scheduling, handling, testing, and production yields become critical with increasing global competition. As more components are sourced from outside of Japan, these cost factors become increasingly crucial in maintaining competitive costs.

Automation and its impact on quality. The Japanese have determined that manual labor leads to poor-quality output and that automation leads to higher-quality output. Sony's automation activities have reduced defect rates from 2000 to 20 parts per million. Quality has, therefore, become a key driver for factory automation in Japan. In addition, factory automation adds the benefits of improving productivity and improving flexibility in scheduling the production or changeover of product types. Thus, whenever automation is cost-effective, it is used to replace manual assembly, handling, testing, and human inspection activities. This approach is applied to each new product and corresponding production line that is installed. For example, the old printed wiring board assembly line at Fujitsu's Yamagata plant used extensive manual inspection, while the new line is in a clean room and is totally automated, including final inspection and testing.

All of Nippondenso's plants have now implemented factory-wide CIM systems. The system at Kota uses factory-level data to meet quality standards and delivery


times. Boards are inserted into metal enclosures, sealed and marked, then burned-in and tested before shipping. Out of several hundred thousand units produced each month, only a couple of modules failed testing each month, according to JTEC's hosts.

Inspection and screening. As noted above, incoming inspection was negligible at most of the companies that the JTEC panel visited, because of the view that the quality of suppliers' products could be trusted. Since the 1950s, the Japanese government has set quality requirements for any company that exports products from Japan. Suppliers have progressed in status from being fully inspected by their customers to being fully accepted. Qualified suppliers are now the standard for Japan, and most problems come from non-Japanese suppliers. Akio Morita of Sony lamented that finding quality U.S. suppliers was a major challenge for the company. Japanese suppliers were part of the "virtual" company, with strong customer ties and a commitment to help customers succeed.

Components were not being screened at any of the printed wiring board (PWB) assembly, hard disk drive, or product assembly plants visited by the JTEC panel. Defects are seldom found in well-controlled and highly automated assembly lines. Where specific problems are found, tailored screens are implemented to address specific failure mechanisms at the board or product assembly level. For example, Fujitsu noted that today's components do not require burn-in, although at the board level it conducts some burn-in to detect solder bridges that occur during board assembly. But with the increasing cost of Japanese labor, the greatest pressure is to avoid unnecessary testing activities. Suppliers simply have to meet quality conformance standards to keep customers satisfied. Lack of conformance to requirements would be considered noncompetitive.

With reliable components, assemblers must concentrate their efforts on the assembly process. Within a company's own production lines, automated inspection is central to factory automation activities. Older lines, like the 3½-inch disk drive line the panel saw at Fujitsu, have extensive 100% manual inspection of PWBs. Fujitsu's new line has fully automated inspection and testing. At Ibiden, automated inspection is part of the automated manufacturing process as a technique for alignment and assembly as well as for tolerance assessment and defect detection. Microscopic mechanical dimensioning is conducted on a sample basis. The newer the line, the greater the automation of inspection and testing.

Reliability in Electronic Packaging

In terms of reliability, the Japanese proactively develop good design, using simulation and prototype qualification, based on advanced materials and packaging technologies. Instead of using military standards, most companies use internal commercial best practices. Most reliability problems are treated as materials or process problems.


Reliability prediction methods using models such as Mil-Hdbk-217 are not used. Instead, Japanese firms focus on the "physics of failure" by finding alternative materials or improved processes to eliminate the source of the reliability problem. The factories visited by the JTEC panel are well equipped to address these types of problems.

Assessment methods. Japanese firms identify the areas that need improvement for competitive reasons and target those areas for improvement. They don't try to fix everything; they are very specific. They continuously design products for reduced size and cost and use new technologies only when performance problems arise. As a result, most known technologies have predictable reliability characteristics.

Infrastructure. The incorporation of suppliers and customers early in the product development cycle has given Japanese companies an advantage in rapid development of components and in effective design of products. This is the Japanese approach to concurrent engineering and is a standard approach used by the companies the JTEC panel visited. The utilization of software tools like design for assembly allows for rapid design and is an integral part of the design team's activities. At the time of the panel's visit, design for disassembly was becoming a requirement for markets such as Germany. Suppliers are expected to make the required investments to provide the needed components for new product designs. Advanced factory automation is included in the design of new factories.

Training. The Japanese view of training is best exemplified by Nippondenso. The company runs its own two-year college to train production workers. Managers tend to hold four-year degrees from university engineering programs. Practical training in areas such as equipment design takes place almost entirely within the company. During the first six years of employment, engineers each receive 100 hours per year of formal technical training. In the sixth year, about 10% of the engineers are selected for extended education and receive 200 hours per year of technical training. After ten years, about 1% are selected to become future executives and receive additional education. By this time, these employees have earned the equivalent of a Ph.D. degree within the company. Management and business training is also provided for technical managers. In nonengineering fields, the fraction that become managers is perhaps 10%.

Ibiden uses "one-minute" and safety training sessions in every manufacturing sector. "One-minute" discussions are held by section leaders and workers using visual aids that are available in each section. The subjects are specific to facets of the job, like the correct way to use a tool or details about a specific step in the process. These daily events are intended to expose workers to additional knowledge and continuous training.


As a consequence, workers ensure that production criteria are met. Ibiden also employs a quality patrol that finds and displays examples of poor quality on large bulletin boards throughout the plant. Exhibits the panel saw included anything from pictures of components or board lots sitting around in corners, to damaged walls and floors, to ziplock bags full of dust and dirt.

The factory. Japanese factories pay attention to running equipment well, to continuous improvement, to cost reduction, and to waste elimination. Total preventive maintenance (TPM) is a methodology to ensure that equipment operates at its most efficient level and that facilities are kept clean so as not to contribute to reliability problems. In fact, the Japan Management Association gives annual TPM awards with prestige similar to the Deming Prize, and receipt of those awards is considered a required step for companies that wish to attain the Japan Quality Prize. No structured quality or reliability techniques are used - just detailed studies of operations, and automated, smooth-running, efficient production.

Safety concerns appeared to the JTEC panel to be secondary to efficiency considerations. While floor markings and signs direct workers to stay away from equipment, few barriers keep individuals away from equipment. In the newest production lines, sensors are used to warn individuals who penetrate into machine space, and the sensors even stop machines if individuals approach too closely. Factories provide workers with masks and hats rather than safety protection like eye wear. In most Japanese factories, street shoes are not allowed.

Most electronic firms the panel visited were in the process of meeting new environmental guidelines. Fujitsu removed CFCs from its cleaning processes in October 1993. CFCs were replaced by a deionized-water cleaning process. In the old assembly process, the amount of handling required for inspection reduced the impact of cleaning. The new line has no such problems.

To provide high reliability, Japanese firms create new products using fewer components, more automation, and flexible manufacturing technologies. For example, TDK is striving for 24-hour, nonassisted, flexible circuit card manufacturing using state-of-the-art high-density surface mounting techniques and integrated multifunction composite chips. It has developed true microcircuit miniaturization technologies that integrate 33 active and passive components on one chip. This will reduce the number of components required by customers during board assembly, thereby reducing potential assembly defects.

In addition, the application of materials and process know-how provides a fundamental competitive advantage in manufacturing products with improved quality


characteristics. Nitto Denko, for example, has developed low-dust pellets for use in molding compounds. Ibiden has developed an epoxy hardener to enhance peel strength, thus improving the reliability of its plating technology. The new process reduces cracking in the high-stress areas of small vias. Ibiden also uses epoxy dielectric for cost reduction and enhanced thermal conductivity of its MCM-D substrate. At the time of the JTEC visit, the company was also attempting to reduce solder resist height in an effort to improve the quality and ease of additive board assembly. It believes that a product with a resist 20 mils higher than the copper trace can eliminate solder bridging. Sony developed adhesive bonding technologies in order to improve the reliability and automation of its optical pickup head assembly. It set the parameters for surface preparation, bonding agents, and process controls. Sony used light ray cleaning to improve surface wettability and selected nine different bonding agents for joining various components in the pickup head. It now produces some 60% of the world's optical pickup assemblies. The continuous move to miniaturization will keep the pressure on Japanese firms to further develop both their materials and process capabilities.

3.6 TOTAL PRODUCTIVE MAINTENANCE (TPM), RELEVANCE TO TQM

What is Total Productive Maintenance (TPM)?

It can be considered as the medical science of machines. Total Productive Maintenance (TPM) is a maintenance program which involves a newly defined concept for maintaining plants and equipment. The goal of the TPM program is to markedly increase production while, at the same time, increasing employee morale and job satisfaction.

TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no longer regarded as a non-profit activity. Down time for maintenance is scheduled as a part of the manufacturing day and, in some cases, as an integral part of the manufacturing process. The goal is to hold emergency and unscheduled maintenance to a minimum.

Why TPM ?

TPM was introduced to achieve the following objectives. The important ones are listed below.

• Avoid wastage in a quickly changing economic environment.

• Produce goods without reducing product quality.

• Reduce cost.


• Produce a low batch quantity at the earliest possible time.

• Goods sent to the customers must be non-defective.

Similarities and differences between TQM and TPM

The TPM program closely resembles the popular Total Quality Management (TQM) program. Many of the tools used in TQM, such as employee empowerment, benchmarking, and documentation, are used to implement and optimize TPM. The following are the similarities between the two.

1. Total commitment to the program by upper level management is required in both programmes,

2. Employees must be empowered to initiate corrective action, and

3. A long range outlook must be accepted, as TPM may take a year or more to implement and is an on-going process. Changes in employee mind-set toward their job responsibilities must take place as well.

The differences between TQM and TPM are summarized below.

TABLE 3.10

Category                  TQM                              TPM

Object                    Quality (output and effects)     Equipment (input and cause)

Means of attaining goal   Systematize the management;      Employee participation;
                          software oriented                hardware oriented

Target                    Quality for PPM                  Elimination of losses and wastes

Types of maintenance

1. Breakdown maintenance

It means that people wait until equipment fails and then repair it. Such an approach can be used when the equipment failure does not significantly affect the operation or production, or generate any significant loss other than the repair cost.


2. Preventive maintenance ( 1951 )

It is daily maintenance (cleaning, inspection, oiling and re-tightening), designed to retain the healthy condition of equipment and prevent failure through the prevention of deterioration, periodic inspection or equipment condition diagnosis to measure deterioration. It is further divided into periodic maintenance and predictive maintenance. Just as human life is extended by preventive medicine, the equipment service life can be prolonged by doing preventive maintenance.

2a. Periodic maintenance ( Time based maintenance - TBM)

Time based maintenance consists of periodically inspecting, servicing and cleaning equipment and replacing parts to prevent sudden failure and process problems.

2b. Predictive maintenance

This is a method in which the service life of an important part is predicted based on inspection or diagnosis, in order to use the part to the limit of its service life. Compared to periodic maintenance, predictive maintenance is condition-based maintenance. It manages trend values by measuring and analyzing data about deterioration, and employs a surveillance system designed to monitor conditions through an on-line system.

3. Corrective maintenance ( 1957 )

It improves equipment and its components so that preventive maintenance can be carried out reliably. Equipment with a design weakness must be redesigned to improve reliability or maintainability.

4. Maintenance prevention ( 1960 )

It refers to the design of new equipment. Weaknesses of current machines are sufficiently studied (on-site information leading to failure prevention, easier maintenance, prevention of defects, safety and ease of manufacturing), and the findings are incorporated before commissioning new equipment.

TPM - History

TPM is an innovative Japanese concept. The origin of TPM can be traced back to 1951, when preventive maintenance was introduced in Japan. However, the concept of preventive maintenance was taken from the USA. Nippondenso was the first company to introduce plant-wide preventive maintenance in 1960. Preventive maintenance is the concept wherein operators produce goods using machines and the maintenance group is dedicated to the work of maintaining those machines. However, with the automation of Nippondenso, maintenance became a problem, as more maintenance personnel were


required. So the management decided that the routine maintenance of equipment would be carried out by the operators. (This is autonomous maintenance, one of the features of TPM.) The maintenance group took up only essential maintenance works.

Thus, Nippondenso, which already followed preventive maintenance, also added autonomous maintenance done by production operators. The maintenance crew took up equipment modification for improving reliability. The modifications were made or incorporated in new equipment. This led to maintenance prevention. Thus, preventive maintenance along with maintenance prevention and maintainability improvement gave birth to productive maintenance. The aim of productive maintenance was to maximize plant and equipment effectiveness to achieve the optimum life cycle cost of production equipment.

By then, Nippondenso had established quality circles, involving the employees' participation. Thus, all employees took part in implementing productive maintenance. Based on these developments, Nippondenso was awarded the distinguished plant prize for developing and implementing TPM by the Japanese Institute of Plant Engineers (JIPE). Thus, Nippondenso of the Toyota group became the first company to obtain the TPM certification.

TPM Targets:

P (Production):
Obtain minimum 80% OPE.
Obtain minimum 90% OEE (Overall Equipment Effectiveness).
Run the machines even during lunch. (Lunch is for operators and not for machines!)

Q (Quality):
Operate in a manner such that there are no customer complaints.

C (Cost):
Reduce the manufacturing cost by 30%.

D (Delivery):
Achieve 100% success in delivering the goods as required by the customer.

S (Safety):
Maintain an accident-free environment.

M (Morale):
Increase the suggestions by 3 times. Develop multi-skilled and flexible workers.


TABLE 3.11

Motives of TPM:
1. Adoption of a life cycle approach for improving the overall performance of production equipment.
2. Improving productivity by highly motivated workers, which is achieved by job enlargement.
3. The use of voluntary small group activities for identifying the causes of failure and possible plant and equipment modifications.

Uniqueness of TPM:
The major difference between TPM and other concepts is that the operators are also involved in the maintenance process. The concept of "I (production operators) operate, you (maintenance department) fix" is not followed.

TPM Objectives:
1. Achieve zero defects, zero breakdowns and zero accidents in all functional areas of the organization.
2. Involve people at all levels of the organization.
3. Form different teams to reduce defects and for self-maintenance.

Direct benefits of TPM:
1. Increase productivity and OPE (Overall Plant Efficiency) by 1.5 or 2 times.
2. Rectify customer complaints.
3. Reduce the manufacturing cost by 30%.
4. Satisfy the customers' needs by 100% (delivering the right quantity at the right time, in the required quality).
5. Reduce accidents.
6. Follow pollution control measures.

Indirect benefits of TPM:
1. Higher confidence level among the employees.
2. A clean, neat and attractive work place.
3. Favourable change in the attitude of the operators.
4. Achieve goals by working as a team.
5. Horizontal deployment of a new concept in all areas of the organization.
6. Sharing of knowledge and experience.
7. The workers get a feeling of owning the machine.

OEE ( Overall Equipment Effectiveness ) :

OEE = A x PE x Q

A - Availability of the machine: the proportion of time the machine is actually available out of the time it should be available.

A = (MTBF - MTTR) / MTBF
where MTBF (Mean Time Between Failures) = Total Running Time / Number of Failures, and MTTR = Mean Time To Repair.

PE - Performance Efficiency. It is given by PE = RE x SE.

Rate efficiency (RE): the actual average cycle time is slower than the design cycle time because of jams, etc., so output is reduced.

Speed efficiency (SE): the actual cycle time is slower than the design cycle time because the machine is running at reduced speed, so output is reduced.

Q - Quality rate: the percentage of good parts out of the total produced, sometimes called the "yield".
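As a worked illustration of the formula above (a sketch added here; the figures are hypothetical, not from the notes):

def oee(mtbf, mttr, rate_eff, speed_eff, quality_rate):
    # OEE = A x PE x Q, with A = (MTBF - MTTR) / MTBF and PE = RE x SE
    availability = (mtbf - mttr) / mtbf
    performance = rate_eff * speed_eff
    return availability * performance * quality_rate

# MTBF = 100 hours, MTTR = 2 hours, RE = 95%, SE = 97%, quality rate = 99%
print(round(oee(100, 2, 0.95, 0.97, 0.99), 3))   # 0.894, i.e. about 89% OEE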

Steps in the introduction of TPM in an organization :

Step A - PREPARATORY STAGE :

STEP 1 - Announcement by Management to all about TPM introduction in the organization :

Proper understanding, commitment and active involvement of the top management are needed for this step. Senior management should hold awareness programmes, after which the announcement is made to all. Publish it in the house magazine and put it on the notice board. Send a letter to all concerned individuals if required.

STEP 2 - Initial education and propaganda for TPM :

Training is to be done based on the need. Some need intensive training and some just an awareness. Take people who matter to places where TPM has already been successfully implemented.


STEP 3 - Setting up TPM and departmental committees :

TPM includes improvement, autonomous maintenance, quality maintenance, etc. as parts of it. When committees are set up, they should take care of all these needs.

STEP 4 - Establishing the TPM working system and target :

Now each area is benchmarked and a target is fixed for achievement.

STEP 5 - A master plan for institutionalizing :

The next step is implementation leading to institutionalization, wherein TPM becomes an organizational culture. Achieving the PM award is proof of reaching a satisfactory level.

STEP B - INTRODUCTION STAGE

This is a ceremony and we should invite everyone, including suppliers, as they should know that we want quality supply from them; related companies and affiliated companies who can be our customers; sister concerns; etc. Some may learn from us, some can help us, and customers will get the message from us that we care for quality output.

STAGE C - IMPLEMENTATION

In this stage, eight activities are carried out, which are called the eight pillars in the development of TPM activity.

Of these, four activities are for establishing the system for production efficiency, one is for the initial control system of new products and equipment, one is for improving the efficiency of administration, and the rest are for control of safety and sanitation in the working environment.

STAGE D - INSTITUTIONALIZING STAGE

Through all these activities, one would have reached the maturity stage. Now is the time to apply for the PM award. Also think of a challenging level to which you can take this movement.


Organization Structure for TPM Implementation :

FIGURE 3.10

Pillars of TPM

FIGURE 3.11

PILLAR 1 - 5S :

TPM starts with 5S. Problems cannot be clearly seen when the work place is unorganized. Cleaning and organizing the workplace helps the team to uncover problems. Making problems visible is the first step of improvement.


TABLE 3.12

Japanese Term   English Translation   Equivalent 'S' term

Seiri           Organization          Sort
Seiton          Tidiness              Systematize
Seiso           Cleaning              Sweep
Seiketsu        Standardization       Standardize
Shitsuke        Discipline            Self-Discipline

SEIRI - Sort out :

This means sorting and organizing the items as critical, important, frequently used, useless, or not needed as of now. Unwanted items can be salvaged. Critical items should be kept nearby for use, and items that are not to be used in the near future should be stored somewhere else. For this step, the worth of an item should be decided based on utility and not cost. As a result of this step, search time is reduced.

TABLE 3.13

Priority   Frequency of Use                            How to use

Low        Less than once per year; once per year      Throw away; store away from
                                                       the workplace

Average    At least once in 2-6 months; once per       Store together but off-line
           month; once per week

High       Once per day                                Locate at the workplace

SEITON - Organise :

The concept here is that "each item has a place, and only one place". Items should be placed back in the same place after usage. To identify items easily, name plates and colored tags have to be used. Vertical racks can be used for this purpose, with heavy items occupying the bottom positions in the racks.

SEISO - Shine the workplace :

This involves cleaning the work place so it is free of burrs, grease, oil, waste, scrap, etc., with no loosely hanging wires or oil leakage from machines.

SEIKETSU - Standardization :

Employees have to discuss together and decide on standards for keeping the work place / machines / pathways neat and clean. These standards are implemented for the whole organization and are tested / inspected randomly.


SHITSUKE - Self discipline :

Consider 5S as a way of life and bring about self-discipline among the employees of the organization. This includes wearing badges, following work procedures, punctuality, dedication to the organization, etc.

PILLAR 2 - JISHU HOZEN ( Autonomous maintenance ) :

This pillar is geared towards developing operators to be able to take care of small maintenance tasks, thus freeing up the skilled maintenance people to spend time on more value-added activity and technical repairs. The operators are responsible for the upkeep of their equipment to prevent it from deteriorating.

Policy :

1. Uninterrupted operation of equipment.

2. Flexible operators to operate and maintain other equipment.

3. Eliminating the defects at source through active employee participation.

4. Stepwise implementation of JH activities.

JISHU HOZEN Targets:

1. Prevent the occurrence of 1A / 1B because of JH.

2. Reduce oil consumption by 50%

3. Reduce process time by 50%

4. Increase use of JH by 50%

Steps in JISHU HOZEN :

1. Preparation of employees.

2. Initial cleanup of machines.

3. Take counter measures

4. Fix tentative JH standards

5. General inspection

6. Autonomous inspection

7. Standardization and

8. Autonomous management.


Each of the above mentioned steps is discussed in detail below.

1. Train the Employees : Educate the employees about TPM, its advantages, the advantages of JH and the steps in JH. Educate the employees about abnormalities in equipment.

2. Initial cleanup of machines :

o Supervisor and technician should discuss and set a date for implementing step 1.

o Arrange all items needed for cleaning

o On the arranged date, employees should clean the equipment completely with the help of the maintenance department.

o Dust, stains, oil and grease have to be removed.

o The following things have to be taken care of while cleaning: oil leakage, loose wires, unfastened nuts and bolts, and worn-out parts.

o After cleanup, problems are categorized and suitably tagged. White tags are placed where problems can be solved by operators. Pink tags are placed where the aid of the maintenance department is needed.

o The contents of the tags are transferred to a register.

o Make note of areas which were inaccessible.

o Finally close the open parts of the machine and run the machine.

3. Counter Measures :

o Inaccessible regions should be made easily reachable. For example, if many screws must be removed to open a flywheel door, a hinged door can be used. Instead of opening a door for inspecting the machine, acrylic sheets can be used.

o To prevent wear-out of machine parts, necessary action must be taken.

o Machine parts should be modified to prevent accumulation of dirt and dust.

4. Tentative Standard :

o JH schedule has to be made and followed strictly.


o The schedule should be made regarding cleaning, inspection and lubrication, and it should also include details like when, what and how.

5. General Inspection :

o The employees are trained in disciplines like pneumatics, electrical, hydraulics, lubricants and coolants, drives, bolts, nuts and safety.

o This is necessary to improve the technical skills of employees and to use inspection manuals correctly.

o After acquiring this new knowledge, the employees should share it with others.

o By acquiring this new technical knowledge, the operators are now well aware of machine parts.

6. Autonomous Inspection :

o New methods of cleaning and lubricating are used.

o Each employee prepares his own autonomous chart / schedule in consultation with the supervisor.

o Parts which have never given any problem or parts which don't need any inspection are removed from the list permanently, based on experience.

o Good quality machine parts are included. This avoids defects due to poor JH.

o Inspection that is made in preventive maintenance is included in JH.

o The frequency of cleanup and inspection is reduced based on experience.

7. Standardization :

o Up to the previous step, only the machinery / equipment was the focus. However, in this step the surroundings of the machinery are organized. Necessary items should be organized such that there is no searching, and search time is reduced.

o The work environment is modified such that there is no difficulty in getting any item.

o Everybody should follow the work instructions strictly.

o Necessary spares for equipment are planned and procured.


8. Autonomous Management :

o OEE, OPE and other TPM targets must be achieved by continuous improvement through Kaizen.

o The PDCA ( Plan, Do, Check and Act ) cycle must be implemented for Kaizen.

PILLAR 3 - KAIZEN :

“Kai” means change, and “Zen” means good (for the better). Basically, Kaizen is about small improvements, carried out on a continual basis and involving all people in the organization. Kaizen is the opposite of big, spectacular innovations, and it requires little or no investment. The principle behind it is that “a very large number of small improvements are more effective in an organizational environment than a few improvements of large value”. This pillar is aimed at reducing the losses in the workplace that affect our efficiencies. By using a detailed and thorough procedure, we eliminate losses in a systematic method using various Kaizen tools. These activities are not limited to production areas and can be implemented in administrative areas as well.

Kaizen Policy :

1. Practice concepts of zero losses in every sphere of activity.
2. Relentless pursuit of cost reduction targets in all resources.
3. Relentless pursuit of improvement in overall plant equipment effectiveness.
4. Extensive use of PM analysis as a tool for eliminating losses.
5. Focus on easy handling by operators.

Kaizen Target :

Achieve and sustain zero losses with respect to minor stops, measurement and adjustments, defects and unavoidable downtimes. It also aims to achieve a 30% reduction in manufacturing cost.

Tools used in Kaizen :

1. PM analysis
2. Why-Why analysis
3. Summary of losses
4. Kaizen register
5. Kaizen summary sheet.


The objective of TPM is maximization of equipment effectiveness. TPM aims at maximization of machine utilization, not merely maximization of machine availability. As one of the pillars of TPM activities, Kaizen pursues efficient equipment, operator, material and energy utilization, that is, the extremes of productivity, and aims at achieving substantial effects. Kaizen activities try to thoroughly eliminate 16 major losses.

16 major losses in an organization:

TABLE 3.14 The 16 major losses and their categories

Losses that impede equipment efficiency:
1. Failure losses - breakdown loss
2. Setup / adjustment losses
3. Cutting blade loss
4. Start-up loss
5. Minor stoppage / idling loss
6. Speed loss - operating at low speeds
7. Defect / rework loss
8. Scheduled downtime loss

Losses that impede human work efficiency:
9. Management loss
10. Operating motion loss
11. Line organization loss
12. Logistic loss
13. Measurement and adjustment loss

Losses that impede effective use of production resources:
14. Energy loss
15. Die, jig and tool breakage loss
16. Yield loss

Classification of losses :

TABLE 3.15 Sporadic loss versus chronic loss

Causation:
• Sporadic loss: Causes for the failure can be easily traced, and the cause-effect relationship is simple to trace.
• Chronic loss: The loss cannot be easily identified and solved, even if various countermeasures are applied.

Remedy:
• Sporadic loss: It is easy to establish a remedial measure.
• Chronic loss: These losses are caused by hidden defects in machines, equipment and methods.

Impact / Loss:
• Sporadic loss: A single loss can be costly.
• Chronic loss: A single cause is rare - a combination of causes tends to be the rule.

Frequency of occurrence:
• Sporadic loss: The frequency of occurrence is low and occasional.
• Chronic loss: The frequency of loss is high.

Corrective action:
• Sporadic loss: Usually the line personnel in production can attend to the problem.
• Chronic loss: Specialists in process engineering, quality assurance and maintenance are required.

PILLAR 4 - PLANNED MAINTENANCE :

It aims at trouble-free machines and equipment producing defect-free products for total customer satisfaction. It breaks maintenance down into four “families” or groups, as defined earlier:

1. Preventive Maintenance
2. Breakdown Maintenance
3. Corrective Maintenance
4. Maintenance Prevention

With Planned Maintenance we evolve our efforts from a reactive to a proactivemethod and use trained maintenance staff to help train the operators to better maintaintheir equipment.

Policy :

1. Achieve and sustain availability of machines.
2. Optimize maintenance cost.
3. Reduce spares inventory.
4. Improve reliability and maintainability of machines.

Target :

1. Zero equipment failure and breakdown.
2. Improve reliability and maintainability by 50%.
3. Reduce maintenance cost by 20%.
4. Ensure availability of spares all the time.

Six steps in planned maintenance :

1. Equipment evaluation and recording of present status.
2. Restore deterioration and improve weaknesses.
3. Build up an information management system.
4. Prepare a time-based information system; select equipment, parts and members and map out the plan.
5. Prepare a predictive maintenance system by introducing equipment diagnostic techniques.
6. Evaluation of planned maintenance.

PILLAR 5 - QUALITY MAINTENANCE :

It is aimed at customer delight through the highest quality, achieved through defect-free manufacturing. The focus is on eliminating non-conformances in a systematic manner, much like Focused Improvement. We gain an understanding of which parts of the equipment affect product quality, and begin by eliminating current quality concerns before moving on to potential quality concerns. The transition is from reactive to proactive (from Quality Control to Quality Assurance).

QM activities set equipment conditions that preclude quality defects, based on the basic concept of maintaining perfect equipment to maintain perfect quality of products. The conditions are checked and measured as a time series to verify that measured values are within standard values, so that defects are prevented. The trend of the measured values is watched to predict the possibility of defects occurring and to take countermeasures beforehand.
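As a minimal illustration of the idea described above (measure in a time series, compare with standard values, and watch the trend), the following Python sketch checks a series of measured values against standard limits and flags a rising drift; the limits and readings are invented for illustration.

def check_condition(readings, lower, upper, window=3):
    """Return values outside the standard limits, plus a flag for a
    steadily rising recent trend that suggests a defect may occur soon."""
    out_of_standard = [r for r in readings if not lower <= r <= upper]
    recent = readings[-window:]
    drifting = all(b > a for a, b in zip(recent, recent[1:]))
    return out_of_standard, drifting

# Illustrative clamping-pressure readings against standard limits 4.0 to 6.0
readings = [4.8, 4.9, 5.1, 5.4, 5.8]
bad, drift = check_condition(readings, lower=4.0, upper=6.0)
if drift and not bad:
    print("Within standard values, but rising - take countermeasures beforehand")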

Policy :

1. Defect-free conditions and control of equipment.
2. QM activities to support quality assurance.
3. Focus on prevention of defects at source.
4. Focus on poka-yoke (fool-proof systems).
5. In-line detection and segregation of defects.
6. Effective implementation of operator quality assurance.

Target :

1. Achieve and sustain zero customer complaints.
2. Reduce in-process defects by 50%.
3. Reduce cost of quality by 50%.


Data requirements :

Quality defects are classified as customer-end defects and in-house defects. For customer-end defects, we have to get data on:

1. Customer end line rejection

2. Field complaints.

In-house data include data related to products and data related to processes.

Data related to product :

1. Product wise defects

2. Severity of the defect and its contribution - major/minor

3. Location of the defect with reference to the layout

4. Magnitude and frequency of its occurrence at each stage of measurement

5. Occurrence trend at the beginning and end of each production / process change (like pattern change, ladle/furnace lining, etc.)

6. Occurrence trend with respect to restoration of breakdowns / modifications / periodic replacement of quality components.

Data related to processes:

1. The operating conditions for individual sub-processes related to men, method, material and machine.

2. The standard settings/conditions of the sub-process

3. The actual record of the settings/conditions during the defect occurrence.

PILLAR 6 - TRAINING :

It is aimed at having multi-skilled, revitalized employees whose morale is high and who are eager to come to work and perform all required functions effectively and independently. Education is given to operators to upgrade their skills. It is not sufficient to know only the “know-how”; they should also learn the “know-why”. By experience they gain the “know-how” to overcome a problem, that is, what is to be done; but they do this without knowing the root cause of the problem or why they are doing it. Hence, it becomes necessary to train them in the “know-why”. The employees should be trained to achieve the four phases of skill; the goal is to create a factory full of experts. The different phases of skill are:


Phase 1: Do not know.
Phase 2: Know the theory but cannot do.
Phase 3: Can do but cannot teach.
Phase 4: Can do and also teach.

Policy :

1. Focus on improvement of knowledge, skills and techniques.

2. Creating a training environment for self-learning based on felt needs.

3. Training curriculum / tools / assessment, etc., conducive to employee revitalization.

4. Training to remove employee fatigue and make work enjoyable.

Target :

1. Achieve and sustain zero downtime due to want of men on critical machines.

2. Achieve and sustain zero losses due to lack of knowledge / skills / techniques

3. Aim for 100 % participation in suggestion scheme.

Steps in educating and training activities :

1. Setting policies and priorities and checking the present status of education and training.

2. Establish training system for operation and maintenance skill upgradation.

3. Training the employees for upgrading the operation and maintenance skills.

4. Preparation of training calendar.

5. Kick-off of the system for training.

6. Evaluation of activities and study of future approach.

PILLAR 7 - OFFICE TPM :

Office TPM should be started after activating the four other pillars of TPM (JH, KK, QM, PM). Office TPM must be pursued to improve productivity and efficiency in the administrative functions and to identify and eliminate losses. This includes analyzing processes and procedures with a view to increased office automation. Office TPM addresses twelve major losses. They are:


1. Processing loss

2. Cost loss, including in areas such as procurement, accounts, marketing and sales, leading to high inventories

3. Communication loss

4. Idle loss

5. Set-up loss

6. Accuracy loss

7. Office equipment breakdown

8. Communication channel breakdown, telephone and fax lines

9. Time spent on retrieval of information

10. Non-availability of correct online stock status

11. Customer complaints due to logistics

12. Expenses on emergency dispatches/purchases

How to start office TPM ?

A senior person from one of the support functions, e.g., the Head of Finance, MIS or Purchase, should head the sub-committee. Members representing all support functions, and people from Production & Quality, should be included in the sub-committee. The TPM co-ordinator plans and guides the sub-committee.

1. Providing awareness about office TPM to all support departments

2. Helping them to identify P, Q, C, D, S, M in each function in relation to plant performance

3. Identify the scope for improvement in each function

4. Collect relevant data

5. Help them to solve problems in their circles

6. Make up an activity board where progress is monitored on both sides - results and actions along with Kaizens.

7. Fan out to cover all employees and circles in all functions.

Kobetsu Kaizen topics for Office TPM :

• Inventory reduction

• Lead time reduction of critical processes

• Motion and space losses


• Retrieval time reduction.

• Equalizing the work load

• Improving office efficiency by eliminating the time lost on retrieval of information and by achieving zero breakdowns of office equipment like telephone and fax lines.

Office TPM and its benefits :

1. Involvement of all people in support functions in focusing on better plant performance

2. Better utilized work area

3. Reduce repetitive work

4. Reduced inventory levels in all parts of the supply chain

5. Reduced administrative costs

6. Reduced inventory carrying cost

7. Reduction in number of files

8. Reduction of overhead costs (including the cost of non-production / non-capital equipment)

9. Productivity of people in support functions

10. Reduction in breakdown of office equipment

11. Reduction of customer complaints due to logistics

12. Reduction in expenses due to emergency dispatches/purchases

13. Reduced manpower

14. Clean and pleasant work environment.

P Q C D S M in Office TPM :

P - Production output lost due to want of material, manpower productivity, production output lost due to want of tools.

Q - Mistakes in preparation of cheques, bills, invoices and payroll; customer returns / warranty attributable to BOPs; rejection / rework in BOPs / job work; office area rework.

C - Buying cost per unit produced, cost of logistics (inbound/outbound), cost of carrying inventory, cost of communication, demurrage costs.

D - Logistics losses (Delay in loading/unloading)


• Delay in delivery due to any of the support functions

• Delay in payments to suppliers

• Delay in information

S - Safety in material handling/stores/logistics, Safety of soft and hard data.

M - Number of kaizens in office areas.

How office TPM supports plant TPM :

Office TPM supports the plant, initially in doing Jishu Hozen of the machines (after getting training in Jishu Hozen), because in the initial stages of Jishu Hozen:

1. Machines are many and manpower is less, so the help of the commercial departments can be taken.

2. Office TPM can eliminate on-line losses due to want of material and logistics.

Extension of office TPM to suppliers and distributors :

This is essential, but only after we have done as much as possible internally. With suppliers, it will lead to on-time delivery, improved incoming quality and cost reduction. With distributors, it will lead to accurate demand generation, improved secondary distribution and reduction in damage during storage and handling. In any case, we will have to teach them based on our experience and practice, and highlight gaps in the system which affect both sides. Some of the larger companies have started to support clusters of suppliers.

PILLAR 8 - SAFETY, HEALTH AND ENVIRONMENT :

Target :

1. Zero accidents,

2. Zero health damage

3. Zero fires.

In this area, the focus is on creating a safe workplace and a surrounding area that is not damaged by our processes or procedures. This pillar plays an active role in each of the other pillars on a regular basis.

A committee is constituted for this pillar, comprising representatives of officers as well as workers. The committee is headed by the Senior Vice-President (Technical). Utmost importance is given to safety in the plant. The Manager (Safety) looks after functions related to safety. To create awareness among employees, various competitions like safety slogans, quizzes, dramas, posters, etc., related to safety can be organized at regular intervals.

Conclusion:

Today, with competition in industry at an all-time high, TPM may be the only thing that stands between success and total failure for some companies. It has been proven to be a program that works. It can be adapted to work not only in industrial plants but also in construction, building maintenance, transportation and a variety of other situations. Employees must be educated and convinced that TPM is not just another “program of the month”, that management is totally committed to the program, and that the extended time frame is necessary for full implementation. If everyone involved in a TPM program does his or her part, an unusually high rate of return compared to resources invested may be expected.

TPM achievements

Many TPM sites have made excellent progress in a number of areas. These include:

• better understanding of the performance of their equipment (what they are achieving in OEE terms and what the reasons for non-achievement are),

• better understanding of equipment criticality, where it is worth deploying improvement effort, and the potential benefits,

• improved teamwork and a less adversarial approach between production and maintenance,

• improved procedures for changeovers and set-ups, carrying out frequent maintenance tasks, and better training of operators and maintainers, all of which lead to reduced costs and better service,

• generally increased enthusiasm from involvement of the workforce.

However, the central paradox of the whole TPM process is that, given that TPM is supposed to be about doing better maintenance, why do proponents end up with (largely) the same discredited schedules that they had already (albeit now being done by different people)? Yes, the organization is more empowered and re-shaped to allow us to carry out maintenance in the modern arena, but we are still left with the problem of what maintenance should be done.

The Reliability Centered Maintenance (RCM) process evolved within the civil aviation industry to fulfil this precise need. In fact, the definition of RCM is “a process used to determine the maintenance requirements of physical assets in their present operating context”. In essence, we have two objectives: determine the maintenance requirements of the physical assets within their current operating context, and then ensure that these requirements are met as cheaply and effectively as possible.

RCM is better at delivering objective one; TPM focuses on objective two.

Total Productive Maintenance in India

Indian industry is facing severe global competition, and hence many companies are finding it very difficult to protect the bottom line.

The past decade has transformed the definition of market price, which was based on a simple assumption under monopolistic conditions, as given below:

Production cost + Profit = Market price

However, under the present scenario, where all are facing domestic/global competition, the above definition does not hold good and has been transformed into:

Market price - Production cost = Profit

Although the above two equations look mathematically the same, the difference is obvious in the present scenario: the customer, who has become quite demanding with respect to cost, quality and variety, determines the market price. The current economic environment automatically brings tremendous pressure to optimize production cost for the survival of the unit.
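A small worked example makes the shift concrete: under the old cost-plus view the producer adds a margin to cost, whereas under the price-minus view the market fixes the price and only cost reduction can restore profit. The figures below are invented purely for illustration.

market_price = 100.0       # fixed by the demanding customer, not by the firm
production_cost = 92.0     # current cost per unit (illustrative)

profit = market_price - production_cost        # price-minus view: 8.0
target_profit = 15.0
required_cost = market_price - target_profit   # cost must fall to 85.0

print(f"Profit today: {profit}; cost must drop by "
      f"{production_cost - required_cost} to reach the target profit")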

TPM meets this challenge and provides an effective program in terms of increased plant efficiency and productivity.

TPM is a means of creating a safe and participative work environment, in which all employees target the elimination of all kinds of waste generated by equipment failure, frequent breakdowns and defective products, including rejections and rework. This leads to higher employee morale and greater organizational profitability.

TPM implementation is not a difficult task. However, it requires:

1. Total commitment to the program by top management, as it has to be TOP DRIVEN to succeed.

2. Total involvement and participation of all the employees.

3. Attitudinal changes and paradigm shift towards job responsibilities.


The steps involved in effective implementation of TPM are given briefly as follows:

1. Appoint a committed and responsible co-ordinator who is well-versed in the TPM concept and able to convince the entire workforce through an educational program.

2. Select a model cell or a machine which has the maximum potential for improvement in respect of optimizing profits. Such a model cell or machine will have the maximum problems in respect of:

• Product quality

• Equipment failure and frequent breakdowns

• Unsafe conditions causing safety hazards

3. The TPM co-ordinator will head a small team comprising the concerned persons of the selected model cell or machine, and will initiate the TPM program by first meticulously recording, with the fullest transparency, the problematic areas in respect of product defects, equipment breakdown data and the number of accidents from past data, if available. However, if reliable data is not available, it will have to be built up for the purpose of ‘benchmarking’, keeping records of progress and scheduling the improvements (targets) within the time period.

4. The TPM co-ordinator would encourage the team members to initiate TPM through initial cleaning of the machine taken as a model for improvement. Initial cleaning is done to remove ‘shortcomings’ or defects developed over the years, which have remained unnoticed.

5. Initial cleaning activities include removing dust, dirt, fluff, etc. Through this process, improper conditions of the machine get detected in the form of:

• Inaccuracies leading to defects in regard to quality

• Defective parts/components leading to the development of defects in the machine

• Detection of excessive wear and tear in the moving parts of the machine leading to production of defective parts.

• Motion resistance due to foreign matter found in moving parts of the machine.

• Detection of defects like loose fasteners, scratches, deformation and leakage, etc., remaining invisible in dirty equipment.


The initial cleaning leads to inspection, which in turn helps in detecting the abnormalities gathered in the equipment over the years, causing quality defects, equipment failures and safety hazards.

The action team would be responsible for ascertaining the problem areas, determining the sources generating abnormalities and those causing forced deterioration. Through deliberation on various techniques, the team would detail a course of corrective action and implement the corrective process. It is quite possible that, in the beginning, the team members may not find it easy to recognize problem areas and determine the countermeasures that eliminate the sources of error. They have to continue thinking creatively, believing that things could be done better.

Pressure die casting company:

For this company, one 660 T HMT machine was selected for development as a model cell. The machine was studied and evaluated critically and in depth by the team. The team decided to set as its first objective the achievement of a ‘Delightful Working Environment’ (DWE), which is a pre-requisite to the introduction of TPM practices.

The team identified 14 cause factors responsible for the present condition of the machine and standing in the way of achieving the DWE, the first objective.

These cause factors were analyzed, and improvement themes were developed, such as countermeasures to dust and dirt, to difficult-to-clean and non-accessible areas, and to leakages, flash, water spray, die-coat spray, etc. Implementation of the solutions resulted in the following major achievements:

• Delightful working environment

• Productivity improvement by 20%

• Metal saving through elimination of flash and coil formation during each stroke.

Similar achievements resulted in the automobile company also, in the form of:

• Breakdown reduction

• Defects reduction

• Space saving

• Productivity improvement

To make these achievements sustainable, a very important aspect of TPM is the establishment of Autonomous Maintenance. The purpose of autonomous inspection is to teach operators how to maintain their equipment by performing the following daily, in not more than 15 minutes, thus developing an ‘ownership of the machine’:


Ø Daily checks

Ø Lubrication management

Ø Tightening and checking schedule of fasteners

Ø Cleaning schedule

Ø Early detection of abnormal conditions of the machine through sound, temperature and vibration.

In the current industrial scenario, TPM may be one of the only concepts that stand between success and total failure for some organizations. It is a program that works if it is implemented effectively, with all sincerity and the dedicated efforts of a participative team.

3.7 TEROTECHNOLOGY

Terotechnology

A term derived from Greek, referring to the study of the costs associated with an asset throughout its life cycle, from acquisition to disposal. The goals of this approach are to try to reduce the different costs incurred at the various stages of the asset's life and to derive methods that will help extend the asset's life span. It is also known as “life-cycle costing”.

Terotechnology uses tools such as net present value, internal rate of return and discounted cash flow in an attempt to minimize the costs associated with the asset in the future. These costs can include engineering, maintenance, wages payable to operate the equipment, operating costs and even disposal costs.
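The discounted-cash-flow arithmetic mentioned above can be sketched in a few lines: the net present value of an asset's whole-life costs (and its final salvage value) is the sum of each year's net cash flow discounted back to today. The cash flows and the 8% discount rate below are illustrative assumptions, not data from the text.

def npv(rate, cash_flows):
    """Net present value of year-indexed cash flows (year 0 = acquisition)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Illustrative whole-life cash flows for a piece of equipment:
# purchase, yearly operating plus maintenance costs, then salvage on disposal.
flows = [-500_000,                            # year 0: acquisition
         -60_000, -65_000, -70_000, -80_000,  # years 1-4: operate and maintain
         40_000]                              # year 5: salvage value
print(f"Life-cycle NPV at 8%: {npv(0.08, flows):,.0f}")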

For example, let us say an oil company is attempting to map out the costs of an off-shore oil platform. It would use terotechnology to map out the exact costs associated with assembly, transportation, maintenance and dismantling of the platform, and finally a calculation of salvage value.

This study is not an exact science, as you can imagine: there are many different variables that need to be estimated and approximated. However, a company that uses this kind of study may well be better off than one that approaches an asset's life cycle in a more ad hoc manner.

Terotechnology: a word derived from the Greek, meaning the study and management of an asset's life from its very start (acquisition) to its very end (final disposal, perhaps involving dismantling and specialised treatment prior to scrap). One of the most dramatic examples of the full consideration of terotechnology is the construction, use and final decommissioning of an oil platform at sea.


A word derived from the Greek root “tero” (“I care”), now used with the term “technology” to refer to the study of the costs associated with an asset throughout its life cycle - from acquisition to disposal.

Life-Cycle Costing

Estimates of a product's revenues and expenses over its expected life cycle. The result is to highlight upstream and downstream costs in the cost planning process that often receive insufficient attention. Emphasis is on the need to price products to cover all costs, not just production costs.

History of life cycle cost analysis

Life cycle cost analysis became popular in the 1960s, when the concept was taken up by U.S. government agencies as an instrument to improve the cost effectiveness of equipment procurement. From that point, the concept spread to the business sector, and is used there in new product development studies, project evaluations and management accounting. As there is high interest in life cycle cost analysis in maintenance, the International Electrotechnical Commission published a standard (IEC 60300) in 1996, which lies in the field of dependability management and gives recommendations on how to carry out life cycle costing. This standard was renewed in July 2004.

Realization of a life cycle cost analysis

A life cycle cost analysis calculates the cost of a system or product over its entire life span. This also involves the process of Product Life Cycle Management, so that the life cycle profits are maximized.

The analysis of a typical system could include costs for:

• planning,
• research and development,
• production,
• operation,
• maintenance,
• cost of replacement,
• disposal or salvage.

This cost analysis depends on values calculated from other reliability analyses like failure rate, cost of spares, repair times and component costs.
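As a minimal sketch of how the cost elements listed above might be aggregated and compared across design alternatives, the snippet below totals per-category life-cycle costs; the categories mirror the list, while the figures and product names are invented for illustration.

CATEGORIES = ["planning", "R&D", "production", "operation",
              "maintenance", "replacement", "disposal"]

# Illustrative life-cycle cost statements for two alternatives (values in k$)
products = {
    "Design A": [20, 120, 300, 400, 250, 80, 30],
    "Design B": [25, 150, 280, 350, 180, 60, 45],
}

DOWNSTREAM = {"operation", "maintenance", "replacement", "disposal"}

for name, costs in products.items():
    total = sum(costs)
    downstream = sum(c for cat, c in zip(CATEGORIES, costs) if cat in DOWNSTREAM)
    print(f"{name}: total life-cycle cost = {total} k$, "
          f"downstream share = {downstream / total:.0%}")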

Such an analysis is sometimes called a “cradle-to-grave” or “womb-to-tomb” analysis.


A life cycle cost analysis is important for cost accounting purposes. In deciding to produce or purchase a product or service, a timetable of life cycle costs helps show what costs need to be allocated to a product so that an organization can recover its costs. If all costs cannot be recovered, it would not be wise to produce the product or service.

It reinforces the importance of locked-in costs, such as R&D.

It offers three important benefits:

• All costs associated with a project/product become visible, especially upstream costs (R&D) and downstream costs (customer service).

• It allows an analysis of business function interrelationships; low R&D costs may lead to high customer service costs in the future.

• Differences in early-stage expenditure are highlighted, enabling managers to develop accurate revenue predictions.

A typical quantitative analysis would involve the use of a statement in which an easy comparison of costs can be seen by placing the different products a company produces next to each other.

Disambiguation:

Life cycle cost analysis or life cycle costing should not be confused with

• life cycle analysis, which is part of the ISO 14000 series concerned with environmental issues

• product life cycle analysis, which is a time-dependent marketing construct

3.8 BUSINESS PROCESS RE-ENGINEERING (BPR), PRINCIPLES, APPLICATIONS

Business Process Re-engineering is a management approach aiming at improvements by elevating the efficiency and effectiveness of the processes that exist within and across organizations.

Business process re-engineering is also known as BPR, Business Process Redesign, Business Transformation, or Business Process Change Management.

In 1990, Michael Hammer, a former professor of computer science at the Massachusetts Institute of Technology (MIT), published an article in the Harvard Business Review in which he claimed that the major challenge for managers is to obliterate non-value-adding work, rather than using technology to automate it. This statement implicitly accused managers of having focused on the wrong issue, namely that technology in general, and more specifically information technology, has been used primarily for automating existing work rather than as an enabler for making non-value-adding work obsolete.


Hammer's claim was simple: most of the work being done does not add any value for customers, and this work should be removed, not accelerated through automation. Instead, companies should reconsider their processes in order to maximize customer value while minimizing the consumption of resources required for delivering their product or service. A similar idea was advocated by Thomas H. Davenport and J. Short (1990), at that time a member of the Ernst & Young research center, in a paper published in the Sloan Management Review the same year as Hammer published his paper.

This idea, to unbiasedly review a company's business processes, was rapidly adopted by a huge number of firms which were striving for renewed competitiveness, which they had lost due to the market entrance of foreign competitors, their inability to satisfy customer needs, and their insufficient cost structure. Even well-established management thinkers, such as Peter Drucker and Tom Peters, accepted and advocated BPR as a new tool for (re-)achieving success in a dynamic world. During the following years, a fast-growing number of publications, books as well as journal articles, was dedicated to BPR, and many consulting firms embarked on this trend and developed BPR methods. However, critics were fast to claim that BPR was a way to dehumanize the work place, increase managerial control, and justify downsizing, i.e., major reductions of the work force (Greenbaum 1995, Industry Week 1994), and a rebirth of Taylorism under a different label.

Despite this critique, re-engineering was adopted at an accelerating pace, and by 1993 as many as 65% of the Fortune 500 companies claimed either to have initiated re-engineering efforts or to have plans to do so.

Definition of BPR

Different definitions can be found. This section contains the definitions provided in notable publications in the field.

Hammer and Champy (1993) define BPR as “... the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service, and speed.”

Thomas H. Davenport (1993), another well-known BPR theorist, uses the term process innovation, which he says “encompasses the envisioning of new work strategies, the actual process design activity, and the implementation of the change in all its complex technological, human, and organizational dimensions”.


Additionally, Davenport points out the major difference between BPR and other approaches to organization development (OD), especially the continuous improvement or TQM movement, when he states:

“Today firms must seek not fractional, but multiplicative levels of improvement– 10x rather than 10%.”

Finally, Johansson et al. (1993) provide a description of BPR relative to other process-oriented views, such as Total Quality Management (TQM) and Just-in-Time (JIT), and state:

“Business Process Reengineering, although a close relative, seeks radical rather than merely continuous improvement. It escalates the efforts of JIT and TQM to make process orientation a strategic tool and a core competence of the organization. BPR concentrates on core business processes, and uses the specific techniques within the JIT and TQM ‘toolboxes’ as enablers, while broadening the process vision.”

In order to achieve the major improvements BPR is seeking, changing structural organizational variables and other ways of managing and performing work is often considered insufficient. To be able to reap the achievable benefits fully, the use of information technology (IT) is conceived as a major contributing factor. While IT has traditionally been used for supporting the existing business functions, i.e., for increasing organizational efficiency, it now plays a role as an enabler of new organizational forms and of patterns of collaboration within and between organizations.

BPR derives its existence from different disciplines, and four major areas can be identified as being subjected to change in BPR - organization, technology, strategy, and people - where a process view is used as the common framework for considering these dimensions.

Business strategy is the primary driver of BPR initiatives, and the other dimensions are governed by strategy's encompassing role. The organization dimension reflects the structural elements of the company, such as hierarchical levels, the composition of organizational units, and the distribution of work between them. Technology is concerned with the use of computer systems and other forms of communication technology in the business. In BPR, information technology is generally considered as playing a role as an enabler of new forms of organizing and collaborating, rather than supporting existing business functions. The people / human resources dimension deals with aspects such as education, training, motivation and reward systems. The concept of business processes - interrelated activities aiming at creating a value-added output for a customer - is the basic underlying idea of BPR. These processes are characterized by a number of attributes: process ownership, customer focus, value-adding, and cross-functionality.


The role of information technology

Information technology (IT) plays an important role in the re-engineering concept. It is considered a major enabler for new forms of working and collaborating within an organization and across organizational borders.

The early BPR literature, e.g., Hammer & Champy (1993), identified several so-called disruptive technologies that were supposed to challenge traditional wisdom about how work should be performed:

1. Shared databases, making information available at many places

2. Expert systems, allowing generalists to perform specialist tasks

3. Telecommunication networks, allowing organizations to be centralized and decentralized at the same time

4. Decision-support tools, allowing decision-making to be a part of everybody's job

5. Wireless data communication and portable computers, allowing field personnel to work independently of the office

6. Interactive videodisk, to get in immediate contact with potential buyers

7. Automatic identification and tracking, allowing things to tell where they are, instead of requiring to be found

8. High-performance computing, allowing on-the-fly planning and revisioning

In the mid-1990s, workflow management systems in particular were considered a significant contributor to improved process efficiency. ERP (Enterprise Resource Planning) vendors, such as SAP, also positioned their solutions as vehicles for business process redesign and improvement.

Methodology

Although the names and steps used differ slightly between the different methodologies, they share the same basic principles and elements. The following description is based on the PRLC (Process Reengineering Life Cycle) approach developed by Guha et al. (1993).


FIGURE 3.12

Simplified schematic outline of using a business process approach, exemplified for pharmaceutical R&D:

1. Structural organization with functional units

2. Introduction of New Product Development as a cross-functional process

3. Re-structuring and streamlining activities, removal of non-value-adding tasks

1. Envision new processes

   1. Secure management support
   2. Identify re-engineering opportunities
   3. Identify enabling technologies
   4. Align with corporate strategy

2. Initiating change

   1. Set up re-engineering team
   2. Outline performance goals

3. Process diagnosis

   1. Describe existing processes
   2. Uncover pathologies in existing processes

4. Process redesign

   1. Develop alternative process scenarios
   2. Develop new process design
   3. Design HR architecture
   4. Select IT platform
   5. Develop overall blueprint and gather feedback

5. Reconstruction

   1. Develop/install IT solution
   2. Establish process changes

6. Process monitoring

   1. Performance measurement, including time, quality, cost, IT performance
   2. Link to continuous improvement

BPR - a rebirth of scientific management?

By its critics, BPR is often accused of being a re-animation of Taylor's principles of scientific management, aiming at increasing productivity to a maximum but disregarding aspects such as work environment and employee satisfaction. It can be agreed that Taylor's theories, in conjunction with the work of the early administrative scientists, have had a considerable impact on the management discipline for more than 50 years. However, it is not self-evident that BPR is a close relative of Taylorism, and this proposed relation deserves a closer investigation.

In the late 19th century, Frederick Winslow Taylor, a mechanical engineer, started to develop the idea of management as a scientific discipline. He applied the principle that work and its organizational environment could be considered and designed upon scientific principles, i.e., that work processes could be studied in detail using a positivist analytic approach. Upon the basis of this analysis, an optimal organizational structure and way of performing all work tasks could be identified and implemented. However, he was not the one to originally invent the concept. In 1886, a paper entitled “The Engineer as Economist”, written by Henry R. Towne for the American Society of Mechanical Engineers, had laid the bedrock for the development of scientific management.

The basic idea of scientific management was that work could be studied from an objective scientific perspective and that the analysis of the gathered information could be used for increasing productivity, especially of blue-collar work, significantly. Taylor (1911) summarized his observations in the following four principles:

• Observation and analysis through time study to set the optimal production rate. In other words, develop a science for each man's task - a one best way.

• Scientifically select the best man for the job and train him in the procedures he is expected to follow.

• Co-operate with the man to ensure that the work is done as described. This means establishing a differential rate system of piece work and paying the man on an incentive basis, not according to the position.

• Divide the work between managers and workers so that managers are given the responsibility for planning and preparation of work, rather than the individual worker.

Scientific management's main characteristic is the strict separation of planning and doing, which was implemented by the use of a functional foremanship system. This means that a worker, depending on the task he is performing, can report to different foremen, each of them being responsible for a small, specialized area.

Taylor's ideas had a major impact on manufacturing, but also on administration. One of the most well-known examples is Ford Motor Co., which adopted the principles of scientific management at an early stage and built its assembly line for the Model T based on Taylor's model of work and authority distribution, thereby giving its name to Fordism.

Successes

BPR, if implemented properly, can give huge returns. BPR has helped giants like Procter and Gamble Corporation and General Motors Corporation succeed after financial drawbacks due to competition. It helped American Airlines somewhat get back on track from the bad debt that is currently haunting their business practice. BPR is about the proper method of implementation.

General Motors Corporation implemented a 3-year plan to consolidate their multiple desktop systems into one, known internally as the “Consistent Office Environment”. This re-engineering process involved replacing the numerous brands of desktop systems, network operating systems and application development tools with a more manageable number of vendors and technology platforms. Donald G. Hedeen, director of desktops and deployment at GM and manager of the upgrade program, says that the process “lays the foundation for the implementation of a common business communication strategy across General Motors.” (Booker, 1994). Lotus Development Corporation and Hewlett-Packard Development Company, formerly Compaq Computer Corporation, received the single largest non-government sales ever from General Motors Corporation. GM also planned to use Novell NetWare as a security client, Microsoft Office and Hewlett-Packard printers. According to Hedeen, this saved GM 10% to 25% on support costs, 3% to 5% on hardware, and 40% to 60% on software licensing fees, and increased efficiency by overcoming incompatibility issues through the use of just one platform across the entire company.

Southwest Airlines offers another successful example of re-engineering a company and using information technology the way it was meant to be implemented. In 1992, Southwest Airlines had revenue of $1.7 billion and an after-tax profit of $91 million. American Airlines, the largest U.S. carrier, on the other hand, had revenue of $14.4 billion but lost $475 million and had not made a profit since 1989 (Furey and Diorio, 1994). Companies like Southwest Airlines know that their formula for success is easy to copy by new start-ups like Morris, Reno and Kiwi Airlines. In order to stay in the game of competitive advantage, they have to continuously re-engineer their strategy. BPR helps them be original.

Critique

The most frequent and harsh critique against BPR concerns the strict focus on efficiency and technology, and the disregard of the people in the organization that is subjected to a re-engineering initiative. Very often, the label BPR was used for major workforce reductions. Thomas Davenport, an early BPR proponent, stated: “When I wrote about ‘business process redesign’ in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with re-engineering, have insisted all along that layoffs shouldn't be the point. But the fact is, once out of the bottle, the re-engineering genie quickly turned ugly.” (Davenport, 1995)

Michael Hammer similarly admitted that

“I wasn't smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I've learned that's critical.” (White, 1996)

Other criticisms brought forward against the BPR concept include:

• lack of management support for the initiative, and thus poor acceptance in the organization;


• exaggerated expectations regarding the potential benefits of a BPR initiative, and consequently failure to achieve the expected results;

• underestimation of the resistance to change within the organization;

• implementation of generic so-called best-practice processes that do not fit specific company needs;

• over-trust in technology solutions;

• performing BPR as a one-off project with limited strategy alignment and long-term perspective;

• poor project management.

Development after 1995

With the publication of critical articles by some of the founding fathers of the BPR concept in 1995 and 1996, the re-engineering hype was effectively over. Since then, considering business processes as a starting point for business analysis and redesign has become a widely accepted approach and is a standard part of the change methodology portfolio, but it is typically performed in a less radical way than originally proposed.

More recently, the concept of Business Process Management (BPM) has gained major attention in the corporate world and can be considered a successor to the BPR wave of the 1990s, as it is equally driven by a striving for process efficiency supported by information technology. Equivalently to the critique brought forward against BPR, BPM is now accused of focusing on technology and disregarding the people aspects of change.

Application Re-Engineering

Business Process Re-engineering (BPR) combines aspects of re-engineering and business process outsourcing, and rationalizes them to return value in a short span of time. This, of course, maximizes long-term prospects for your business.

Infosys provides Business Process Re-engineering (BPR) solutions that assist you to fundamentally rethink and redesign how your organization will meet its strategic objectives. Emphasis is on innovation, flexibility, quality service delivery and cost control by re-engineering business methods and supporting processes using state-of-the-art BPR tools and methodologies.


Business Process Re-engineering services address the need to leverage newer technology platforms, frameworks and software products to transform IT systems and applications. Business Process Re-engineering applications help you:

• Scale up to handle a larger user base

• Effectively address operational or performance issues with the current application portfolio

• Achieve a higher degree of maintainability

• Alleviate licensing and support issues with the older technologies

• Improve user friendliness and portability of applications

• Reduce costs associated with maintaining old and poorly documented legacy systems

Infosys' Business Process Re-engineering services are backed by delivery excellence, robust consulting capabilities, and proprietary tools and frameworks.

How can it be dramatically improved?

• Business and technology capabilities: Teams of dedicated business and technology specialists work with project teams to understand systems and architect robust technology solutions for re-engineering or migration.

• Re-engineering process: Coupled with its CMM Level 5 re-engineering process, Infosys achieves some of the best metrics in the industry.

• Proprietary InFlux methodology: The InFlux methodology aligns IT solutions to business requirements. It prescribes a methodical, process-centric approach to translate business requirements into clear IT solutions. It is a systematic and repeatable process that derives every part of the solution from the business process that will ultimately use it. InFlux uses models, methods, techniques, tools, patterns and frameworks to achieve a smooth translation of enterprise business objectives into an effective IT solution.

• Project management capability: Proprietary project management processes, well-integrated project management tools, and senior management involvement at every stage of the project ensure that you will benefit from de-risked project management.


• Business Process Re-engineering: Re-engineer your business processes to help achieve performance improvements.

• Flexible Architecture Definition: Define the new solution architecture to make any enhancement easy to implement.

• Strong knowledge management disciplines: Infosys has clearly defined knowledge management processes and support infrastructure, and has a number of knowledge assets on re-engineering in different technology platforms.

• Risk Management: Strong project planning and management processes reduce operational and other business risks.

• Smooth rollout: Strong project management and re-engineering processes ensure a smooth rollout of new technology platforms, organization-wide.

3.9 RE-ENGINEERING PROCESS, BENEFITS AND LIMITATIONS

Re-engineering

Re-engineering is the radical redesign of an organization's processes, especially its business processes. Rather than organizing a firm into functional specialties (like production, accounting, marketing, etc.) and looking at the tasks that each function performs, we should, according to re-engineering theory, be looking at complete processes from materials acquisition, to production, to marketing and distribution. The firm should be re-engineered into a series of processes.

The main proponents of re-engineering were Michael Hammer and James A. Champy. In a series of books including Re-engineering the Corporation, Re-engineering Management, and The Agenda, they argue that far too much time is wasted passing on tasks from one department to another. They claim that it is far more efficient to appoint a team who are responsible for all the tasks in the process. In The Agenda they extend the argument to include suppliers, distributors and other business partners.

Re-engineering is the basis for many recent developments in management. The cross-functional team, for example, has become popular because of the desire to re-engineer separate functional tasks into complete cross-functional processes. Also, many recent management information systems developments aim to integrate a wide number of business functions. Enterprise resource planning, supply chain management, knowledge management systems, groupware and collaborative systems, human resource management systems and customer relationship management systems all owe a debt to re-engineering theory.


Criticisms of re-engineering

It has earned a bad reputation because such projects have often resulted in massive layoffs. This reputation is not altogether unwarranted, since companies have often downsized under the banner of re-engineering.

Further, re-engineering has not always lived up to its expectations. The main reasons seem to be that:

• re-engineering assumes that the factor that limits an organization's performance is the ineffectiveness of its processes (which may or may not be true) and offers no means of validating that assumption;

• re-engineering assumes the need to start the process of performance improvement with a “clean slate”, i.e., to totally disregard the status quo;

• according to Eliyahu M. Goldratt (and his theory of constraints), re-engineering does not provide an effective way to focus improvement efforts on the organization's constraint.

There was considerable hype surrounding the book's introduction (partially due to the fact that the authors of Re-engineering the Corporation reportedly bought large numbers of copies to promote it to the top of bestseller lists).

Abrahamson (1996) showed that fashionable management terms tend to follow a lifecycle, which for re-engineering peaked between 1993 and 1996 (Ponzi and Koenig 2002). While arguing that re-engineering was in fact nothing new (e.g., when Henry Ford implemented the assembly line in 1908, he was in fact re-engineering, radically changing the way of thinking in an organization), Dubois (2002) highlights the value of signalling terms such as re-engineering, giving the practice a name and stimulating it. At the same time, there can be a danger in the usage of such fashionable concepts as mere ammunition to implement particular reforms.

Reengineering by Paul A. Strassmann

excerpted from The Politics of Information Management

The Information Economics Press, 1995

Early in 1993, an epochal event took place in the US: for the first time in history, “white-collar” unemployment exceeded “blue-collar” unemployment. In the experience of older generations, a college education entitled one to a job with an excellent earning potential, long-term job security and the opportunity to climb a career ladder. If there was an economic downturn, unemployment was something that happened to others.


Large-scale white-collar unemployment should not have come as a surprise. Since 1979, the US information workforce has kept climbing, and in 1993 it stood at 54% of total employment. Forty million new information workers had appeared since 1960.

What do these people do? They are very busy, and they end up either as corporate overhead or, if they work in the public sector, as social overhead. They are lawyers, consultants, coordinators, clerks, administrators, managers, executives and experts of all sorts. The expansion in computer-related occupations greatly increased the amount of information that these people could process and therefore demand from others. It is a characteristic of information work that it breeds additional information work at a faster rate than the number of people added to the information payroll. Computers turned out to be greater multipliers of work than any other machine ever invented.

However, the greatest growth has been in government, which now employs more people than the manufacturing sector. Government workers are predominantly engaged in passing information and redistributing money, which requires compliance with complex regulations.

Who pays for this growth in overhead? Everybody does, either in higher prices or as increased taxes. As long as US firms could raise prices, there was always room for more overhead. When international economic competition started cutting into market share in the 1980s, corporations had to reduce staff costs.

Blue-collar labor essential to manufacture goods was either outsourced to foreign lands or automated, using proven industrial engineering methods to substitute capital for labor. By the mid-1980s, major cost cuts could come only from reductions in overhead.

Overhead Cost Reduction

Early attempts announcing 20% or more across-the-board layoffs in major corporations misfired. The most valuable experts left first to start up business ventures, most often with the knowledge they had gained while the large firms lingered in bringing innovations to the marketplace. Much of the dynamic growth of Silicon Valley and of the complexes surrounding Boston has its origins in the entrepreneurial exploitation of the huge research and development investments of large corporations.

The next wave was even more wasteful, because overhead was reduced by imposing cost-cutting targets without the benefit of redesigning any of the business processes. Companies that resorted to these crude methods did not have the experience of how to measure the value-added of information workers. Therefore, they resorted to methods that may have been somewhat effective for controlling “blue-collar” employees. That was not successful, because the same treatment that was acceptable for factory workers made the remaining management staff act in defensive and counterproductive ways to protect their positions. Such methods disoriented and demoralized many who were responsible for managing customer service.

This is where re-engineering came in. It applies well-known industrial engineering methods of process analysis, activity costing and value-added measurement, which have been around for at least 50 years.

Appearance of Re-engineering

The essence of re-engineering is to make the purge of recent excess staffing binges more palatable to managers. These executives became accustomed to increasing their own staff support as a means of gaining greater organizational clout. An unspoken convention among officials at high government and corporate levels is that a “position” in a hierarchy exists independently of whether something useful is delivered to customers. The primary purpose of high-level staffs is to act as guardians of the bureaucracy’s budget, privileges and influence.

If you want to perform surgery on management overhead, do not do it in a dark room with a machete. First, you must gain acceptance from those who know how to make the organization work well. Second, you must elicit their cooperation in telling you where the cutting will do the least damage. Third, and most importantly, they must be willing to share with you insights into where removal of an existing business process will actually improve customer service.

Budget cutters who do little else than seek out politically unprotected components cannot possibly know the full consequences of their actions.

Re-engineering offers them an easy way out. Re-engineering calls for throwing out everything that exists and recommends reconstituting a workable organization on the basis of completely fresh ideas. The new business model is expected to spring forth from the inspired insights of a new leadership team.

Re-engineering is a contemporary repackaging of industrial engineering methods from the past, rather than something totally original. This cure is now administered in large doses to business enterprises that must instantly show improved profits to survive. However, re-engineering differs from the incremental and carefully analytic methods of the past. In political form it is much closer to a coup d’état than to the methods of a parliamentary democracy.


Re-engineering in the Public Sector

Despite admirable pronouncements about re-engineering the US government, it remains to be seen whether that is a smoke screen to justify more spending. As long as the federal government continues increasing taxes - an easy way out of any cost pressures - the prospects of reinventing the government will remain dim.

Reinventing government does not deliver savings if, meanwhile, you keep expanding its scope. You can have less bureaucracy only if you eliminate functions that have demonstrably failed, such as loan guarantees, public housing, diverting schools from education to social experimentation, managing telecommunications and prescribing health care. Except for defense, justice, foreign relations and similar tasks, which are essential instruments of governance, public sector attempts at economic engineering have always failed.

The latest Washington re-engineering campaign may turn out to be a retrogression instead of an improvement. You do not enhance a stagnating economy by claiming to save a probable $108 billion so that you can add over a trillion dollars of economic control to the public sector.

An emetic will always be an emetic, regardless of the color and shape of the bottle it comes in. It does not do much for those who keep up a healthy diet by eating only what their body can use. A cure claiming to be an emetic but which nevertheless fattens will increase obesity.

Extremists

Re-engineering is a great idea and a clever new buzzword. There is not a manager who would not subscribe to the idea of taking something that is defective and then fixing it. Industrial engineers, methods analysts and efficiency experts have been doing that for a long time.

The recently introduced label of efficiency through re-engineering covers the adoption of radical means to achieve corrective actions. This extremism offers what appears to be instant relief from the pressures on corporate executives to show immediate improvements. Re-engineering, as recently promoted, is a new label that covers some consultants’ extraordinary claims.

To fully understand the intellectual roots of re-engineering, let the most vocal and generally acknowledged “guru” of re-engineering speak for himself.

“American managers ... must abandon the organizational and operational principles and procedures they are now using and create entirely new ones.” “Business re-engineering means starting all over, starting from scratch.” “It means forgetting how work was done ... old titles and old organizational arrangements ... cease to matter. How people and companies did things yesterday doesn’t matter to the business re-engineer.”

“Re-engineering ... can’t be carried out in small and cautious steps. It is an all-or-nothing proposition that produces dramatically impressive results.”

The Contributions of Mike Hammer

When Hammer was queried, “How do managers contemplating a big re-engineering effort get everyone inside their company to join up?”, he answered in terms that reflect the violent point of view of all extremists on how to achieve any progress: “... On this journey we ... shoot the dissenters.” The theme of turning destruction on your own people remains a persistent motif: “... It’s basically taking an ax and a machine gun to your existing organization.”

In view of the widespread popularity of Hammer, I wonder how executives can subscribe to such ferocious views while preaching about individual empowerment, teamwork, partnership, participative management, the knowledge-driven enterprise, the learning corporation, employee gain sharing, fellow-worker trust, common bonds, shared values, people-oriented leadership, cooperation and long-term career commitment.

I usually match the ideas of new prophets with past patterns. It helps me understand whether what’s proposed is a repackaging of what has been tried before. I find that Hammer’s sentence structure, as well as his dogmatic pronouncements, resonates with the radical views put forth by political hijackers like Robespierre, Lenin, Mao and Guevara. Just replace some of the nouns, and you can produce slogans that have been attributed to those who gained power by overthrowing the existing order.

It is no coincidence that the most widely read book on re-engineering carries the provocative subtitle “A Manifesto for Business Revolution” and claims to be a “seminal” book comparable to Adam Smith’s The Wealth of Nations - the intellectual underpinning of capitalism. All you have to remember is that there is another book, also bearing the title “Manifesto,” that successfully spread the premise that the only way to improve capitalism is to obliterate it.

What is at issue here is much more than re-engineering, which has much to commend itself. The question is one of the morality of commerce against the morality of warfare.


The morality of warfare, of vengeance, violent destruction and the use of might, has been with us ever since primitive tribes had to compete for hunting grounds. Societies have recognized the importance of warfare by sanctioning a class that was allowed, subject to some rules, to kill, while the redeeming code of loyalty and self-sacrifice for the good of all would prevail.

The morality of commerce has been with us at least since 500 BC. It is based on shunning force, coming up with voluntary agreements, collaborating even with strangers and aliens, respecting contracts and promoting exchanges that benefit both the buyer and the seller.

Just about every major national tragedy of the last few centuries can be traced to the substitution of the morality of warfare for the morality of commerce, under the guise that this will lead to greater overall prosperity. Mike Hammer’s adoption of the non-redeeming expressions of military morality has crossed the line of what ought to be acceptable. Re-engineering is and should remain an activity in the commercial domain and should be bound by its morality. Leave the military language and thinking to those who have to deal with the difficult choices one faces when confronting the prospect of getting shot at.

Effectiveness of Revolutionary Changes

I have listened carefully to the extremists who are the most prominent promoters of these proven old ideas now repackaged as a managerial innovation. Their well-financed promotional efforts have succeeded in gaining at least temporary respectability for “re-engineering.” I have found that they have successfully translated the radicalism of the 1960s, with its socially unacceptable slogan “Do not reform, obliterate!”, into a fashionable, money-making proposition. The clarion call for overthrowing the status quo is similar to that trumpeted by the radical students who occupied the Dean’s office. Now the same arguments can be fashioned into more lucrative consulting services. If you ask many of the radical proponents of re-engineering what they did while they were at university during the 1960s, you will find a surprising number who pride themselves on having been anti-establishment “revolutionaries.”

If you look at political revolutionary movements back to the French Revolution, you will find their leaders motivated by a fixation on seizing power from the Establishment under whatever slogan could be sold to those who hoped to improve justice, freedom or profit. Revolutionary leaders of the past 200 years, mostly intellectuals who hardly ever delivered anything other than pamphlets and speeches, have been consistent in making conditions worse after taking over the Establishment.


There is one thing that all past revolutionary movements have in common with the extremist views of “re-engineering.” In each case, the leaders call for the complete and uncompromising destruction of institutions as they exist. It is only through this kind of attack on customs, habits and relationships that newcomers can gain influence without much opposition. The common characteristic of the elite that agitates destructively for positions of leadership is the arrogance of believing that they are the only ones with the superior insight that can be trusted to decide what to do.

I am in favor of making evolutionary improvements in the way people work. If you want to call that re-engineering, that’s OK, though I prefer to call it “business process redesign” because the other label has become tainted by extremism. Besides, you cannot re-engineer something that has not been engineered to begin with. Organizations evolve because it is impossible to design complex human relationships as if they were machine parts.

What matters is not the label, but the means by which you help organizations improve. The long record of miscarriages of centrally planned radical reforms, and the dismal record of re-engineering as acknowledged by Mike Hammer himself, suggest that an evolutionary approach will deliver better and more permanent improvements.

Views on Business Improvement

Lasting improvements in business processes can be made only with the support of those who know your business. Creating conditions for continuous, incremental and adaptive change is the primary task of responsible leadership. Cut-backs that respond abruptly to a steadily deteriorating financial situation are a sure sign that management has been either incompetent or asleep.

Evolutionary change stimulates imagination and morale. It creates conditions that reward organizational learning and inspire employees to discover innovative ways of dealing with competitive challenges and adversity.

Dismissing employees on a large scale, accompanied by incentives for long-time employees to resign voluntarily, will paralyze those who are left with fear and an aversion to taking any initiative. It will force out those who are the most qualified to find employment elsewhere. You will end up with an organization that suffers from self-inflicted wounds while the competition is gaining on you. If you lose your best people, you will have stripped yourself of your most valuable assets. Getting rid of people because they have obsolete skills is a reflection of the organization’s past neglect of innovation and learning. Liquidating a company is easy and profitable, but somebody ought to also start thinking about how to rebuild it for growth. That is the challenge of leading today’s losers to become tomorrow’s winners.


How do you perform business process redesign under adverse conditions? How do you motivate your people to give you their best so that they may prosper again, even though some positions of privilege will change or cease to exist?

Business Process Redesign

To be successful, business process redesign depends on the commitment and imaginative cooperation of your employees. It must demonstrate that only by working together can they improve their long-term opportunities. Business process redesign relies primarily on the accumulated know-how of existing employees to find conditions that will support the creation of new jobs, even if that means that in the short run many of the existing jobs will cease to exist.

In business process redesign, the people directly affected by the potential changes study the “as-is” conditions and propose “to-be” alternatives to achieve the desired improvements. Everybody with an understanding of the business is asked to participate. External help is hired only for expertise that does not already exist internally.

Business process redesign calls for applying rigorous methods to the charting, pricing and process flow analysis of “as-is” conditions. Process redesign is never finished during the lifetime of a company. After implementing any major improvement, new payoff opportunities will always emerge from what has just been learned. The primary objective of business process improvement is to create a learning environment in which renewal and gain are an ongoing process instead of a one-time shock therapy. Adopting formal business process flow methods and a consistent technique for keeping track of local improvements makes it possible to later combine processes that were initially isolated for the short-term delivery of local gains in productivity.

Business process redesign balances the involvement of information managers, operating managers and subject matter experts. Cooperative teams are assembled under non-threatening circumstances in which much time is spent, and perhaps wasted, in discussing different points of view. Unanimity is not what business process redesign is all about. Differences are recorded, debated and passed on to higher levels of management for resolution.

Business process redesign requires that you perform a business case analysis, one which not only calculates payoffs but also reveals the risks of each proposed alternative. This is not popular, because the current methods for performing business case analysis of computerization projects call for calculations that do not have the integrity to make them acceptable to financial executives.
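As a minimal sketch of the idea, assuming hypothetical redesign alternatives and illustrative figures (none of which come from the text), such a business case can weight each alternative’s payoff by its risk of failure:

```python
# Hypothetical business case analysis: risk-adjusted comparison of
# redesign alternatives. All names and figures are illustrative.

alternatives = [
    # (alternative, annual payoff, one-time cost, probability of failure)
    ("Automate order entry",   400_000, 250_000, 0.20),
    ("Consolidate help desks", 150_000,  60_000, 0.10),
    ("Replace billing system", 900_000, 700_000, 0.50),
]

def risk_adjusted_value(payoff, cost, p_fail, years=3):
    """Expected multi-year payoff, discounted by the risk of failure."""
    return payoff * years * (1 - p_fail) - cost

for name, payoff, cost, p_fail in alternatives:
    value = risk_adjusted_value(payoff, cost, p_fail)
    print(f"{name:25s} risk-adjusted value: {value:>12,.0f}")
```

Even this crude expected-value treatment makes the risk of each alternative explicit, which is precisely what the text says current business case calculations fail to do credibly.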

The overwhelming advantage of business process redesign, as compared with “re-engineering,” lies in its approach to managing organizational change. The relatively slow and deliberate process redesign effort is more in tune with the way people normally cope with major changes. Every day should be process redesign day, because that is how organizational learning takes place and that is how you gain the commitment of your people. At each incremental stage of process redesign, your people can keep pace with their leaders, because they learn to share the same understanding of what is happening to the business. They are allowed the opportunity to think about what they are doing. They are not intimidated by precipitous layoffs that inhibit the sharing of ideas on how to use their own time and talent more effectively.

Character of Re-engineering

Re-engineering, as currently practiced, primarily by drastic dictate and with reliance on outsiders to lead it, assumes that your own people cannot be trusted to fix whatever ails your organization. Re-engineering accepts primarily what the experts, preferably newcomers to the scene, have to offer.

In re-engineering, the consultants will recommend to you what the “to-be” conditions ought to look like, without spending much time understanding the reasons for the “as-is” conditions. The credo of re-engineering is to forget what you know about your business and start with a clean slate to “reinvent” what you would like to be. What applies to individuals or nations certainly applies to corporations: you can never totally disregard your people, your relationships with customers, your assets, your accumulated knowledge and your reputation. Versions of the phrase “... throw history into the dustbin and start anew” have been attributed to every failed radical movement of the last two hundred years.

Re-engineering proponents do not worry much about formal methods. They practice techniques of emergency surgery, most often amputation. If amputation is not feasible, they resort to tourniquet-like remedies to stop the flow of red ink. Radical re-engineering may apply under emergency conditions of imminent danger, as long as someone recognizes that it will most likely leave a patient that may never recover to full health because of the demoralization of the workforce. It is much swifter than the more deliberate approach of those who practice business process redesign. No wonder the simple and quick methods are preferred by the impatient and by those who may not have to cope with the unforeseen long-term consequences for the quality and dedication of the workforce.

In re-engineering, participation by most of the existing management is superfluous, because you are out to junk what is in place anyway. Under such conditions, for instance, bringing in an executive who was good at managing a cookie company to run a computer company makes perfect sense.

In re-engineering, debates are not encouraged, since the goal is to produce a masterful stroke of insight that suddenly turns everything around. Autocratic managers thrive on an opportunity to preside over a re-engineering effort. Re-engineering also offers a new lease on the careers of chief information officers with a propensity to forge ahead with technological means as a way of introducing revolutionary changes. A number of spokesmen at recent meetings of computer executives offered re-engineering as the antidote to the slur that CIO stands for “Career Is Over.”

Re-engineering conveys a sense of urgency that does not dwell on much financial analysis, and certainly not on formal risk assessment. Managers who tend to rely on bold strokes rebel against analytic disciplines. When it comes to business case analysis, we have the traditional confrontation of the tortoise and the hare - the plodders vs. the hip-shooters. Sometimes the hip-shooters win, but the odds are against them in an endurance contest.

Re-engineering does not offer the time or the opportunities for the much-needed adaptation of an organization to changing conditions. It imposes changes swiftly by fiat, usually from a new collection of people imported to make long-overdue changes. Even if the new approach is a superior one for jarring an organization out of its ingrown bad habits, it will be hard to implement, because those who are supposed to act differently will now have a negative attitude toward doing their creative best in support of the transition from the old to the new.

Re-engineering has the advantage of being a choice of last resort when there is no time left to accomplish business process redesign. In this sense, it is akin to saying that sometimes dictatorship is more effective than community participation. Without probing why the leadership of an enterprise ever allowed such conditions to occur, I am left with a nagging doubt whether the drastic cure does not ultimately end up causing worse damage than the disease.


Constitutional democracies, despite occasional reversals in fortune, have never willingly accepted dictatorship as the way out of their troubles. On the other hand, the record of attempts to deal with crises in governance by drastic solutions is dismal. Though occasionally you may find remarkable short-term improvements, extreme solutions that have destroyed the past accumulation of human capital have always resulted in such eras of violence being viewed as times of retrogression.

Dr. Michael Hammer, a Massachusetts Institute of Technology computer science professor, and James Champy, chairman of CSC Index, gave new life and vigor to the concept of re-engineering in the early 1990s with the release of their book Reengineering the Corporation: A Manifesto for Business Revolution. Now over a decade old, Business Process Reengineering (BPR) is no longer the latest and hottest management trend. However, as BPR enters a new century, it has begun to undergo a resurgence in popularity.

Companies have seen real benefit in evaluating processes before they implement expensive technology solutions. A process can span several departmental units, including accounting, sales, production and fulfillment. By deconstructing processes and grading the activities in terms of whether or not they add value, organizations can pinpoint areas that are wasteful and inefficient. As organizations continue to implement Enterprise Resource Planning (ERP) systems, they realize that many systems were built around departmental needs rather than being geared to a specific process.
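As a minimal sketch of such value grading, assuming a hypothetical order-fulfillment process (activity names and durations are illustrative), each activity can be tagged by whether it adds customer value so that the wasteful share of cycle time is surfaced:

```python
# Hypothetical value-added analysis of a cross-departmental process.
# Activities and durations are illustrative, not taken from the text.

activities = [
    # (activity, department, hours of cycle time, adds customer value?)
    ("Take order",            "sales",       1.0,  True),
    ("Re-key order into ERP", "accounting",  0.5,  False),
    ("Credit check",          "accounting",  4.0,  True),
    ("Wait for approval",     "management", 24.0,  False),
    ("Pick and pack",         "production",  2.0,  True),
    ("Ship",                  "fulfillment", 1.0,  True),
]

total = sum(hours for _, _, hours, _ in activities)
waste = sum(hours for _, _, hours, adds in activities if not adds)

print(f"Total cycle time: {total:.1f} h; non-value-added: {waste:.1f} h "
      f"({100 * waste / total:.0f}%)")
for name, dept, hours, adds in activities:
    print(f"  [{'VALUE' if adds else 'WASTE'}] {name} ({dept}, {hours} h)")
```

Deliberately crude as it is, the grading immediately points the redesign effort at the two non-value-added activities that dominate the cycle time.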

The Essence of BPR

Hammer and Champy noted that in the business environment nothing is constant or predictable - not market growth, customer demand, product life spans, technological change, nor the nature of competition. As a result, customers, competition and change have taken on entirely new dynamics in the business world. Customers now have choice, and they expect products to be customized to their unique needs. Competition, no longer decided by “best price” alone, is driven by other factors such as quality, selection, service and responsiveness. In addition, rapid change has diminished product and service life cycles, making the need for inventiveness and adaptability even greater.

This mercurial business environment requires a switch from a task orientation to a process orientation, and it requires re-inventing how work is to be accomplished. As such, re-engineering focuses on fundamental business processes as opposed to departments or organizational units.


Re-engineering Defined

“Re-engineering is the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed.” - Hammer and Champy, 1993

The National Academy of Public Administration recast this definition for government:

“Government business process reengineering is a radical improvement approach that critically examines, rethinks, and redesigns mission product and service processes within a political environment. It achieves dramatic mission performance gains from multiple customer and stakeholder perspectives. It is a key part of a process management approach for optimal performance that continually evaluates, adjusts or removes processes.” - NAPA, 1995

Some have argued that government activities are often policy generators or oversight mechanisms that appear to add no value, yet cannot be eliminated. They question how re-engineering could have applicability in the public sector. Government, however, differs from the commercial sector only in the kinds of controls and customers it has. It still uses a set of processes aimed at providing services and products to its customers.

The Principles of Re-engineering

In Hammer and Champy’s original Manifesto, re-engineering was by definition radical; it could not simply be an enhancement or modification of what went before. It examined work in terms of outcomes, not tasks or unit functions, and it expected dramatic rather than marginal improvements. The authors suggested seven principles of re-engineering that would streamline work processes, achieve savings, and improve product quality and time management.

Seven principles of re-engineering

1. Organize around outcomes, not tasks.

2. Identify all processes in an organization and prioritize them in order of redesign urgency.

3. Integrate information processing work into the real work that produces the information.

4. Treat geographically dispersed resources as though they are centralized.


5. Link parallel activities in the workflow instead of just integrating their results.

6. Put the decision point where the work is performed, and build control into theprocess.

7. Capture information once and at the source.

The Benefits of Re-engineering

The hard task of re-examining the mission and how it is being delivered on a day-to-day basis will have fundamental impacts on an organization, especially in terms of responsiveness and accountability to customers and stakeholders. Among its many rewards, re-engineering:

• Empowers employees

• Eliminates waste, unnecessary management overhead, and obsolete or inefficient processes

• Often produces significant reductions in cost and cycle times

• Enables revolutionary improvements in many business processes as measured by quality and customer service

• Helps top organizations stay on top and low-achievers become effective competitors.

Re-engineering: A Functional Management Approach

Implementation of a re-engineering initiative usually has considerable impact across organizational boundaries, as well as on suppliers and customers. Re-engineering can generate a significant change in:

• Product and service requirements

• Controls or constraints imposed on a business process

• The technological platform that supports a business process.

For this reason, it requires sensitivity to employee attitudes as well as to the impact of change on employees’ lives.

What is a business process?

“A business process is a structured, measured set of activities designed to produce a specified output for a particular customer or market.”


Selecting a process

Wise organizations will focus on those core processes that are critical to their performance, rather than marginal processes that have little impact. There are several criteria re-engineering practitioners can use for determining the importance of a process (a small scoring sketch follows this list):

• Is the process broken?

• Is it feasible that reengineering of this process will succeed?

• Does it have a high impact on the agency’s strategic direction?

• Does it significantly impact customer satisfaction?

• Is it antiquated?

• Does it fall far below “Best-in-Class”?
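As a minimal sketch, assuming hypothetical candidate processes with made-up weights and 1-5 scores (nothing here comes from the text), the criteria above can be turned into a simple weighted scoring model for deciding where to start:

```python
# Hypothetical weighted scoring of candidate processes against the
# selection criteria listed above. Weights and 1-5 scores are illustrative.

criteria_weights = {
    "broken": 0.25, "feasible": 0.15, "strategic_impact": 0.25,
    "customer_impact": 0.20, "antiquated": 0.05, "below_best_in_class": 0.10,
}

candidates = {
    "Claims handling": {"broken": 5, "feasible": 3, "strategic_impact": 4,
                        "customer_impact": 5, "antiquated": 4,
                        "below_best_in_class": 4},
    "Payroll":         {"broken": 2, "feasible": 5, "strategic_impact": 1,
                        "customer_impact": 1, "antiquated": 3,
                        "below_best_in_class": 2},
}

for name, scores in candidates.items():
    total = sum(w * scores[c] for c, w in criteria_weights.items())
    print(f"{name:16s} weighted score: {total:.2f}")
```

The weights are a management judgment; the value of the exercise is that it forces that judgment to be explicit and comparable across candidates.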

DoD has suggested that the following six tasks be part of any functional management approach to re-engineering projects:

1. Define the framework. Define functional objectives; determine the management strategy to be followed in streamlining and standardizing processes; and establish the process, data, and information systems baselines from which to begin process improvement.

2. Analyze. Analyze business processes to eliminate non-value-added processes; simplify and streamline processes of little value; and identify more effective and efficient alternatives to the process, data, and system baselines.

3. Evaluate. Conduct a preliminary functional economic analysis to evaluate alternatives to baseline processes and select a preferred course of action.

4. Plan. Develop detailed statements of requirements, baseline impacts, costs, benefits, and schedules to implement the planned course of action.

5. Approve. Finalize the functional economic analysis using information from the planning data, and present it to senior management for approval to proceed with the proposed process improvements and any associated data or system changes.

6. Execute. Execute the approved process and data changes, and provide functional management oversight of any associated information system changes.


Ensuring Re-engineering Success

Much research has been conducted to determine why many re-engineering projects fail or miss the mark. DoD has indicated that organizations successful in re-engineering planning share a number of common traits:

• They are strongly supported by the CEO

• They break re-engineering into small or medium-sized elements

• Most have a willingness to tolerate change and to withstand the uncertainties that change can generate

• Many have systems, processes, or strategies that are worth hiding from competitors.

Six critical success factors from government experience

In a publication for the National Academy of Public Administration, Dr. Sharon L. Caudle identified six critical success factors that ensure government re-engineering initiatives achieve the desired results:

1. Understand re-engineering.

o Understand business process fundamentals.

o Know what re-engineering is.

o Differentiate and integrate process improvement approaches.

2. Build a business and political case.

o Have necessary and sufficient business (mission delivery) reasons for re-engineering.

o Have the organizational commitment and capacity to initiate and sustain re-engineering.

o Secure and sustain political support for re-engineering projects.

3. Adopt a process management approach.

o Understand the organizational mandate and set mission-strategic directions and goals, cascading to process-specific goals and decision-making across and down the organization.

o Define, model, and prioritize business processes important for mission performance.


o Practice hands-on senior management ownership of process improvement through personal involvement, responsibility, and decision-making.

o Adjust organizational structure to better support process management initiatives.

o Create an assessment program to evaluate process management.

4. Measure and track performance continuously.

o Create organizational understanding of the value of measurement and how it will be used.

o Tie performance management to customer and stakeholder current and future expectations.

5. Practice change management and provide central support.

o Develop human resource management strategies to support re-engineering.

o Build information resources management strategies and a technology framework to support process change.

o Create a central support group to assist and integrate re-engineering efforts and other improvement efforts across the organization.

o Create an overarching and project-specific internal and external communication and education program.

6. Manage re-engineering projects for results.

o Have clear criteria to select what should be re-engineered.

o Place the project at the right level, with a defined re-engineering team purpose and goals.

o Use a well-trained, diversified, expert team to ensure optimum project performance.

o Follow a structured, disciplined approach for re-engineering.

Applying re-engineering principles to health care

Business process re-engineering has its roots in commercial manufacturing. Development work in the 1970s and 1980s in a range of commercial settings showed significant benefits from a systematic approach to the analysis and restructuring of manufacturing processes. It was widely adopted as a means of improving manufacturing output, producing significant improvements in quality, capacity and cost. The approach attracted the interest of managers in the NHS, and two projects (at Leicester Royal Infirmary and at Kings Healthcare in London) were funded by the Department of Health to test its application in a health care setting.

The work started in Leicester in 1994. It involved a significant programme of 140 separate projects. A Framework for Defining Success was established to ensure that the impact of the work in its widest sense could be measured. Over the last five years significant gains have been made in the quality of services offered to patients, as well as in teaching and research. Indeed, few departments in the hospital were untouched by the initiative.

The key lesson from the work at Leicester has been that change is typically created bottom-up, in contrast to the top-down approach championed by the academic supporters of re-engineering. Clinical and management leaders have to create the right conditions for improvement. Redesigning health care differs in significant ways from redesign in industrial settings; it involves several distinct steps:

• Identifying specific patient groups - targets for service process redesign.

• Ensuring that those involved in service provision are involved in service redesign.

• Being clear about the tools and techniques available.

• Analysing the current process to identify strengths and weaknesses: what adds value and what doesn’t?

• Creating a model for the redesigned service.

• Establishing performance measures.

• Testing the new process - being honest - does it or doesn’t it work?

• Then doing what works.

Leicester Royal Infirmary has created a tool-kit that describes the tools and techniques used for patient process redesign. The tool-kit provides the basis for a series of Re-engineering Masterclasses that have attracted clinical and managerial interest within the UK and internationally, with visitors from health services in New Zealand, Sweden and Denmark taking part. The Leicester Royal Infirmary’s dissemination work has been recognised with the granting of specialist Learning Centre status, confirming its role as an integral part of the growing NHS Learning Network.

Using business process re-engineering principles in educational reform?

Introduction

The objective of this paper is to apply business re-engineering principles to the reform of primary and secondary education. In particular, using information technology (especially telecommunications and networking), educational reforms can be accomplished proactively despite the dynamic environment and demands.

Efforts to reform education through information technology (Corcoran, 1993) have entered the mainstream of society and of the corporation. Reforming education through information technology bears a remarkable resemblance to efforts to reform business organizations through information technology! (Business Week, April 1983) It is therefore appropriate to re-examine educational reform in terms of business language and concepts as used in business process re-engineering. Re-engineering strives to break away from the old ways and rules about how we organize and conduct business. It involves recognizing and rejecting some of the old ways and then finding imaginative new ways to accomplish work (Hammer, pp. 104-105). In this way, we might learn lessons from the corporate experience of applying information technology in order to reform educational structures. This perspective may also enable leaders of the corporate community to better understand the context of educational reforms and their necessary role in promoting and assisting those reforms.

Can we apply the language (and experience) of business process re-engineering to educational reform? In his landmark paper on re-engineering, Hammer (p. 105) asserts:

In a time of rapidly changing technologies and ever-shorter product life cycles, product development often proceeds at a glacial pace.

In an age of the customer, order fulfillment has high error rates and customer inquiries go unanswered for weeks.

In a period when asset utilization is critical, inventory levels exceed many months of demand.

Small changes in wording to adapt these assertions to education reveal a remarkable analogy!

In a time of rapidly changing history and ever-shorter political and economic cycles, curriculum development often proceeds at a glacial pace.


In an age of keen competition and higher standards, student achievement has high failure rates and student needs go unanswered.

In a period of limited resources, educational costs continue to climb but are often a diminishing proportion of infrastructure investment.

These analogies should therefore provide us with a new perspective for re-engineering education using advanced information technology.

Re-engineering primary and secondary education

The world has changed, but education hasn’t necessarily adapted to these changes. At a recent Principals’ Conference in Singapore, John Yip, Director of Education, was quoted (Leong):

“It is crucial that we have a good education system which is relevant to the times. Change is inevitable.... Help students to develop attitudes and skills with which they can independently seek knowledge, process information and apply it to tackle issues.”

We still have “industrial age” schools that are unable to meet the needs of our emerging “information age” society! Davis (p. 24) claims that all organizations based on the industrial model are created for “businesses” that either no longer exist or are in the process of going out of existence.

Again quoting Hammer (p. 107), we can draw another analogy between the business climate and the educational climate:

Quality, innovation, and service are now more important than cost, growth, and control.

It should come as no surprise that our business processes and structures are outmoded and obsolete: our work structures and processes have not kept pace with the changes in technology, demographics, and business objectives.

Again, small changes in wording to adapt these statements to education reveal a remarkable analogy!

Quality, innovation, and creativity are now more important than cost and standardized test scores.

It should come as no surprise that our school processes and structures are outmoded and obsolete: our teaching structures and processes have not kept pace with the changes in technology, demographics, and societal conditions.


Educational problems

Corcoran (p. 66) remarks that “networks are changing the way teachers teach and students learn.” Can we apply networking to re-engineer primary and secondary education? What are the problems of education today in our dynamic contemporary world?

Recognizing the traditional isolation of teachers (and students), Newman (p. 49) argues that we must make a choice between systems that (merely) deliver traditional instruction from a central repository and systems that enable teachers and students to access and gather information from distributed resources and communities. The experience of teacher Sandra McCourtney (Corcoran, p. 67) demonstrates that a network can bring children the excitement of the outside world. Even independent research by students is possible, as recognized by Bob Hughes (Corcoran, p. 66), Boeing’s corporate director of education relations, who sees computer networks as key to turning out students who adapt to change and who solve problems by seeking out and applying new ideas.

In one of the recent flood of articles on networking in the mainstream press, Markoff laments inequities such as the different levels of access between the information “haves” and “have-nots”, and prejudices due to professional rank, gender, race, religion, national origin, or physical ability.

Hunter (p. 44) offers us hope that the assumptions of the present educational system, namely that some learners and populations are “underserved” because they live in particular places, and that learning opportunities are necessarily tied to local resources, are open to rethinking in a highly networked environment. As a progressive force for change, equity, and the restructuring of primary and secondary education, information technology has been offered as a mechanism for fostering change (Gillman, 1989). More specifically, many proponents have identified networking as a mechanism for change. In particular, efforts to promote the US National Research and Education Network (NREN) for use in primary and secondary education have been most representative of this point of view! Perhaps the most ambitious effort is the National School Network Testbed (Bernstein, et al.), a national research and development resource in which schools, school districts, community organizations, state education agencies, technology developers, and industry partners are experimenting with applications that bring significant new educational benefits to teachers and students. The Consortium for School Networking (CoSN) has been most active in promoting this movement through its on-line Internet discussion ([email protected]) and other activities, e.g. gopher cosn.org. For membership information, send mail to [email protected].

Indeed, the need for educational reform is generally recognized. Applying the concept of internetworking to educational innovation, it becomes possible for every individual and group involved in educational change and research to be a direct contributor to the collective process of innovation. Examples of opportunities for direct contribution include (Hunter, p. 44):

improved communications among school district personnel, and sharing expertise among teachers of different disciplines and geographical locations.

Critical shifts in application of information technology

The literature on organizational change and the future abounds with dramatic predictions of the need for organizations to adapt to profound change. Sproull and Kiesler (p. 116) claim that the networked organization differs from the conventional workplace with respect to both time and space, so that managers can use networks to foster new kinds of task structures and reporting relationships. Davis (p. 5) claims that while the new economy is in the early decades of its unfolding, businesses continue to use organization models that were more appropriate to previous times than to current needs.

In terms of information technology, Tapscott and Caston (pp. 14-17) identify organizational changes that are enabled by information technology and network access in particular. Networking enables the informal web of relations that people develop with each other inside the organization (Davis, p. 86) so as to get things done. Likewise, examples of how educational organizations might adopt these network innovations are not difficult to imagine. Hunter (1993) provides many examples of how network access changes the nature of teaching and learning. Hunter (p. 42) claims that new models of learning and teaching are made possible by the assumption that learners and teachers, as individuals and groups, can interact with geographically and institutionally distributed human and information resources.

Integrated organization

With the evolution from system islands to integrated systems, network access enables inter-disciplinary instruction. Network access “flattens” the instructional development process by giving teachers access to previously inaccessible information and teaching resources. Hunter (p. 42) speculates that applying the concepts and technology of internetworking may make it possible for the separate reform efforts of diverse groups and individuals to contribute to the building of a new educational system providing more accessible, higher-quality learning opportunities for everyone.

Extended enterprise

With the evolution from internal to inter-enterprise computing, more people, e.g. parents and business people, will become active in schools by “dropping in” electronically for a short time every day. Furthermore, students will leave the confines of the classroom and make electronic visits to museums, libraries, businesses, and governments around the world.

Does the business process re-engineering model fit educational problems?

So how ought we to apply the re-engineering model to reform education? Hammer (pp. 108-111) offers principles of re-engineering. Could these principles be applied to education? Let’s see!

Organize around outcomes, not tasks

Have one person perform all the steps in a process, and design that person’s job around an objective or outcome instead of a single task (Hammer, p. 108).

In the business sense, the idea (Hammer, p. 106) is to sweep away existing job definitions and departmental boundaries and to create a new position through empowerment.

In the world of educational telecommunications, the experience of teacher Ed Barry (Corcoran, p. 68) reveals that the role of the teacher changes to “manager, not a dispenser of information” because the teacher is empowered!

Have those who use the output of the process perform the process

Opportunities exist to re-engineer processes so that the individuals who need the result of a process can do it themselves (Hammer, p. 109).

In the business sense, when the people closest to the process perform it, there is little need for the overhead associated with managing it (Hammer, p. 109). By the same token, Kay (p. 146) has discovered that children learn in the same way as adults, in that they learn best when they can ask questions, seek answers in many places, consider different perspectives, exchange views with others and add their own findings to existing understandings.

Subsume information-processing work into the real work that produces the information

Move work from one person or department to another (Hammer, p. 110)

Empower students to search for the “answers” in heretofore inaccessible places.


As a means to reduce the isolation of classroom teachers, Hunter (1993, p. 43) claims that a thread woven throughout most networked learning innovations is the idea that schooling can be more closely linked to work in the real world.

Treat geographically dispersed resources as though they were centralized

Use databases, telecommunication networks, and standardized processing systems to get the benefits of scale and coordination while maintaining the benefits of flexibility and service (Hammer, p. 110).

Use databases, telecommunication networks, and computer-supported courseware to get the benefits of sharing curriculum development while maintaining the benefits of individualized learning and customization.

Link parallel activities instead of integrating their results

Forge links between parallel functions and coordinate them while their activities are in process, rather than after they are completed (Hammer, p. 110).

Often, separate units perform different activities that must eventually come together. For example, curriculum developers often prepare instructional materials independently for teachers to use. Instead, enabling teachers and students to communicate easily with curriculum developers during the development and testing of materials ought to lead to better materials produced in a shorter time.

Put the decision point where the work is performed and build control into the process

People who do the work should make the decisions, and the process itself can have built-in controls (Hammer, p. 111).

In general, we should “empower” teachers and students! For example, opportunities exist to re-engineer teaching so that teachers can tap the expertise of curriculum developers and subject matter experts. Then teachers can become self-managing and self-controlling. In a project called Learning Through Collaborative Visualization (Hunter, p. 43), students and teachers are working directly with scientists at the University of Michigan, the Exploratorium (museum) in San Francisco, the National Center for Supercomputing Applications in Urbana-Champaign (Illinois, USA), and the Technical Education Research Center (Cambridge, MA, USA) on inquiries in atmospheric science, using two-way audio-video technology being developed by Bellcore and Ameritech.


Current research efforts

In the hope of making a case for the reforms described here, two research projects on educational telecommunications are experimenting with these principles.

Common Knowledge: Pittsburgh

Common Knowledge: Pittsburgh is a US National Science Foundation-funded project to test the impact of Internet access on the Pittsburgh (Pennsylvania) Public Schools.

Singapore pilot project

Singapore’s Ministry of Education is pioneering Internet access in several junior colleges (grades 11-12) and a secondary school. These projects are conducting experiments to address (through telecommunications) the educational problems described earlier.

Collaborative efforts

Collaboration between these international partners is intended to enable a comparative evaluation of the impact of educational telecommunications on two very different educational systems. The United States is generally recognized for its stronger climate for innovation, with notable educational experiments such as:

Apple Vivarium Program (Kay)

Cityspace (Markoff)

On the other hand, Singapore and other Asian nations (Hirsch) are generally recognized for their stronger climate for teaching the fundamentals.

Does the model fit?

Re-engineering triggers changes of many kinds, not just of the business process itself. Job designs, organizational structures, management systems - anything associated with the process - must be refashioned in an integrated way. In other words, re-engineering is a tremendous effort that mandates change in many areas of the organization (Hammer, p. 112). Surely primary and secondary education deserve the same attention, and information technology has as much potential to reorganize education as it does to reorganize work!

The business re-engineering model is remarkably apt for educational reform. Perhaps this novel (and somewhat provocative) approach may encourage fresh ideas in this difficult task.


SUMMARY

The importance and indispensability of control in the production process is emphasized in this unit. Statistical Process Control has its own evolutionary character, and that is described. Various control charts, along with their applications, are dealt with elaborately. Process capability and its significance for bringing out a quality product are discussed in detail. The concept of Six Sigma and the methodology for adopting it are presented for the use of the readers. Bill Smith, the father of Six Sigma, evolved the concept; its application in different industries and the outcomes are placed before the readers. Reliability concepts and their importance are explained with applications in various industries. The Product Life Characteristics Curve, with its phases and quality requirements, is deliberated. Total Productive Maintenance overlaps considerably with TQM. The eight pillars of TPM, viz. the 5S components, Jishu Hozen (autonomous maintenance), Kaizen, planned maintenance, quality maintenance, training, office TPM, and safety, health and environment, are elaborated for better understanding. Life-Cycle Costing, otherwise popularly known as terotechnology, is deliberated with a focus on how to realize it. Business Process Re-engineering, its fundamentals and methodology, are handled in this unit. Deliberations on whether BPR is a fad or a rebirth of scientific management are also conducted. The re-engineering process is elaborately dealt with in conjunction with BPR. Examples from various sectors are also highlighted.

REVIEW QUESTIONS

1. Explain why statistical process control is of utmost significance to the quality assurance process.

2. What are the types of control charts? Explain the construction of any one of them.

3. Highlight the meaning, significance and measurement of process capability.

4. What are DMAIC and DMADV? Explain their steps.


5. Explain the contributions of Bill Smith and how they are used in cost saving.

6. Describe the process of process capability measurement.

7. Explain the role of TPM in the product life cycle.

8. Detail the steps in the introduction of TPM in an organization.

9. What is Life Cycle Costing? Demonstrate it with an example.

10. “BPR - a rebirth of scientific management” - Critically examine.

11. Discuss the criticisms and benefits of re-engineering.


UNIT IV

TOOLS AND TECHNIQUES FOR QUALITY MANAGEMENT

INTRODUCTION

To achieve success in quality management, one needs to be in possession of various tools and techniques. To establish a house of quality, we need umbrella-like coverage in the form of quality function deployment. Loss minimization, failure reduction and process reliability improvement can be achieved through the application of various statistical and management tools, old and new. This exercise has to be done by identifying a benchmark and trying hard to surpass it. This unit deals with Quality Function Deployment (QFD), Benefits, Voice of Customer, Information Organization, House of Quality (HOQ), Building a HOQ, QFD Process, Failure Mode Effect Analysis (FMEA), Requirements of reliability, Failure Rate, FMEA stages, design, process and documentation, Taguchi Techniques - Introduction, Loss function, Parameter and Tolerance Design, Signal to Noise Ratio, Seven Old (Statistical) Tools, Seven New Management Tools, Benchmarking and Poka Yoke.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:

• Build a house of quality
• Appreciate the importance of QFD and the process of deployment
• Use different analytical tools to detect and prevent failure and losses
• Compare the old and new management tools pertaining to Quality Management.

4.1 QUALITY FUNCTION DEPLOYMENT (QFD), BENEFITS

VOICE OF CUSTOMER

House of Quality is a graphic tool for defining the relationship between customer desires and the firm’s/product’s capabilities. It is part of quality function deployment (QFD), and it utilizes a planning matrix to relate customer wants to how a firm (that produces the products) is going to meet those wants. It looks like a house, with a correlation matrix as its roof, customer wants vs. product features as the main part, competitor evaluation as the porch, etc.

The House of Quality is the first matrix in a four-phase QFD (Quality Function Deployment) process. It is called the House of Quality because of the correlation matrix that is roof-shaped and sits on top of the main body of the matrix. The correlation matrix evaluates how the defined product specifications optimize or sub-optimize each other.

The House of Quality is commonly associated with QFD and, in the minds of many who learned the topic from obsolete examples and books, is the only thing that need be done.

Glenn Mazur, executive director of the QFD Institute, explains it this way:

“Think of the House of Quality (HOQ) like a highway interchange between the voice of the customer (VOC) and the voice of the engineer (VOE).”
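As a minimal sketch of the main body of such a matrix, using hypothetical customer wants, engineering characteristics and the common 1-3-9 relationship weighting (none of which come from the text), customer priorities can be translated into technical priorities:

```python
# Hypothetical House of Quality main body: customer wants (rows) vs.
# engineering characteristics (columns). The 1/3/9 relationship scale
# and all names and weights are illustrative.

wants   = ["Easy to open", "Keeps contents fresh", "Low cost"]
weights = [5, 4, 3]  # customer importance ratings

characteristics = ["Seal strength", "Material thickness", "Part count"]

# relationship[i][j]: how strongly characteristic j serves want i
relationship = [
    [9, 1, 3],
    [9, 9, 0],
    [0, 3, 9],
]

# Technical priority = importance-weighted column sum
for j, ch in enumerate(characteristics):
    score = sum(weights[i] * relationship[i][j] for i in range(len(wants)))
    print(f"{ch:20s} technical priority: {score}")
```

The characteristic with the highest weighted score is the item in the “voice of the engineer” that most deserves design attention.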

4.2 INFORMATION ORGANIZATION

Data and information

• Many people use the terms “data” and “information” as synonyms, but these two terms actually convey very distinct concepts

• “Data” is defined as a body of facts or figures which have been gathered systematically for one or more specific purposes

o Data can exist in the form of:
- Linguistic expressions (e.g. name, age, address, date, ownership)
- Symbolic expressions (e.g. traffic signs)
- Mathematical expressions (e.g. E = mc²)
- Signals (e.g. electromagnetic waves)

• “Information” is defined as data which have been processed into a form that is meaningful to a recipient and is of perceived value in current or prospective decision-making

o Although data are ingredients of information, not all data make useful information

Data not properly collected and organized are a burden ratherthan an asset to an information userData that make useful information for one person may not beuseful to another person

o Information is only useful to its recipients when it isRelevant (to its intended purposes and with appropriate level ofrequired detail)Reliable, accurate and verifiable (by independent means)


  Up-to-date and timely (depending on purposes)
  Complete (in terms of attribute, spatial and temporal coverage)
  Intelligible (i.e. comprehensible by its recipients)
  Consistent (with other sources of information)
  Convenient/easy to handle and adequately protected

• The function of an information system is to change “data” into “information”, using the following processes:

o Conversion — transforming data from one format to another, from one unit of measurement to another, and/or from one feature classification to another

o Organization — organizing or re-organizing data according to database management rules and procedures so that they can be accessed cost-effectively

o Structuring — formatting or re-formatting data so that they can be acceptable to a particular software application or information system

o Modeling — including statistical analysis and visualization of data that will improve the user’s knowledge base and intelligence in decision making

• The concepts of “organization” and “structure” are crucial to the functioning of information systems — without organization and structure, it is simply impossible to turn data into information
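
A minimal sketch in Python may make these four processes concrete; the record layout, field names and values below are invented for illustration, not taken from any particular system:

# Hypothetical raw records; all names and values are illustrative assumptions.
raw_records = [
    {"stand_id": "S-102", "species": "pine", "height_m": "11.0"},
    {"stand_id": "S-101", "species": "oak", "height_m": "18.4"},
]

# Conversion: transform the height from one unit of measurement to another.
converted = [dict(r, height_ft=float(r["height_m"]) * 3.281) for r in raw_records]

# Organization: re-organize the records so they can be accessed efficiently.
organized = sorted(converted, key=lambda r: r["stand_id"])

# Structuring: re-format the data so another application (a CSV reader) accepts it.
csv_lines = ["stand_id,species,height_ft"]
csv_lines += [f"{r['stand_id']},{r['species']},{r['height_ft']:.1f}" for r in organized]

# Modeling: a simple statistical summary that improves the user's knowledge base.
average_height = sum(r["height_ft"] for r in organized) / len(organized)

print("\n".join(csv_lines))
print(f"Average height: {average_height:.1f} ft")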

The Information Domain

• An information system is designed to process data, i.e. to accept input (data), manipulate it in some way, and produce output (information)

• It is also designed to process events — an event represents a problem or system control which triggers data processing procedures in an information system

• The information domain of an information system therefore includes both data (i.e., characters, numbers, images and sound) and events (i.e., problem and control)

• There are three different components of the information domain:

o Information organization (also referred to as information structure) — the internal organization of various data and event items
  The design and implementation of information organization is referred to as data structure

o Information contents and relationships — the attributes relating to the data and the events, and the relationships with one another


  The process of identifying information contents and relationships is known as data modeling in information system design

o Information flow — the ways by which data and events change as they are processed by the information system
  The process of identifying information flow is known as process modeling in information system design

• The above views of the information domain provide the conceptual framework that links database management and application development in information systems

o It signifies that information organization and data structure are not only important for the management of data, but also for the development of software applications that utilize these data

Information Organization

• Information organization can be understood from four perspectives:
o A data perspective
o A relationship perspective
o An operating system (OS) perspective
o An application architecture perspective

The data perspective of information organization

• The information organization of geographic data must be considered in terms of their descriptive elements and graphical elements because:

o These two types of data elements have distinctly different characteristics
o They have different storage requirements
o They have different processing requirements

Information organization of descriptive data

• For descriptive data, the most basic element of information organization is called a data item

o A data item represents an occurrence or instance of a particular characteristic pertaining to an entity (which can be a person, thing, event or phenomenon)
  It is the smallest unit of stored data in a database, commonly referred to as an attribute
  In database terminology, an attribute is also referred to as a stored field


  The value of an attribute can be in the form of a number (integer or floating-point), a character string, a date or a logical expression (e.g. T for ‘true’ or ‘present’; F for ‘false’ or ‘absent’)
  Some attributes have a definite set of values known as permissible values or domain of values (e.g. age of people from 1 to 150; the categories in a land use classification scheme; and the academic departments in a university)

• A group of related data items forms a record

o By related data items, it means that the items are occurrences of different characteristics pertaining to the same person, thing, event or phenomenon (e.g. in a forest resource inventory, a record may contain related data items such as stand identification number, dominant tree species, average height and average breast height diameter)

o A record may contain a combination of data items having different types of values (e.g. in the above example, a record has two character strings representing the stand identification number and dominant tree species; an integer representing the average tree height rounded to the nearest meter; and a floating-point number representing the average breast height diameter in meters)
  In database terminology, a record is always formally referred to as a stored record
  In relational database management systems, records are called tuples

• A set of related records constitutes a data file

o By related records, it means that the records represent different occurrences of the same type or class of people, things, events and phenomena
  A data file made up of a single record type with single-valued data items is called a flat file
  A data file made up of a single record type with nested repeating groups of items forming a multi-level organization is called a hierarchical file

o A data file is individually identified by a filename

o A data file may contain records having different types of data values or having a single type of data value
  A data file containing records made up of character strings is called a text file or ASCII file


  A data file containing records made up of numerical values in binary format is called a binary file

o In data processing literature, collections of data items or records are sometimes referred to by terms other than “data file”, according to their characteristics and functions

  An array is a collection of data items of the same size and type (although they may have different values)
    A one-dimensional array is called a vector
    A two-dimensional array is called a matrix

  A table is a data file with data items arranged in rows and columns
    Data files in relational databases are organized as tables
    Such tables are also called relations in relational database terminology

  A list is a finite, ordered sequence of data items (known as elements)
    By “ordered”, it means that each element has a position in the list
    An ordered list has elements positioned in ascending order of values, while an unordered list has no permanent relation between element values and position
    Each element has a data type
    In the simple list implementation, all elements must have the same data type, but there is no conceptual objection to lists whose elements have different data types

  A tree is a data file in which each data item is attached to one or more data items directly beneath it
    The connections between data items are called branches
    Trees are often called inverted trees because they are normally drawn with the root at the top
    The data items at the very bottom of an inverted tree are called leaves; other data items are called nodes
    A binary tree is a special type of inverted tree in which each element has only two branches below it

  A heap is a special type of binary tree in which the value of each node is greater than the values of its leaves


    Heap files are created for sorting data in computer processing — the heap sort algorithm works by first organizing a list of data into a heap (a brief sketch follows this list of terms)

  A stack is a collection of cards in Apple Computer’s HyperCard software system
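
The heap sort idea mentioned above can be demonstrated in a few lines; note that Python's heapq module implements the min-heap variant (each node is smaller than its children), the mirror image of the max-heap described in the text:

import heapq

data = [42, 7, 19, 3, 25]

heapq.heapify(data)  # first organize the list of data into a heap
sorted_data = [heapq.heappop(data) for _ in range(len(data))]

print(sorted_data)   # [3, 7, 19, 25, 42]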

• The concept of the database is the approach to information organization in computer-based data processing today

o A database is defined as an automated, formally defined and centrally controlled collection of persistent data used and shared by different users in an enterprise

  The above definition excludes informal, private and manual collections of data
  “Centrally controlled” does not mean “physically centralized” — databases today tend to be physically distributed in different computer systems, at the same or different locations
  A database is set up to serve the information needs of an organization
  Data sharing is key to the concept of the database
  Data in a database are described as “permanent” in the sense that they are different from “transient” data such as input to and output from an information system
    The data usually remain in the database for a considerable length of time, although the actual content of the data can change very frequently

o The use of a database does not mean the demise of data files
  Data in a database are still organized and stored as data files
  The use of a database represents a change in the perception of data, the mode of data processing and the purposes of using the data (Table 1), rather than in the physical storage of the data

o Databases can be organized in different ways, known as database models
  The three conventional database models are: relational, network and hierarchical
    Relational — data are organized by records in relations, which resemble tables
    Network — data are organized by records which are classified into record types, with 1:n pointers linking associated records


    Hierarchical — data are organized by records in parent-child one-to-many relations

  The emerging database model is object-oriented
    Data are uniquely identified as individual objects that are classified into object types or classes according to the characteristics (attributes and operations) of the object
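
As a small sketch of the relational model, the snippet below stores records as tuples in a relation (table) using Python's built-in sqlite3 module; the table and column names are illustrative assumptions:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stand (stand_id TEXT, species TEXT, avg_height_m INTEGER)")
conn.executemany(
    "INSERT INTO stand VALUES (?, ?, ?)",
    [("S-101", "spruce", 18), ("S-102", "pine", 14)],
)

# Each row returned is a tuple -- the relational term for a stored record.
for row in conn.execute("SELECT * FROM stand WHERE avg_height_m > 15"):
    print(row)  # ('S-101', 'spruce', 18)

conn.close()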

Information organization of graphical data

• For graphical data, the most basic element of information organization is called a basic graphical element

o There are three basic graphical elements:
  Point
  Line, also referred to as arc
  Polygon, also referred to as area

o These basic graphical elements can be individually used to represent geographic features or entities
  For example: a point for a well, a line for a road segment and a polygon for a lake

o They can also be used to construct complex features
  For example: the geographic entity “Hawaii” on a map is represented by a group of polygons of different sizes and shapes

• The method of representing geographic features by the basic graphical elements of points, lines and polygons is said to be the vector method or vector data model, and the data are called vector data

o Related vector data are always organized by themes, which are also referred to as layers or coverages
  Examples of themes: geodetic control, base map, soil, vegetation cover, land use, transportation, drainage and hydrology, political boundaries, land parcels and others

o For themes covering a very large geographic area, the data are always divided into tiles so that they can be managed more easily
  A tile is the digital equivalent of an individual map in a map series
  A tile is uniquely identified by a file name

o A collection of themes of vector data covering the same geographic area and serving the common needs of a multitude of users constitutes the spatial component of a geographical database


o The vector method of representing geographic features is based on the concept that these features can be identified as discrete entities or objects
  This method is therefore based on the object view of the real world
  The object view is the method of information organization in conventional mapping and cartography

• Graphical data captured by imaging devices in remote sensing and digital cartography (such as multi-spectral scanners, digital cameras and image scanners) are made up of a matrix of picture elements (pixels) of very fine resolution

o Geographic features in such form of data can be visually recognized but not individually identified in the same way that geographic features are identified in the vector method

o They are recognizable by differentiating their spectral or radiometric characteristics from pixels of adjacent features
  For example, a lake can be visually recognized on a satellite image because the pixels forming it are darker than those of the surrounding features; but the pixels forming the lake are not identified as a single discrete geographic entity, i.e. they remain individual pixels
  Similarly, a highway can be visually recognized on the same satellite image because of its particular shape; but the pixels forming the highway do not constitute a single discrete geographic entity as in the case of vector data

• The method of representing geographic features by pixels is called the raster method or raster data model, and the data are described as raster data

o The raster method is also called the tessellation method

o A raster pixel is usually a square grid cell, but there are several variants such as triangles and hexagons

o A raster pixel represents the generalized characteristics of an area of specific size on or near the surface of the Earth
  The actual ground size depicted by a pixel is dependent on the resolution of the data, which may range from smaller than a square meter to several square kilometers

o Raster data are organized by themes, which are also referred to as layers
  For example, a raster geographic database may contain the following themes: bed rock geology, vegetation cover, land use, topography, hydrology, rainfall, temperature


o Raster data covering a large geographic area are organized by scenes (for remote sensing images) or by raster data files (for images obtained by map scanning)

o The raster method is based on the concept that geographic features are represented as surfaces, regions or segments

o This method is therefore based on the field view of the real world

o The field view is the method of information organization in image analysis systems in remote sensing, and in geographic information systems for resource- and environment-oriented applications

• In the past, the vector and raster methods represented two distinct approaches to information systems

o They were based on different concepts of information organization and data structure

o They used different technologies for data input and output

• Recent advances in computer technologies allow these two types of data to be used in the same applications

o Computers are now capable of converting data from the vector format to the raster format (rasterization) and vice versa (vectorization)

o Computers are now able to display vector and raster data simultaneously

o The old debate on the usefulness of these two approaches to information organization does not seem to be relevant any more

o Vector and raster data are largely seen as complementary to, rather than competing against, one another in geographic data processing
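
The contrast between the two data models can be sketched as follows; all coordinates and pixel values are invented for illustration:

# Vector model: discrete objects built from points, lines (arcs) and polygons.
well_point = (3.0, 4.0)
road_line = [(0.0, 0.0), (2.0, 1.0), (5.0, 1.5)]
lake_polygon = [(1.0, 1.0), (4.0, 1.0), (4.0, 3.0), (1.0, 3.0)]

# Raster model: a field of pixels, each generalizing an area of ground.
# Here 1 marks the darker "lake" pixels on a hypothetical image: visually
# recognizable as a lake, but not stored as one discrete entity.
raster = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

lake_pixel_count = sum(cell for row in raster for cell in row)
print(lake_pixel_count)  # 6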

The relationship perspective of information organization

• Relationships represent an important concept in information organization — they describe the logical association between entities

o Relationships can be categorical or spatial, depending on whether they describe location or other characteristics

Categorical relationships

• Categorical relationships describe the association among individual features in a classification system

o The classification of data is based on the concept of scale of measurement

o There are four scales of measurement:

  Nominal — a qualitative, non-numerical and non-ranking scale that classifies features on intrinsic characteristics


    For example, in a land use classification scheme, polygons can be classified as industrial, commercial, residential, agricultural, public and institutional

  Ordinal — a nominal scale with ranking, which differentiates features according to a particular order
    For example, in a land use classification scheme, residential land can be denoted as low density, medium density and high density

  Interval — an ordinal scale with ranking based on numerical values that are recorded with reference to an arbitrary datum
    For example, temperature readings in degrees centigrade are measured with reference to an arbitrary zero (i.e. zero degree temperature does not mean no temperature)

  Ratio — an interval scale with ranking based on numerical values that are measured with reference to an absolute datum
    For example, rainfall data are recorded in mm with reference to an absolute zero (i.e. zero mm rainfall means no rainfall)
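
A compact way to keep the four scales straight is to tag each attribute with its scale, as in the sketch below, which reuses the text's own examples; the dictionary layout is an assumption:

scales = {
    "land_use": ("nominal", "residential"),  # intrinsic category, no order
    "density_class": ("ordinal", "medium"),  # ranked category
    "temperature_c": ("interval", 22.5),     # arbitrary zero
    "rainfall_mm": ("ratio", 0.0),           # absolute zero: no rainfall
}

# Only interval and ratio values support arithmetic; only ratio values
# support meaningful ratios (e.g. "twice as much rainfall").
for name, (scale, value) in scales.items():
    print(f"{name}: {value} ({scale} scale)")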

• Categorical relationships based on ranking are hierarchical or taxonomic in nature

o This means that data are classified into progressively different levels of detail
  Data in the top level are represented by a limited number of broad basic categories
  Data in each basic category are then classified into different sub-categories, which can be further classified into another level if necessary

o The classification of descriptive data is typically based on categorical relationships

Spatial relationships

• Spatial relationships describe the association among different features in space

o Spatial relationships are visually obvious when data are presented in graphical form

o However, it is difficult to build spatial relationships into the information organization and data structure of a database
  There are numerous types of spatial relationships possible among features


  Recording spatial relationships explicitly demands considerable storage space
  Computing spatial relationships on the fly slows down data processing, particularly if relationship information is required frequently

• There are two types of spatial relationships:

o Topological — describes the properties of adjacency, connectivity and containment of contiguous features

o Proximal — describes the property of closeness of non-contiguous features

• Spatial relationships are very important in geographical data processing and modeling

o The objective of information organization and data structure is to find a way that will handle spatial relationships with the minimum storage and computation requirements

The operating system (OS) perspective of information organization

• From the operating system perspective, information is organized in the form of directories

o Directories are a special type of computer file used to organize other files into a hierarchical structure
  Directories are also referred to as folders, particularly in systems using graphical user interfaces

o A directory may also contain one or more directories
  The topmost directory in a computer is called the root directory
  A directory that is below another directory is referred to as a sub-directory
  A directory that is above another directory is referred to as a parent directory

o Directories are designed for bookkeeping purposes in computer systems
  A directory is identified by a unique directory name
  Computer files of the same nature are usually put under the same directory
  A data file can be accessed in a computer system by specifying a path that is made up of the device name, one or more directory names and its own file name
    For example: c:\project101\mapdata\basemap\nw2367.dat

o The concept of workspace used by many geographic information system software packages is based on the directory structure of the host computer


  A workspace is a directory under which all data files relating to a particular project are stored
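
The access path given in the example above can be decomposed into its device name, directory names and file name; a small sketch using Python's pathlib:

from pathlib import PureWindowsPath

path = PureWindowsPath(r"c:\project101\mapdata\basemap\nw2367.dat")

print(path.drive)        # 'c:' -- the device name
print(path.parts[1:-1])  # ('project101', 'mapdata', 'basemap') -- directories
print(path.name)         # 'nw2367.dat' -- the file name
print(path.parent)       # the enclosing (workspace-style) directory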

The application architecture perspective of information organization

• Computer applications nowadays tend to be constructed on the client/server systems architecture

• Client/server is primarily a relationship between processes running in the same computer or, more commonly, in separate computers across a telecommunication network

o The client is a process that requests services
  The dialog between the client and the server is always initiated by the client
  A client can request services from many servers at the same time

o The server is a process that provides the service
  A server is primarily a passive service provider
  A server can service many clients at the same time

• There are many ways of implementing a client/server architecture, but from the perspective of information organization, the following five are most important:

o File servers — the client requests specific records from a file, and the server returns these records to the client by transmitting them across the network

o Database servers — the client sends structured query language (SQL) requests to the server; the server finds the required information by processing these requests and then passes the results back to the client

o Transaction servers — the client invokes a remote procedure that executes a transaction at the server side; the server returns the result back to the client via the network

o Web servers — communicating interactively by the Hypertext Transfer Protocol (HTTP) over the Internet, the Web server returns documents when clients ask for them by name

o Groupware servers — this particular type of server provides a set of applications that allow clients (and their users) to communicate with one another using text, images, bulletin boards, video and other forms of media
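
As one concrete instance of the client/server pattern, the sketch below runs a minimal web server that passively returns documents from the current directory when clients ask for them by name; the host and port are arbitrary assumptions:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# The server is a passive provider: each dialog is initiated by a client
# sending an HTTP GET request for a document name.
server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving documents on port 8000 ...")
server.serve_forever()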

• From the application architecture perspective, the objective of information organization and data structure is to develop a data design strategy that will optimize system operation by:


o Balancing the distribution of data resources between the client and the server
  Databases are typically located on the server to enable data sharing by multiple users
  Static data that are used for reference are usually allocated to the client

o Ensuring the logical allocation of data resources among different servers
  Data that are commonly used together should be placed in the same server
  Data that have common security requirements should be placed in the same server
  Data intended for a particular purpose (file service, database query, transaction processing, Web browsing or groupware applications) should be placed in the appropriate server

o Standardizing and maintaining metadata (i.e. data about data) to facilitate the search for the availability and characteristics of existing data

4.3 HOUSE OF QUALITY (HOQ), BUILDING A HOQ

QFD was first put forth in 1966 in quality assurance work done by Prof. Yoji Akao and Mr. Oshiumi of Bridgestone Tire. Its purpose was to show the connections between true quality, quality characteristics, and process characteristics. This was done using the Fishbone Diagram, with true quality in the heads and quality and process characteristics in the bones. For more complex products, Mitsubishi Heavy Industry Kobe Shipyards combined these many fishbones into a matrix. In 1979, Mr. Sawada of Toyota Auto Body used the matrix in a reliability study which permitted him to address technical trade-offs in the quality characteristics. This was done by adding a “roof” to the top of the matrix, which he then dubbed the “House of Quality.”


FIGURE 4.1

Building a House of Quality:

The House of Quality is actually an assembly of other deployment hierarchies and tables. These include the Demanded Quality Hierarchy (rows), the Quality Characteristics Hierarchy (columns), the relationships matrix which relates them using any one of several distribution methods, the Quality Planning Table (right-side room), and the Design Planning Table (bottom room).
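
The arithmetic the relationships matrix supports can be sketched as follows: customer-importance weights on the Demanded Quality Hierarchy (rows) are combined with relationship strengths to score each quality characteristic (column). The 9/3/1 strength scale and all names below are illustrative assumptions, not a fixed part of QFD:

demanded_quality = {"easy to carry": 5, "long battery life": 3}

# Strength of the relationship between each demanded quality (row)
# and each quality characteristic (column).
relationships = {
    "easy to carry": {"weight_g": 9, "battery_mah": 1},
    "long battery life": {"weight_g": 3, "battery_mah": 9},
}

scores = {}
for dq, importance in demanded_quality.items():
    for characteristic, strength in relationships[dq].items():
        scores[characteristic] = scores.get(characteristic, 0) + importance * strength

print(scores)  # {'weight_g': 54, 'battery_mah': 32}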

Many people who haphazardly learned the over-simplified, obsolete version of QFD decades ago, and have failed to update their knowledge since then, refer to these rooms by undifferentiated terms such as Whats, Hows, etc. Sadly, this includes many book authors, professors, and consultants. This is not a wise way to do QFD because it limits your ability to apply QFD to only the most elementary form. It could even be detrimental for today’s businesses that operate in complex environments. It is recommended that such terms be abandoned and that users refer to the actual data by name. This makes sense when there are multiple matrices in use, and proper naming conventions add clarity to the process.

Critical Tool for Design for Six Sigma Black Belts

The House of Quality has become a critical tool for Design for Six Sigma (DFSS). It serves the purpose of displaying complex transfer functions Y = f(X), where Y are the Critical to Customer Satisfaction factors and X the Critical to Quality factors.


Other matrices can perform lower-level transfer functions as well. Objective measures, target specifications, tolerances, and DPMO can also be added to the Design Planning Table. KPOV and KPIV can also be related in similar matrix formats.

The Myth about the House of Quality

Most interesting is that in many QFD studies, the House of Quality (HOQ) is not the starting point and can even be unnecessary. That “the House of Quality is the QFD” is a myth that is still propagated by many people and books of outdated QFD knowledge, even though Dr. Yoji Akao (founder of QFD) has repeatedly warned that it is not QFD by itself.

4.4 QFD PROCESS

Quality Function Deployment (QFD)

QFD is a rigorous method for translating customer needs, wants, and wishes into step-by-step procedures for delivering the product or service. While delivering better designs tailored to customer needs, Quality Function Deployment also cuts the normal development cycle by 50%, making you faster to market.

QFD uses the “QFD House of Quality” (a template in the QI Macros) to help structure your thinking, making sure nothing is left out.

FIGURE 4.2

There are four key steps to QFD thinking:

1. Product Planning - Translating what the customer wants (in their language, e.g., portable, convenient phone service) into a list of prioritized product/service design requirements (in your language, e.g., cell phones) that describes how the product works. It also compares your performance with your competition’s, and sets targets for improvement to differentiate your product/service from your competitor’s.


2. Part Planning - Translating product specifications (design criteria from step 1) into part characteristics (e.g., light weight, belt-clip, battery-driven, not hardwired but radio-frequency based).

3. Process Planning - Translating part characteristics (from step 2) into optimal process characteristics that maximize your ability to deliver Six Sigma quality (e.g., the ability to “hand off” a cellular call from one antenna to another without interruption).

4. Production Planning - Translating process characteristics (from step 3) into manufacturing or service delivery methods that will optimize your ability to deliver Six Sigma quality in the most efficient manner (e.g., cellular antennas installed with overlapping coverage to eliminate dropped calls).

Even in my small business, I often use the Quality Function Deployment template to evaluate and design a new product or service. It helps me think through every aspect of what my customers want and how to deliver it. It saves me a lot of “clean up” on the backend. It doesn’t always mean that I get everything right, but I get more of it right, which translates into greater sales and higher profitability with less rework on my part.

That’s the power of QFD.

4.5 FAILURE MODE EFFECT ANALYSIS (FMEA)

FMEA Design and Process

FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique and quality method that enables the identification and prevention of process or product errors before they occur. Within healthcare, the goal is to avoid adverse events that could potentially cause harm to patients, families, employees or others in the patient care setting.

As a tool embedded within the Six Sigma methodology, FMEA can help identify and eliminate concerns early in the development of a process or new service delivery. It is a systematic way to examine a process prospectively for possible ways in which failure can occur, and then to redesign the process so that the new model eliminates the possibility of failure. Properly executed, FMEA can assist in improving overall satisfaction and safety levels. There are many ways to evaluate the safety and quality of healthcare services, but when trying to design a safe care environment, a proactive approach is far preferable to a reactive approach.


Definitions of FMEA

FMEA evolved as a process tool used by the United States military as early as 1949, but application in healthcare didn’t occur until the early 1990s, around the time Six Sigma began to emerge as a viable process improvement methodology.

One of several reliability evaluation and design analysis tools, FMEA also can be defined as:

• A problem prevention tool used to identify weak areas of the process and develop plans to prevent their occurrence.

• A semi-quantitative, inductive, bottom-up approach executed by a team.

• A tool recommended for Joint Commission on Accreditation of Healthcare Organizations (JCAHO) Standard LD.5.2.

• A structured approach to identify the ways in which a process can fail to meet critical customer requirements.

• A way to estimate the risk of specific causes with regard to these failures.

• A method for evaluating the current control plan for preventing failures from occurring.

• A prioritization process for actions that should be taken to improve the situation.

Why Do an FMEA?

Historically, healthcare has performed root cause analysis after sentinel events, medical errors or when a mistake occurs. With the added focus on safety and error reduction, however, it is important to analyze information from a prospective point of view to see what could go wrong before the adverse event occurs. Examining the entire process and support systems involved in the specific events – and not just the recurrence of the event – requires rigor and proven methodologies.

Here are some potential targets for an FMEA application:

• New processes being designed

• Existing processes being changed

• Carry-over processes for use in new applications or new environments

• After completing a problem-solving study (to prevent recurrence)


• When a preliminary understanding of the processes is available (for a Process FMEA)

• After system functions are defined, but before specific hardware is selected (for a System FMEA)

• After product functions are defined, but before the design is approved and released to manufacturing (for a Design FMEA)

Roles and Responsibilities

The FMEA team members will have various responsibilities. In healthcare, the terms multi-disciplinary or collaboration teams are used to refer to members from different departments or professions. Leaders must lay the groundwork conducive to improvement for the team initiative, with empowerment to make the changes and recommendations for change, plus time to do the work.

The FMEA team should not exceed 6 to 10 people, although this may depend on the process stage. Each team should have a leader and/or facilitator, record keeper or scribe, time keeper and a champion. In the data gathering or sensing stage, extensive voice of the customer may be required. During the FMEA design meeting, however, the team must have members knowledgeable about the process or subject matter. It is advisable to include facilitators with skills in team dynamics and rapid decision-making. Ground rules help define the scope and provide parameters to work within.

The team should consider questions such as: What will success look like? What is the timeline? The FMEA provides the metrics or control plan. The goal of the preparation is to have a complete understanding of the process you are analyzing. What are its steps? What are its inputs and outputs? How are they related?

Techniques for Accelerating Change

While Six Sigma is based on solid principles and well-founded data, without departmental or organizational acceptance of change, Six Sigma solutions and tools such as FMEA may not be effective. Teams may decide to use change management tools such as CAP (Change Acceleration Process) to help build support and facilitate rapid improvement. Careful planning, communication, participation and ensuring that senior leaders are well-informed throughout the process will greatly increase the chance of a smoother implementation.

Approach the FMEA process with a clear understanding of the challenges, an effective approach to overcome those challenges, and a plan to demonstrate a solid


track record of results. To gain leadership support, clearly define the value and return on investment for required resources.

Supporting FMEA Using Influence Strategy – Once key stakeholders are known and their political, technical or cultural attitudes have been discussed (and verified), the task is to build an effective strategy for influencing them to strengthen, or at a minimum maintain, their level of support. This simple tool helps the team assess stakeholder issues and concerns, identifying and creating a strategy for those who must be “moved” to a higher level of support.

Benefits of FMEA

Here are the benefits of FMEA:

• Captures the collective knowledge of a team

• Improves the quality, reliability, and safety of the process

• Logical, structured process for identifying process areas of concern

• Reduces process development time, cost

• Documents and tracks risk reduction activities

• Helps to identify Critical-To-Quality characteristics (CTQs)

• Provides historical records; establishes baseline

• Helps increase customer satisfaction and safety

FMEA reduces the time spent considering potential problems with a design concept, and keeps crucial elements of the project from slipping through the cracks. As each FMEA is updated with unanticipated failure modes, it becomes the baseline for the next generation design. Reduction in process development time can come from an increased ability to carry structured information forward from project to project, and this can drive repeatability and reproducibility across the system.

Types of FMEA

Process FMEA: Used to analyze transactional processes. The focus is on failure to produce the intended requirement, i.e. a defect. Failure modes may stem from the causes identified.

System FMEA: A specific category of Design FMEA used to analyze systems and subsystems in the early concept and design stages. Focuses on potential failure modes associated with the functionality of a system caused by design.


Design FMEA: Used to analyze component designs. Focuses on potential failure modes associated with the functionality of a component caused by design. Failure modes may be derived from causes identified in the System FMEA.

Other types:

• FMECA (Failure Mode, Effects, Criticality Analysis): Considers every possible failure mode and its effect on the product/service. Goes a step beyond FMEA and considers the criticality of the effect and the actions which must be taken to compensate for this effect (critical = loss of life/product).

• A d-FMEA evaluates how a product can fail, and the likelihood that the proposed design process will anticipate and prevent the problem.

• A p-FMEA evaluates how a process can fail, and the likelihood that the proposed control will anticipate and prevent the problem.

FMEA Requires Teamwork

A cause creates a failure mode, and a failure mode creates an effect on the customer. Each team member must understand the process, sub-processes and their interrelations. If people are confused in this phase, the process reflects confusion. FMEA requires teamwork: gathering information, making evaluations and implementing changes with accountability. By combining Six Sigma, change management and FMEA you can achieve:

• Better quality and clinical outcomes

• Safer environment for patients, families and employees

• Greater efficiency and reduced costs

• Stronger leadership capabilities

• Increased revenue and market share

• Optimized technology and workflow

Understanding how to use the right process or facilitation tool at the right time in healthcare can help providers move quality up, costs down and variability out. And that leads to preventing one failure before it harms one individual.

Anticipate Problems and Minimize Their Occurrence and Impact

Failure Modes and Effects Analysis (FMEA) is one of the most widely used and effective tools for developing quality designs, processes, and services.


When criticality is considered, an FMEA is oftentimes referred to as an FMECA (Failure Modes, Effects, and Criticality Analysis). In this document, the term FMEA is used in a general sense to include both FMEAs and FMECAs.

Developed during the design stage, FMEAs are procedures by which:

• Potential failure modes of a system are analyzed to determine their effects on the system.

• Potential failure modes are classified according to their severity (FMEAs) or to their severity and probability of occurrence (FMECAs).

• Actions are recommended to either eliminate or compensate for unacceptable effects.

When introduced in the late 1960s, FMEAs were used primarily to assess the safety and reliability of system components in the aerospace industry. During the late 1980s, FMEAs were applied to manufacturing and assembly processes by Ford Motor Company to improve production. Today, FMEAs are being used for the design of products and processes as well as for the design of software and services in virtually all industries. As markets continue to become more intense and competitive, FMEAs can help to ensure that new products, which consumers demand be brought to market quickly, are both highly reliable and affordable.

The principal objectives of FMEAs are to anticipate the most important design problems early in the development process and either to prevent these problems from occurring or to minimize their consequences as cost-effectively as possible. In addition, FMEAs provide a formal and systematic approach for design development and actually aid in evaluating, tracking, and updating both design and development efforts. Because the FMEA is begun early in the design phase and is maintained throughout the life of the system, the FMEA becomes a diary of the design and of all changes that affect system quality and reliability.

Types of FMEAs

All FMEAs focus on design and assess the impact of failure on system performance and safety. However, FMEAs are generally categorized based on whether they analyze product design or the processes involved in manufacturing and assembling the product.

• Product FMEAs. Examine the ways that products (typically hardware or software) can fail and affect product operation. Product FMEAs indicate what can be done to prevent potential design failures.


• Process FMEAs. Examine the ways that failures in manufacturing and assembly processes can affect the operation and quality of a product or service. Process FMEAs indicate what can be done to prevent potential process failures prior to the first production run.

Although FMEAs can be initiated at any system level and use either a top-down or bottom-up approach, today’s products and processes tend to be complex. As a result, most FMEAs use an inductive, bottom-up approach, starting the analysis with the failure modes of the lowest-level items of the system and then successively iterating through the next higher levels, ending at the system level. Regardless of the direction in which the system is analyzed, all potential failure modes are to be identified and documented on FMEA worksheets (hard copy or electronic), where they are then classified in relation to the severity of their effects.

In a very simple product FMEA, for example, a computer monitor may have a capacitor as one of its components. By looking at the design specifications, it can be determined that if the capacitor is open (failure mode), the display appears with wavy lines (failure effect). And, if the capacitor is shorted (failure mode), the monitor goes blank (failure effect). When assessing these two failure modes, the shorted capacitor would be ranked as more critical because the monitor becomes completely unusable. On the FMEA worksheet, the ways in which this failure mode can either be prevented or its severity lessened would be indicated.

Approaches to FMEAs

Product and process FMEAs can be further categorized by the level on which the failure modes are to be presented.

• Functional FMEAs. Focus on the functions that a product, process, or service is to perform rather than on the characteristics of the specific implementation. When developing a functional FMEA, a functional block diagram is used to identify the top-level failure modes for each functional block on the diagram. For a heater, for example, two potential failure modes would be: “Heater fails to heat” and “Heater always heats.” Because FMEAs are best begun during the conceptual design phase, long before specific hardware information is available, the functional approach is generally the most practical and feasible approach by which to begin an FMEA, especially for large, complex products or processes that are more easily understood by function than by the details of their operation. When systems are very complex, the analysis for functional FMEAs generally begins at the highest system level and uses a top-down approach.


• Interface FMEAs. Focus on the interconnections between system elements so that the failures between them can be determined and recorded and compliance to requirements can be verified. When developing interface FMEAs, failure modes are usually developed for each interface type (electrical cabling, wires, fiber optic lines, mechanical linkages, hydraulic lines, pneumatic lines, signals, software, etc.). Beginning an interface FMEA as soon as the system interconnections are defined ensures that proper protocols are used and that all interconnections are compliant with design requirements.

• Detailed FMEAs. Focus on the characteristics of specific implementations to ensure that designs comply with requirements for failures that can cause loss of end-item function, single-point failures, and fault detection and isolation. Once individual items of a system (piece-parts, software routines, or process steps) are uniquely identified in the later design and development stages, FMEAs can assess the failure causes and effects of failure modes on the lowest-level system items. Detailed FMEAs for hardware, commonly referred to as piece-part FMEAs, are the most common FMEA applications. They generally begin at the lowest piece-part level and use a bottom-up approach to check design verification, compliance, and validation.

Variations in design complexity and data availability will dictate the analysis approach to be used. Some cases may require that part of the analysis be performed at the functional level and other portions at the interface and detailed levels. In other cases, initial requirements may be for a functional FMEA that is to later progress to an interface FMEA, and then finally progress to a detailed FMEA. Thus, FMEAs completed for more complex systems often include worksheets that employ all three approaches to FMEA development.

Failure mode and effects analysis

Failure mode and effects analysis (FMEA) is a method (first developed for systems engineering) that examines potential failures in products or processes. It may be used to evaluate risk management priorities for mitigating known threat vulnerabilities.

FMEA helps select remedial actions that reduce cumulative impacts of life-cycle consequences (risks) from a system failure (fault).

By adapting hazard tree analysis to facilitate visual learning, this method illustrates connections between multiple contributing causes and cumulative (life-cycle) consequences. It is used in many formal quality systems such as QS-9000 or ISO/TS 16949.


The basic process is to take a description of the parts of a system and list the consequences if each part fails. In most formal systems, the consequences are then evaluated by three criteria and associated risk indices:

• severity (S),

• likelihood of occurrence (O) (note: this is also often known as probability (P)), and

• inability of controls to detect it (D)

Each index ranges from 1 (lowest risk) to 10 (highest risk). The overall risk of each failure is called the Risk Priority Number (RPN) and is the product of the Severity (S), Occurrence (O), and Detection (D) rankings: RPN = S × O × D. The RPN (ranging from 1 to 1000) is used to prioritize all potential failures to decide upon actions leading to reduced risk, usually by reducing the likelihood of occurrence and improving controls for detecting the failure.
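
A minimal sketch of this calculation:

def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    # Each index runs from 1 (lowest risk) to 10 (highest risk).
    for index in (severity, occurrence, detection):
        if not 1 <= index <= 10:
            raise ValueError("each index must be between 1 and 10")
    return severity * occurrence * detection

print(risk_priority_number(7, 4, 5))  # 140, out of a possible 1000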

Applications

FMEA is most commonly applied to, but not limited to, design (Design FMEA) and manufacturing processes (Process FMEA).

Design failure modes effects analysis (DFMEA) identifies potential failures of a design before they occur. DFMEA then goes on to establish the potential effects of the failures, their cause, how often and when they might occur, and their potential seriousness.

Process failure modes effects analysis (PFMEA) is a systemized group of activities intended to:

1. Recognize and evaluate the potential failure of a product/process and its effect,

2. Identify actions which could eliminate or reduce the occurrence, or improve detectability,

3. Document the process, and

4. Track changes incorporated into the process to avoid potential failures.

FMEA Analysis is very important for dynamic positioning systems.

FMEA analysis is often applied through the use of an FMEA table combined with a rating chart to allow designers to assign values to the:

1. Severity of potential failures.

2. Likelihood of a potential failure occurring.

3. The chance of detection within the process


The numbers are then multiplied to create the Risk Priority Number (RPN).

These data can then be collected in a table such as the following:

TABLE 4.1

Part | Function | Failure Mode | Effect of Failure | Severity Rating | Potential Cause of Failure | Occurrence Rating | Possible Means of Detection | Detection Rating | RPN | Preventative Actions to be Taken


DISADVANTAGES

FMEA is useful mostly as a survey method to identify major failure modes in a system. It is not able to discover complex failure modes involving multiple failures or subsystems, or to discover expected failure intervals of particular failure modes. For these, a different method called fault tree analysis is used.

Elementary Method “Failure Mode Effect Analysis” (FMEA)

Objective and Purpose

The Failure Mode Effect Analysis (FMEA) is a method used for the identification of potential error types in order to define their effect on the examined object (System, Segment, SW/HW Unit) and to classify the error types with regard to criticality or persistency. This is to prevent errors, and thus weak points in the design, which might result in endangering or loss of the system/software and/or in endangering the persons connected with the system/software. The FMEA is also to furnish results for corrective measures, for the definition of test cases and for the determination of operating and application conditions of the system/software.

Means of Representation

Means to represent the FMEA are, e.g.:

• sequence descriptions of critical functions

• functional and reliability block diagrams

• error trees

• error classification lists

• lists of critical functions and items

Operational Sequence

The basic principle is that, both in the functional hierarchy and in the program logic, defined success or error criteria are systematically (functionally and chronologically) queried: what happens if? This analysis and evaluation has to be realized for all operating phases and operating possibilities.

The FMEA process consists of the following main steps:

• FMEA planning to identify the FMEA goals and levels

• definition of specific procedures, basic rules, and criteria for the FMEA realization


• analysis of the system with regard to functions, interfaces, operating phases, operating modes, and environment

• design and analysis of functional and reliability block diagrams or error tree diagrams to illustrate the processes, interconnections, and dependencies

• identification of potential error types

• evaluation and classification of the error types and their effects

• identification of measures to prevent errors and to check errors

• evaluation of the effects of suggested measures

• documentation of the results

Limits of the Method’s Application

Within the scope of submodel PM, the application of FMEA is limited to projects with very restrictive planned data or high requirements; a general application of FMEA would not be appropriate, considering the required effort and costs in comparison with the achieved results.

Within the scope of the submodels SD and QA, the method FMEA is applied if the reliability requirements for the system or for functional units are high.

Failure Modes and Effects Analysis (FMEA)

Also called: potential failure modes and effects analysis; failure modes, effects and criticality analysis (FMECA).

Description

Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service.

“Failure modes” means the ways, or modes, in which something might fail. Failures are any errors or defects, especially ones that affect the customer, and can be potential or actual.

“Effects analysis” refers to studying the consequences of those failures.

Failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. The purpose of the FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones.

An FMEA also documents current knowledge and actions about the risks of failures, for use in continuous improvement. FMEA is used during design to prevent


failures. Later it’s used for control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.

Begun in the 1940s by the U.S. military, FMEA was further developed by the aerospace and automotive industries. Several industries maintain formal FMEA standards.

What follows is an overview and reference. Before undertaking an FMEA process, learn more about standards and specific methods in your organization and industry through other references and training.

When to Use

• When a process, product or service is being designed or redesigned, after quality function deployment.

• When an existing process, product or service is being applied in a new way.

• Before developing control plans for a new or modified process.

• When improvement goals are planned for an existing process, product or service.

• When analyzing failures of an existing process, product or service.

• Periodically throughout the life of the process, product or service.

Process

(Again, this is a general procedure. Specific details may vary with standards of your organization or industry.)

1. Assemble a cross-functional team of people with diverse knowledge about the process, product or service and customer needs. Functions often included are: design, manufacturing, quality, testing, reliability, maintenance, purchasing (and suppliers), sales, marketing (and customers) and customer service.

2. Identify the scope of the FMEA. Is it for concept, system, design, process or service? What are the boundaries? How detailed should we be? Use flowcharts to identify the scope and to make sure every team member understands it in detail. (From here on, we’ll use the word “scope” to mean the system, design, process or service that is the subject of your FMEA.)


3. Fill in the identifying information at the top of your FMEA form. The figure shows a typical format. The remaining steps ask for information that will go into the columns of the form.

FMEA example

TABLE 4.2


4. Identify the functions of your scope. Ask, “What is the purpose of this system, design, process or service? What do our customers expect it to do?” Name it with a verb followed by a noun. Usually you will break the scope into separate subsystems, items, parts, assemblies or process steps and identify the function of each.

5. For each function, identify all the ways failure could happen. These are potential failure modes. If necessary, go back and rewrite the function with more detail to be sure the failure modes show a loss of that function.

6. For each failure mode, identify all the consequences on the system, related systems, process, related processes, product, service, customer or regulations. These are potential effects of failure. Ask, “What does the customer experience because of this failure? What happens when this failure occurs?”

7. Determine how serious each effect is. This is the severity rating, or S. Severity is usually rated on a scale from 1 to 10, where 1 is insignificant and 10 is catastrophic. If a failure mode has more than one effect, write on the FMEA table only the highest severity rating for that failure mode.

8. For each failure mode, determine all the potential root causes. Use tools classified as cause analysis tools, as well as the best knowledge and experience of the team. List all possible causes for each failure mode on the FMEA form.

9. For each cause, determine the occurrence rating, or O. This rating estimates the probability of failure occurring for that reason during the lifetime of your scope. Occurrence is usually rated on a scale from 1 to 10, where 1 is extremely unlikely and 10 is inevitable. On the FMEA table, list the occurrence rating for each cause.

10. For each cause, identify current process controls. These are tests, procedures or mechanisms that you now have in place to keep failures from reaching the customer. These controls might prevent the cause from happening, reduce the likelihood that it will happen or detect failure after the cause has already happened but before the customer is affected.

11. For each control, determine the detection rating, or D. This rating estimates how well the controls can detect either the cause or its failure mode after they have happened but before the customer is affected. Detection is usually rated on a scale from 1 to 10, where 1 means the control is absolutely certain to detect the problem and 10 means the control is certain not to detect the problem (or no control exists). On the FMEA table, list the detection rating for each cause.


12. (Optional for most industries) Is this failure mode associated with a critical characteristic? (Critical characteristics are measurements or indicators that reflect safety or compliance with government regulations and need special controls.) If so, a column labeled "Classification" receives a Y or N to show whether special controls are needed. Usually, critical characteristics have a severity of 9 or 10 and occurrence and detection ratings above 3.

13. Calculate the risk priority number, or RPN, which equals S × O × D. Also calculate Criticality by multiplying severity by occurrence, S × O. These numbers provide guidance for ranking potential failures in the order they should be addressed. (A small calculation sketch follows this list.)

14. Identify recommended actions. These actions may be design or process changes to lower severity or occurrence. They may be additional controls to improve detection. Also note who is responsible for the actions and target completion dates.

15. As actions are completed, note results and the date on the FMEA form. Also, note new S, O or D ratings and new RPNs.
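
To make the arithmetic in step 13 concrete, here is a minimal Python sketch of RPN and Criticality ranking; the failure-mode names and ratings are invented for illustration, not taken from any particular FMEA.

# Minimal sketch of RPN and Criticality ranking for FMEA rows.
# The failure modes and ratings below are hypothetical examples.
failure_modes = [
    # (failure mode, Severity, Occurrence, Detection)
    ("Bearing seizes",  8, 4, 3),
    ("Seal leaks",      5, 6, 2),
    ("Shaft fractures", 9, 2, 4),
]

for name, s, o, d in failure_modes:
    rpn = s * o * d          # Risk Priority Number = S x O x D
    criticality = s * o      # Criticality = S x O
    print(f"{name:15s} RPN={rpn:3d}  Criticality={criticality:2d}")

# Rank failure modes so the highest RPN is addressed first.
ranked = sorted(failure_modes, key=lambda row: row[1] * row[2] * row[3], reverse=True)
print("Address first:", ranked[0][0])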

4.6 FMEA STAGES: DESIGN AND DOCUMENTATION

Step 1 - Perform Preparatory Work

Before beginning any analysis, it is important to do some preliminary prep work. This analysis is no different. The first thing that needs to be accomplished is to select a system to analyze. For instance, we may want to select a small subset of the facility, as opposed to selecting the entire facility, as our system.

Once we know what system we want to work on, we must define failure. This may seem trivial, but it is an essential step in the analysis. If we were to ask 100 people to define failure, we would probably get 100 different definitions. This would make our analysis far too broad. We need to focus, not on everything, but on the things that are most important to our business at that point in time. For instance, if utilization is critical to our business today, we should center our definition around utilization; if our priority issue is quality, then our definition should center around quality.

Let’s take a look at some examples of common failure definitions:

Failure is any loss that interrupts the continuity of production.

Failure is a loss of asset availability.

Failure is the unavailability of equipment.

Failure is a deviation from the status quo.


Failure is not meeting target expectations.

Failure is any secondary defect.

The definitions above are some common industrial failure definitions. Please note that there are no perfect failure definitions. For instance, "Failure is any loss that interrupts the continuity of production" would have to include planned shutdowns, rate reductions for decreased sales, etc. It would not pick up failures on equipment that is spared, since such failures do not interrupt the continuity of production.

A precise failure definition is important since it focuses the facility on the priority issues. It fosters good communication since everyone knows what is important, and it also provides a basis for a common understanding of what the facility's needs are. Not to mention, it is an essential step in the development of a "Significant Few" failure list.

There are a few rules of thumb to consider when developing a failure definition. It must be concise and easily understandable. If it is not, it will leave too much room for interpretation. It should not have to be interpreted. It must only address one topic. This is important to maintain the focus of the analysis. If we include too many topics, our target becomes too large. Finally, it should be approved and signed by someone in authority so that everyone in the organization sees that it is a priority issue.

The next step in the preparation process is to develop a contact flow diagram. The contact flow diagram will allow you to break down your system into smaller, more manageable subsystems. The rule for this diagram is to map all of the process units that come into contact with the product. This diagram, as well as the failure definition, will be used when we begin to collect the data for the analysis.

The next thing we need to accomplish before we begin our FMEA is to perform a gap analysis. In other words, we need to uncover the disparity between what we are producing now and what our potential is. This will give us some indication of the potential opportunity in our facility. For instance, we produce widgets in our facility, and we currently produce 150,000 per year. However, our potential is 300,000 per year. So we have a gap of 150,000 widgets per year.

The final step in the preparation stage is to design a preliminary interview sheet and a schedule of people to interview to collect the data. This will be the form to assist you in collecting the data from your interviews.

To put this all into perspective, the following is a checklist of items to be covered prior to beginning a FMEA.


TABLE 4.3: FMEA Preparation Checklist

FMEA Preparatory Steps                     Completed (Y/N)
Define the system to analyze
Define failure
Draw a contact diagram
Calculate the gap
Develop data worksheets
Develop preliminary interview schedule

Step 2 – Collection and Documentation of Data

There are a couple of ways of collecting the data for this analysis. You can rely on your computer data systems (i.e. a Maintenance Management System) or you can go to the people who are closest to the work and get their input. Although each has its advantages, interviewing is probably the best since the information will be coming straight from the source. If you have enough confidence in your data systems, then it will be useful to use that information to later validate your interviews.

At this point let's discuss how you would use interviews to collect the data for your analysis. The process is really quite simple. Let's look at a simple scenario.

You send out a message to all of the people that you would like to interview. You state the date, time and a brief description of the FMEA process for the interviewees. Note: it is important to interview at least 2 or 3 people in each session so that the interviewees can bounce ideas off of each other. Once in the room, you will need to display a large copy of the contact flow diagram and the failure definition so that they are in clear view of the interviewees. Now you will begin the process of asking your questions. There really is only one initiating question that needs to be asked: "What events or conditions satisfy the definition of failure within each of the subsystems in the contact flow diagram?" At this point the interviewees will begin to brainstorm all of the failure events that they have experienced within each of the subsystems. Once you have exhausted all of the possibilities, ask the interviewees what the frequency and impact are for each of the failure events. The frequency should be based on the number of occurrences per year. The interviewees, however, will give you the information in the measurement units that make most sense to them. For instance, they may say it happens once per shift. It is your job to later translate that figure into the number of occurrences per year. The impact should include items such as manpower requirements, material costs and any downtime that might have been experienced. This is all there is to it!

When you begin the interview process, it is best to interview the people who are closest to the work (i.e. mechanics and operators). You should also talk with supervisors and possibly managers, but certainly not to the extent that you would for mechanics and operators.

As the principal analyst, you will also need to be the principal interviewer. This means that you have to explain the process to the interviewees, ask the questions and capture the information on your log sheet. This can be a difficult job. If it is feasible, it would be advantageous to have an associate interviewer to assist you by recording the information on the log sheets. This allows you to focus on the questions and the interviewees.

The job of interviewing can be quite an experience, particularly if you do not have a lot of experience in conducting interviews. It tends to be more of an art form than a science. Below is a listing of some tips that may be useful when you begin to conduct your FMEA interviews.

Interview Tips

Be very careful to ask the exact same lead questions of each of the interviewees. This will eliminate the possibility of having different answers depending on the interpretation of the question. Later you can expand on the questions, if further clarification is necessary.

Make sure that the participants know what a FMEA is as well as the purpose and structure of the interviews. If you are not careful, the process may begin to look more like an interrogation than an interview to the interviewees. You want the interviewees to be comfortable.

Allow the interviewees to see what you are writing. This will set them at ease since they can see that the information they are providing is being recorded correctly. Never use a tape recorder in a FMEA session, because it tends to make people uncomfortable and less likely to share information.

Never argue with an interviewee. Even if you do not agree with the person, it is best to accept what they are saying at face value and double check it with the information from other interviews. The minute you become argumentative, it reduces the amount of information that you can get from that person.

Always be aware of interviewees' names. There is nothing sweeter to a person's ears than the sound of their own name. If you have trouble remembering, simply write the names down in front of you so that you can always refer to them.

It is important to develop a strategy to draw out quiet participants. There are many quiet people in our workforce who have a wealth of data to share but are not comfortable sharing it with others. We have to make sure that we draw out these quiet interviewees in a gentle and inquiring manner.

Be aware of the body language of interviewees. There is an entire science behind body language. It is not important that you become an expert in this area. However, it is important to know that a substantial portion of human communication is through body language. Let the body language talk to you.

In any set of interviews, there will be a number of people who are able to contribute more to the process than the others. It is important to make a note of the extraordinary contributors so that they can assist you later in the analysis. They will be extremely helpful if you need additional information, for validating your finished FMEA, as well as for assisting you when you begin your actual Root Cause Failure Analysis (RCFA).

Remember to use your failure definition and block diagram to keep interviewees on track if they begin to wander off the subject.

Step 3 - Summarize & Encode

At this point we have conducted a series of separate interviews and we need to look through our data to reduce redundant entries. Then we convert frequencies from the interviewees' measurement units into occurrences per year (i.e. 2 per month would translate into 24 times per year).

The easiest way to summarize this information is to input it into an electronic spreadsheet. There are many products on the market that you could use; Microsoft Excel, Lotus 123 or Borland's Quattro Pro are just a few of the more popular spreadsheet programs you could consider. Once the information is input, you can use your spreadsheet to sort the raw data first by sub-system and then by failure event. This will give you a closer look at the events that are redundant. As for making the conversions to numbers of times per year, the more advanced spreadsheets can do many of these tasks for you. Consult your user's manual for creating lookup tables.
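
As a rough illustration of this summarizing step, the Python sketch below converts interview frequencies to occurrences per year and groups redundant entries; the conversion table and sample records are assumptions for the example, not data from the text.

# Sketch of Step 3: normalize interview frequencies to occurrences per
# year, then group redundant entries by sub-system/event/mode.
from collections import defaultdict

PER_YEAR = {"per shift": 3 * 365, "per day": 365, "per week": 52,
            "per month": 12, "per year": 1}

raw = [  # (sub-system, failure event, failure mode, count, unit)
    ("Recovery", "Recirculation Pump Fails", "Bearing Problems", 1, "per month"),
    ("Recovery", "Recirculation Pump Fails", "Bearing Problems", 2, "per month"),
    ("Recovery", "Recirculation Pump Fails", "Shaft Fractures",  1, "per year"),
]

grouped = defaultdict(list)
for subsystem, event, mode, count, unit in raw:
    grouped[(subsystem, event, mode)].append(count * PER_YEAR[unit])

for key, freqs in grouped.items():
    # Redundant entries for the same mode are reconciled here by taking a
    # representative value; the team may prefer a different rule.
    print(key, "->", max(freqs), "occurrences/year")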


The following example should give you an idea of what is meant by summarizing your data:

TABLE 4.4

Sub-System   Failure Event              Failure Mode        Frequency        Impact
Recovery     Recirculation Pump Fails   Bearing Fails       1 per month      1 shift
Recovery     Recirculation Pump Fails   Oil Contamination   1 per 2 months   1 day
Recovery     Recirculation Pump Fails   Bearing Locks Up    1 per month      12 hours
Recovery     Recirculation Pump Fails   Shaft Fractures     1 per year       1 day

This data suggests that the first three items are the same, since they each impact the bearings and have fairly consistent frequencies and impacts. The last item is also related to bearings but went one step beyond the others, since we not only lost the bearings but also suffered a fractured shaft. This would indicate a separate mode of failure. A summarization of this data might look something like this:

TABLE 4.5: Completed FMEA failure event summarization

Sub-System   Failure Event              Failure Mode       Frequency     Impact
Recovery     Recirculation Pump Fails   Bearing Problems   12 per year   12 hours
Recovery     Recirculation Pump Fails   Shaft Fractures    1 per year    1 day

Step 4 - Calculate Loss

At this point, we want to do a simple calculation to generate our total loss for each event in the analysis. The calculation is as follows:

Frequency x Loss Per Occurrence (Impact) = Total Loss Per Year

Let’s look at an example of just how to apply this:


TABLE 4.6: Completed Loss Calculation Example

Sub-System        Failure Event          Failure Mode             Frequency      Impact         Total Loss (hrs./yr.)
Recovery          Recirculation Pump     Bearing Fails            12 per year    12 lost hrs.   144 lost hrs. of prod.
                  Fails
Compressor        Seal Failure           Blown Seals              4 per year     24 lost hrs.   96 lost hrs. of prod.
Mixers            Filter Switches        Filters Clogged          26 per year    2 lost hrs.    52 lost hrs. of prod.
Vent Condensers   Pressure Gauge Leaks   Leaks Due To Corrosion   .33 per year   24 lost hrs.   8 lost hrs. of prod.

What we need to do is multiply the frequency times the impact to get our total loss. In the first event, we have a failure occurring once per month, or 12 times per year. We lose a total of 12 hours of production every time this occurs. So we simply multiply 12 occurrences times 12 hours of lost production to get a total loss of 144 hours per year. If you decide to use an electronic spreadsheet, all of these calculations can be performed automatically by multiplying the frequency and impact columns. Refer to the section in your software's user manual that concerns multiplying columns.

It is important to make sure that total loss is communicated in the most appropriate units. For example, we used hours of downtime per year in the example above. Hours of downtime might not mean much to some people. So it might be more advantageous to convert that number from hours per year to dollars per year, since everyone can relate to dollars. In other words, use the units that will get the most attention from everyone involved.
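
A one-screen Python version of this loss calculation, replaying the numbers from Table 4.6, might look like this sketch:

# Step 4 sketch: Total Loss = Frequency x Impact (hours of lost
# production per year), using Table 4.6's figures.
events = [
    # (failure event, occurrences per year, lost hours per occurrence)
    ("Recirculation Pump Fails", 12,   12),
    ("Seal Failure",              4,   24),
    ("Filter Switches Clogged",  26,    2),
    ("Pressure Gauge Leaks",      0.33, 24),
]

for name, freq, impact in events:
    total_loss = freq * impact
    print(f"{name:28s} {total_loss:6.0f} lost hrs./yr.")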

Step 5 - Determining the “Significant Few”

The concept of the "Significant Few" is derived from the famous Italian economist Vilfredo Pareto. Pareto said that "In any set or collection of objects, ideas, people and events, a FEW within the sets or collections are MORE SIGNIFICANT than the remaining majority". Consider these examples:

80% of a bank’s assets are representative of 20% or less of its customers.

80% of the care given in a hospital is received by 20% or less of its patients.


Well, it is no different in industry: 80% of the losses in a manufacturing facility are represented by 20% or less of its failure events. This means that we only have to perform root cause failure analysis on 20% or less of our failure events to reduce or eliminate 80% of our facility's losses. Now that is significant!

In order to determine the significant few you must perform a few simple steps:

Total all of the failure events in the analysis to create a global total loss.

Sort the total loss column in descending order (i.e. highest to lowest).

Multiply the global total loss by 80% or .80. This will give you the "Significant Few" loss figure that you will need to determine what the "Significant Few" failures are in your facility.

Go to the top of the total loss column and begin adding the events from top to bottom. When the sum of these losses is equal to or greater than the "Significant Few" loss figure, those events are your "Significant Few" failure events.

Let’s take a look at how this applies to our discussion on FMEA.

TABLE 4.7

Sub System     Failure Event     Failure Mode     Freq.   Impact    Total Loss
Sub System 3   Failure Event 1   Failure Mode 1   2000    $850      $1,700,000

Sub System 2 Failure Event 2 Failure Mode 2 1000 $1,250 $1,250,000

Sub System 4 Failure Event 3 Failure Mode 3 4 $75,000 $300,000

Sub System 2 Failure Event 4 Failure Mode 4 18 $6,000 $108,000

Sub System 3 Failure Event 5 Failure Mode 5 6 $12,000 $72,000

Sub System 2 Failure Event 6 Failure Mode 6 52 $1,000 $52,000

Sub System 3 Failure Event 7 Failure Mode 7 80 $500 $40,000

Sub System 3 Failure Event 8 Failure Mode 8 12 $3,000 $36,000

Sub System 4 Failure Event 9 Failure Mode 9 365 $75 $27,375

Sub System 3 Failure Event 10 Failure Mode 10 24 $1,000 $24,000

Sub System 1 Failure Event 11 Failure Mode 11 12 $1,300 $15,600

Sub System 2 Failure Event 12 Failure Mode 12 40 $300 $12,000

Sub System 1 Failure Event 13 Failure Mode 13 12 $1,000 $12,000

Sub System 2 Failure Event 14 Failure Mode 14 10 $1,000 $10,000


Sub System 1 Failure Event 15 Failure Mode 15 48 $200 $9,600

Sub System 3 Failure Event 16 Failure Mode 16 3 $2,000 $6,000

Sub System 2 Failure Event 17 Failure Mode 17 6 $1,000 $6,000

Total Global Loss $3,680,575

Significant Few Losses $2,944,460

In the example above, we have totaled the loss column and have a total global loss of $3,680,575. The total loss column has been sorted in descending order so that it is easy to identify the "Significant Few" failure events. The "Significant Few" loss figure that we are looking for is $2,944,460 ($3,680,575 x .80). Now all we have to do is simply go to the top of the total loss column and begin adding from top to bottom until we reach the "Significant Few" loss figure of $2,944,460. It turns out that the first 2 failure events represent approximately 80% of our losses ($2,950,000), and they form our "Significant Few" failure list. Now, instead of doing Root Cause Failure Analysis on everything, we are only going to do it on the items in our "Significant Few" failure list.
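
The same cumulative 80% cut-off can be computed mechanically, as in this Python sketch; only the first few events from Table 4.7 are listed, so the global total is carried over from the table rather than recomputed.

# Step 5 sketch: find the "Significant Few" events covering ~80% of losses.
losses = {
    "Failure Event 1": 1_700_000,
    "Failure Event 2": 1_250_000,
    "Failure Event 3":   300_000,
    "Failure Event 4":   108_000,
    # ... remaining events from Table 4.7 omitted for brevity
}

global_total = 3_680_575          # total of ALL events in Table 4.7
target = 0.80 * global_total      # the "Significant Few" loss figure

significant_few, running = [], 0
for event, loss in sorted(losses.items(), key=lambda kv: kv[1], reverse=True):
    if running >= target:
        break                     # the 80% figure has been reached
    significant_few.append(event)
    running += loss

print(significant_few, f"cover ${running:,} of ${global_total:,}")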

Step 6 - Validate Your Results

There are a few validations that should be performed to make sure that our analysis is correct. You can use the gap analysis to make sure that all of the events add up to +/- 10% of the gap. If the total ends up being less, you have probably left some important failure events off the listing. If you have more than the gap, then you probably have not summarized your results well enough; there may be some redundancies in your list.

A second validation that you can use is having a group of experienced people from your facility review your findings. This will help ensure that you are not too far off base. A third, and final, validation would be to use your computerized data systems to see if the events closely match the data in your maintenance management system. This will give you further confidence in your analysis. Do not worry if your list varies from your maintenance management system (MMS), since you will pick up a lot of events that are never even recorded in the work order system (i.e. those events that may take only a few minutes to repair).

Step 7 - Issue a Report

As with any analysis, it is important to communicate your findings to all interested parties. Your report should include the following items:

An explanation of the analysis technique.


The failure definition that was utilized.

The contact flow diagram that was utilized.

The results displayed graphically as well as the supporting spreadsheet lists.

Recommendations of which failures are candidates for Root Cause Failure Analysis.

A listing of everyone involved in the analysis including all of the interviewees.

Last but not least, make sure that you communicate the results of the analysis back to the interviewees who participated, so that everyone can feel a sense of accomplishment and ownership.

In summary, FMEA is a fantastic tool for limiting your analysis work to only those things that are of significant importance to the facility. You cannot perform Root Cause Failure Analysis on everything. However, you can use this tool to help narrow your focus to what is "most" important.

4.7 REQUIREMENTS OF RELIABILITY

Organization for Reliability

Product reliability should be the objective of everyone in the organization, and a well-entrenched reliability and quality assurance culture must be prevalent at all levels, starting at the conceptual stage with design reviews; reliability improvement efforts must continue through the fabrication, testing and operational phases. In the initial stages design reviews are dominated by design engineers, but in the later phases participation by independent reliability and quality assurance offices increases. In order to ensure objectivity in assessing reliability, outside specialists and consultants participate in the reviews to improve the reliability.

4.8 FAILURE RATE

Failure Analysis

Failure analysis is the cornerstone of reliability engineering. In it, technical failure analyses of the cause and extent of failures are carried out by corrosion engineering, equipment inspection, plant engineering, R and D or customer service. For instance, a corrosion engineer may expose various alloys to cooling water to determine which fail and by what mechanism. An equipment inspector may measure pits in a tower top to determine remaining wall thickness.

The number of operating hours between failures is a measure of the equipment's reliability. In this context a failure is defined as any defect or malfunction which prevents successful accomplishment of the desired performance.


The failure rate function provides a natural way of characterizing failure distributions. It is also known as the 'hazard rate' or 'force of mortality'. The failure rate is usually expressed as failures per hour, month or other time interval. MTBF, it may be noticed, is the reciprocal of the failure rate. Failure rates are additive, while MTBFs are not. The exponential distribution describes the probability of survival P(t) (reliability) during the normal operation period:

P(t) = exp(-rt)

where r is the failure rate per hour and t is the time in hours. For example, if r is 0.01 and t is 1000 hours, then rt = 10 is the expected number of failures in the interval. The Poisson distribution (discussed earlier) is closely related to the exponential distribution, and in many situations the frequency distribution of failures follows the Poisson distribution, whose equation is

P(x) = exp(-rt) (rt)^x / x!

where x is the variable with possible values from 0 to infinity. In this distribution the mean is the expectation rt, and the variance is equal to the mean. For large rt, the Poisson distribution may be approximated by the normal distribution. The exponential and Poisson distributions describe the most random situations possible.
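
As a quick numerical check of these formulas, here is a small Python sketch using the text's example of r = 0.01 failures per hour and t = 1000 hours:

# Survival probability, MTBF and Poisson failure counts for r = 0.01/h.
import math

r, t = 0.01, 1000.0
rt = r * t                       # expected failures in the interval (10)

reliability = math.exp(-r * t)   # P(t) = exp(-rt), chance of no failure
mtbf = 1.0 / r                   # MTBF is the reciprocal of the failure rate

def poisson(x, mean):
    # Probability of exactly x failures when the expected count is `mean`.
    return math.exp(-mean) * mean**x / math.factorial(x)

print(f"Expected failures: {rt:.0f}")
print(f"P(no failure in {t:.0f} h): {reliability:.6f}")
print(f"MTBF: {mtbf:.0f} hours")
print(f"P(exactly 10 failures): {poisson(10, rt):.4f}")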

While the exponential distribution describes the constant failure rate situations that occur during the normal operation period, the increasing failure rates that occur during the wear-out period are described by distributions such as the Weibull, whose survival function is given by

P(t) = exp[ -(t/V)^K ]

V is known as the characteristic age at failure. For different components operating in a given environment, the larger the V, the greater the reliability. The value of K fixes the shape of P(t). It measures the dispersion and is used to calculate the variance. As K increases, this shape tends to the normal distribution.

Bath Tub Curve

The Advisory Group on Reliability of Electronic Equipment (AGREE) was formed by the U.S. Department of Defense in 1952, primarily to study the reliability requirements of critical military hardware. Its report, submitted in 1957, gave a major impetus to the reliability movement. The group formulated the well-known 'bath-tub curve' of reliability. The probability of failure of a manufactured product is similar to the mortality rate of human beings: high during the initial and terminal stages, with a steady low level in between.

FIGURE 4.3

In view of the shape of the curve, reliability of an item can be determined by taking into consideration only the normal usage of the product. The useful working life of an item is normally indicated by the flat and uniform failure rate period of the bath tub curve.

The useful life of a product is divided into three separate periods known as wear-in, normal operation and wear-out. These three periods are usually distinct, but sometimes they overlap. The figure above shows the typical bath-tub-shaped failure rate curve (failure rate on the vertical Y-axis) that an equipment experiences during its lifetime (shown on the horizontal X-axis). The three stages are: I, wear-in; II, normal operation; and III, wear-out.

The wear-in period, also known as the infant mortality period, is characterized by a decreasing failure rate. This is usually witnessed when starting new plants. In this initial debugging period a high breakdown rate is witnessed. Gradually the breakdown frequency decreases to a constant range, when the plant is turned over to operations in a highly reliable state.

Most original equipment manufacturers debug their equipment before delivery. The wear-in failures are caused by material defects and human errors during assembly, and most suppliers have quality control programmes to detect and eliminate these bugs. Failures during the wear-in period are described by the Weibull distribution.

Admittedly, the normal operation period covers the major part of an equipment's life and is characterized by a constant failure rate. During this period, the failure rate does not change as the equipment ages, implying that the probability of failure at one point in time is the same as at any other point in time.

Like the human being, all equipment wears and all material degrades with time. After a long normal operation period with a constant failure rate, the wear-out period starts, with an increasing failure rate. When equipment enters the wear-out period, it must be overhauled.

Combinatorial Aspects

The prediction of the overall reliability of a complex system, such as a computer installation, can be achieved through combinatorial analysis. Let us consider the computer configuration depicted in the exhibit below.

FIGURE 4.4


Ideally we should like to accomplish our aim of getting correct output in 100% of the cases. But, in practice, this may not always be possible. Let us now assume that the chance of failure of each of the three blocks is 1%, i.e. each block has a reliability of 99%. The three blocks in the exhibit are in series, and if any one of them fails, the entire system fails. Therefore, the series reliability of the above system comprising three blocks in series is the product of the reliabilities of the individual blocks:

0.99 x 0.99 x 0.99 = 0.970299

In the above system, the CPU is the most expensive block and it is not economically viable to have a standby CPU. In order to improve the reliability of this system, we can introduce an additional input and an additional output acting in parallel. This will ensure that the failure of one (input or output) will not result in the failure of the entire system.

The reliability of this system can be obtained as follows:

The chance of failure of a single input is 0.01, because its reliability is 0.99. From probability considerations, the chance of failure of both inputs is 0.01 x 0.01 = 0.0001. This implies that the reliability of the total input system is 1 - 0.0001 = 0.9999. Similarly, the reliability of the total output system is 0.9999. Therefore, the reliability of the total system

= 0.9999 x 0.99 x 0.9999 = 0.989802

We see that the reliability of the total system improves by about 2%. This increased reliability is achieved through redundancy, which implies the existence of more than one item for accomplishing a given task. Obviously, a cost-benefit analysis has to be carried out before embarking on reliability through redundancy.
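
The series and redundancy calculations above reduce to a few lines of Python; this sketch simply replays the text's numbers:

# Series vs. redundant (parallel) reliability for the three-block system.
r_in = r_cpu = r_out = 0.99

# Series: all three blocks must work.
series = r_in * r_cpu * r_out                 # 0.970299

# Redundant inputs/outputs: the pair fails only if BOTH units fail.
r_in_pair = 1 - (1 - r_in) ** 2               # 0.9999
r_out_pair = 1 - (1 - r_out) ** 2             # 0.9999
redundant = r_in_pair * r_cpu * r_out_pair    # 0.989802

print(f"Series reliability: {series:.6f}")
print(f"With redundant I/O: {redundant:.6f}")
print(f"Improvement: {redundant / series - 1:.1%}")   # about 2%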

A well-defined reliability protocol and well-equipped test facilities are only a prerequisite; problems in reliability do persist in a developing economy like India. These include large varieties of parts, small quantities of requirement, highly stringent time schedules, and indigenous manufacturers not being up to expectations for producing very high reliability.

4.9 SEVEN OLD STATISTICAL TOOLS

Seven Tools of Quality

The discipline of Total Quality Control uses a number of quantitative methods and tools to identify problems and suggest avenues for continuous improvement in fields such as manufacturing. Over many years, total quality practitioners gradually realized that a large number of quality-related problems can be solved with seven basic quantitative tools, which then became known as the traditional "Seven Tools of Quality". These are:

. Ishikawa diagram

. Pareto chart

. Check sheet

. Control chart

. Flowchart

. Histogram

. Scatter diagrams

These tools have been widely used in most quality management organizations, and a number of extensions and improvements to them have been proposed and adopted.

Pareto chart

A Pareto chart is a special type of bar chart where the values being plotted are arranged in descending order. It is named for Vilfredo Pareto, and its use in quality assurance was popularized by Joseph M. Juran and Kaoru Ishikawa.

FIGURE 4.5


Simple example of a Pareto chart, using hypothetical data showing the relative frequency of reasons for arriving late at work.

The Pareto chart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.

Typically the left vertical axis is frequency of occurrence, but it can alternatively represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure. The purpose is to highlight the most important among a (typically large) set of factors. In quality control, the Pareto chart often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, etc.
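
The data preparation behind a Pareto chart is just a descending sort plus a cumulative percentage, as in this Python sketch; the defect categories and counts are invented for illustration.

# Pareto chart data: sort counts in descending order and accumulate
# percentages for the right-hand axis.
defects = {"Scratches": 52, "Dents": 28, "Misalignment": 11,
           "Wrong colour": 6, "Other": 3}

ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defects.values())

cumulative = 0
for category, count in ordered:
    cumulative += count
    print(f"{category:13s} {count:3d}  {100 * cumulative / total:5.1f}% cumulative")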

Check sheet

The check sheet is a simple document that is used for collecting data in real time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the desired information, which can be either quantitative or qualitative. When the information is quantitative, the checksheet is sometimes called a tally sheet.

A defining characteristic of a checksheet is that data is recorded by making marks ("checks") on it. A typical checksheet is divided into regions, and marks made in different regions have different significance. Data is read by observing the location and number of marks on the sheet. There are five basic types of check sheets:

Classification: A trait such as a defect or failure mode must be classified into a category.

Location: The physical location of a trait is indicated on a picture of the part or item being evaluated.

Frequency: The presence or absence of a trait, or combination of traits, is indicated; the number of occurrences of a trait on a part can also be indicated.

Measurement Scale: A measurement scale is divided into intervals, and measurements are indicated by checking an appropriate interval.

Check List: The items to be performed for a task are listed so that, as each is accomplished, it can be indicated as having been completed.

An example of a simple quality control checksheet

The check sheet is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.


Control chart

The control chart, also known as the 'Shewhart chart' or 'process-behaviour chart', is a statistical tool intended to assess the nature of variation in a process and to facilitate forecasting and management. A control chart is a more specific kind of run chart.

The control chart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.

Performance of control charts

When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, then that cause should be eliminated if possible. It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. Since the control limits are evaluated each time a point is added to the chart, it readily follows that every control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
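
The 0.27% and 370.4 figures follow directly from the normal distribution's 3-sigma tail probability, as this small Python sketch shows:

# In-control ARL of a 3-sigma Shewhart chart.
import math

# Two-sided probability of a normal value falling beyond 3 sigma:
p_false_alarm = math.erfc(3 / math.sqrt(2))   # ~0.0027

arl_in_control = 1 / p_false_alarm            # ~370 points per false alarm
print(f"False-alarm probability per point: {p_false_alarm:.4f}")
print(f"In-control ARL: {arl_in_control:.1f}")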

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart and the CUSUM chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.


Criticisms

Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial, and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.

Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has high variability.

Flowchart

FIGURE 4.6

A simple flowchart for what to do if a lamp doesn’t work

A flowchart (also spelled flow-chart and flow chart) is a schematic representation of an algorithm or a process. The flowchart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. Flowcharts are commonly used in business/economic presentations to help the audience visualize the content better, or to find flaws in the process.

Symbols

A typical flowchart from older computer science textbooks may have the following kinds of symbols:

Start and end symbols, represented as lozenges, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit enquiry" or "receive product".

Arrows, showing what is called "flow of control" in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.

Processing steps, represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes" or similar.

Input/Output, represented as a parallelogram. Examples: Get X from the user; display X.

Conditional (or decision), represented as a diamond (rhombus). These typically contain a Yes/No question or True/False test. This symbol is unique in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True, and one corresponding to No or False. The arrows should always be labeled. More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken down further, or replaced with the "pre-defined process" symbol.

A number of other symbols have less universal currency, such as:

o A Document, represented as a rectangle with a wavy base;

o A Manual input, represented by a rectangle with the top irregularly sloping up from left to right. An example would be to signify data entry from a form;

o A Manual operation, represented by a trapezoid with the longest parallel side uppermost, to represent an operation or adjustment to the process that can only be made manually;

o A Data File, represented by a cylinder.


Note: All process symbols within a flowchart should be numbered. Normally a number is inserted inside the top of the shape to indicate which step in the flowchart the process is.

Flowcharts may contain other symbols, such as connectors, usually represented as circles, to represent converging paths in the flowchart. Circles will have more than one arrow coming into them but only one going out. Some flowcharts may just have an arrow pointing to another arrow instead. These are useful to represent an iterative process (what in computer science is called a loop). A loop may, for example, consist of a connector where control first enters, processing steps, and a conditional with one arrow exiting the loop and one going back to the connector. Off-page connectors are often used to signify a connection to a (part of a) process held on another sheet or screen. It is important to keep these connections logical and in order. All processes should flow from top to bottom and left to right.

A flowchart is described as "cross-functional" when the page is divided into different "lanes" describing the control of different organizational units. A symbol appearing in a particular "lane" is within the control of that organizational unit. This technique allows the analyst to locate responsibility for performing an action or making a decision, showing the relationship between different organizational units with responsibility over a single process.

Histogram


FIGURE 4.7

Example of a histogram of 100 normally distributed random values.


In statistics, a histogram is a graphical display of tabulated frequencies. A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent.

The word histogram is derived from histos and gramma in Greek, the first meaning web or mast and the second meaning drawing, record or writing. A histogram of something is thus, etymologically speaking, a drawing of the web of this something.

The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.

Examples

As an example, consider data collected by the U.S. Census Bureau on time to travel to work (2000 census, Table 5). The census found that there were 124 million people who work outside of their homes. People were asked how long it takes them to get to work, and their responses were divided into categories: less than 5 minutes, more than 5 minutes and less than 10, more than 10 minutes and less than 15, and so on. The table shows the numbers of people per category in thousands, so that 4,180 means 4,180,000.

The data in the following tables are displayed graphically by histograms. An interesting feature of both diagrams is the spike in the 30 to 35 minutes category. It seems likely that this is an artifact: half an hour is a common unit of informal time measurement, so people whose travel times were perhaps a little less than, or a little greater than, 30 minutes might be inclined to answer "30 minutes". This rounding is a common phenomenon when collecting data from people.

FIGURE 4.8


Histogram of travel time, US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.

TABLE 4.8

Data by absolute numbers

Interval Width Quantity Quantity/width

0 5 4180 836

5 5 13687 2737

10 5 18618 3723

15 5 19634 3926

20 5 17981 3596

25 5 7190 1438

30 5 16369 3273

35 5 3212 642

40 5 4122 824

45 15 9200 613

60 30 6461 215

90 60 3435 57

This histogram shows the number of cases per unit interval, so that the height of each bar is equal to the proportion of total people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers.

FIGURE 4.9


Histogram of travel time, US 2000 census. Area under the curve equals 1. This diagram uses Q/total/width from the table.

TABLE 4.9

Data by proportion

Marks Scored No. of Students

10–20 1

20–30 1

30–40 4

40–50 4

50–60 8

60–70 7

70–80 11

80–90 6

90–100 5

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.

Mathematical Definition

In a more general mathematical sense, a histogram is simply a mapping m_i that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram m_i meets the following condition:

n = m_1 + m_2 + ... + m_k

Cumulative Histogram

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_i is defined as:

M_i = m_1 + m_2 + ... + m_i


Number of bins and width

There is no "best" number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. You should always experiment with bin widths before choosing one (or more) that illustrate the salient features in your data.

The number of bins k can be calculated from a suggested bin width h:

k = ceil( (max x - min x) / h )

where ceil indicates the ceiling function, and n below is the number of observations in the sample.

Sturges' formula

k = ceil( log2 n + 1 )

which implicitly bases the bin sizes on the range of the data, and can perform poorly if n < 30.

Scott's choice

h = 3.5 s / n^(1/3)

where h is the common bin width, and s is the sample standard deviation.

Freedman-Diaconis' choice

h = 2 IQR(x) / n^(1/3)

which is based on the interquartile range.
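
A short numpy sketch makes the three rules comparable on the same sample; the normally distributed data here is only an illustrative assumption.

# Comparing Sturges, Scott and Freedman-Diaconis on one sample.
import numpy as np

data = np.random.normal(loc=50, scale=10, size=500)
n = data.size
s = data.std(ddof=1)

k_sturges = int(np.ceil(np.log2(n) + 1))          # number of bins directly

h_scott = 3.5 * s / n ** (1 / 3)                  # bin width from std. dev.

q75, q25 = np.percentile(data, [75, 25])
h_fd = 2 * (q75 - q25) / n ** (1 / 3)             # bin width from IQR

span = data.max() - data.min()
print(f"Sturges: {k_sturges} bins")
print(f"Scott:   {int(np.ceil(span / h_scott))} bins (width {h_scott:.2f})")
print(f"F-D:     {int(np.ceil(span / h_fd))} bins (width {h_fd:.2f})")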

4.10 SEVEN NEW MANAGEMENT TOOLS

Seven New Management and Planning Tools

In 1976, the Union of Japanese Scientists and Engineers (JUSE) saw the need for tools to promote innovation, communicate information and successfully plan major projects. A team researched and developed the seven new quality control tools, often called the seven management and planning (MP) tools, or simply the seven management tools. Not all the tools were new, but their collection and promotion were.

The seven MP tools, listed in an order that moves from abstract analysis to detailed planning, are:

1. Affinity diagram: organizes a large number of ideas into their natural relationships.

2. Relations diagram: shows cause-and-effect relationships and helps you analyze the natural links between different aspects of a complex situation.

3. Tree diagram: breaks down broad categories into finer and finer levels of detail, helping you move your thinking step by step from generalities to specifics.

4. Matrix diagram: shows the relationship between two, three or four groups of information and can give information about the relationship, such as its strength, the roles played by various individuals, or measurements.

5. Matrix data analysis: a complex mathematical technique for analyzing matrices, often replaced in this list by the similar prioritization matrix. One of the most rigorous, careful and time-consuming of decision-making tools, a prioritization matrix is an L-shaped matrix that uses pairwise comparisons of a list of options against a set of criteria in order to choose the best option(s).

6. Arrow diagram: shows the required order of tasks in a project or process, the best schedule for the entire project, and potential scheduling and resource problems and their solutions.

7. Process decision program chart (PDPC): systematically identifies what might go wrong in a plan under development.

As problems have increased in complexity, more tools have been developed to encourage employees to participate in the problem-solving process. Let's review the seven "new" quality management tools being used today.

The affinity diagram is used to generate ideas, then organize these ideas in a logical manner. The first step in developing an affinity diagram is to post the problem (or issue) where everyone can see it. Next, team members write their ideas for solving the problem on cards and post them below the problem. Seeing the ideas of other members of the team helps everyone generate new ideas. As the idea generation phase slows, the team sorts the ideas into groups, based on patterns or common themes. Finally, descriptive title cards are created to describe each group of ideas.


The interrelationship digraph allows teams to look for cause-and-effect relationships between pairs of elements. The team starts with ideas that seem to be related and determines if one causes the other. If idea 1 causes idea 5, then an arrow is drawn from 1 to 5. If idea 5 causes idea 1, then the arrow is drawn from 5 to 1. If no cause is ascertained, no arrow is drawn. When the exercise is finished, it is obvious that ideas with many outgoing arrows cause things to happen, while ideas with many incoming arrows result from other things.

A tree diagram assists teams in exploring all the options available to solve a problem or accomplish a task. The tree diagram actually resembles a tree when complete. The trunk of the tree is the problem or task. Branches are major options for solving the problem or completing the task. Twigs are elements of the options. Leaves are the means of accomplishing the options.

The prioritization matrix helps teams select from a series of options based on weighted criteria. It can be used after options have been generated, such as in a tree diagram exercise. A prioritization matrix is helpful in selecting which option to pursue. The prioritization matrix adds weights (values) to each of the selection criteria to be used in deciding between options. For example, if you need to install a new software system to better track quality data, your selection criteria could be cost, leadtime, reliability, and upgrades. A simple scale, say 1 through 5, could be used to prioritize the selection criteria being used. Next, you would rate the software options for each of these selection criteria and multiply that rating by the criteria weighting, as in the sketch below.
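
Sketched in Python, the weighted scoring described above looks like this; the criteria weights and option ratings are invented for the example.

# Prioritization matrix sketch: weighted score = sum(weight x rating).
criteria_weights = {"cost": 5, "leadtime": 3, "reliability": 4, "upgrades": 2}

options = {
    "Software A": {"cost": 2, "leadtime": 4, "reliability": 5, "upgrades": 3},
    "Software B": {"cost": 4, "leadtime": 3, "reliability": 3, "upgrades": 4},
}

for name, ratings in options.items():
    score = sum(criteria_weights[c] * ratings[c] for c in criteria_weights)
    print(f"{name}: weighted score {score}")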

The matrix diagram allows teams to describe relationships between lists of items. A matrix diagram can be used to compare the results of implementing a new manufacturing process to the needs of a customer. For example, if the customer's main needs are low-cost products, short leadtimes, and products that are durable, and a change in the manufacturing process can provide faster throughput, larger quantities, and more part options, then the only real positive relationship is the shorter leadtime to the faster throughput.

The other process outcomes (larger quantities and more options) are of little value to the customer. This matrix diagram, relating customer needs to the manufacturing process changes, would be helpful in deciding which manufacturing process to implement.

The process decision program chart can help a team identify things that could go wrong, so corrective action can be planned in advance. The process decision program chart starts with the problem. Below this, major issues related to the problem are listed. Below the issues, associated tasks are listed. For each task, the team considers what could go wrong and records these possibilities on the chart. Next, the team considers actions to prevent things from going wrong. Finally, the team selects which preventive actions to take from all the ones listed.

The activity network diagram graphically shows total completion time, the required sequence of events, tasks that can be done simultaneously, and critical tasks that need monitoring. In this respect, an activity network diagram is similar to the traditional PERT chart used for activity measurement and planning.

Affinity Diagram

Also called: affinity chart, K-J method

The affinity diagram organizes a large number of ideas into their natural relationships. This method taps a team's creativity and intuition. It was created in the 1960s by the Japanese anthropologist Jiro Kawakita.

When to Use

When you are confronted with many facts or ideas in apparent chaos

When issues seem too large and complex to grasp

When group consensus is necessary

Typical situations are:

1. After a brainstorming exercise

2. When analyzing verbal data, such as survey results.

Procedure

Materials needed: sticky notes or cards, marking pens, large work surface (wall, table, or floor).

1. Record each idea with a marking pen on a separate sticky note or card. (During a brainstorming session, write directly onto sticky notes or cards if you suspect you will be following the brainstorm with an affinity diagram.) Randomly spread the notes on a large work surface so all notes are visible to everyone. The entire team gathers around the notes and participates in the next steps.

2. It is very important that no one talk during this step. Look for ideas that seem to be related in some way. Place them side by side. Repeat until all notes are grouped. It's okay to have "loners" that don't seem to fit a group. It's all right to move a note someone else has already moved. If a note seems to belong in two groups, make a second note.

3. You can talk now. Participants can discuss the shape of the chart, any surprising patterns, and especially reasons for moving controversial notes. A few more changes may be made. When ideas are grouped, select a heading for each group. Look for a note in each grouping that captures the meaning of the group. Place it at the top of the group. If there is no such note, write one. Often it is useful to write or highlight this note in a different color.

4. Combine groups into “supergroups” if appropriate.

Example

The ZZ-400 manufacturing team used an affinity diagram to organize its list of potential performance indicators. The figure below shows the list team members brainstormed. Because the team works a shift schedule and members could not meet to do the affinity diagram together, they modified the procedure.

Figure 4.10

They wrote each idea on a sticky note and put all the notes randomly on a rarely used door. Over several days, everyone reviewed the notes in their spare time and moved the notes into related groups. Some people reviewed the evolving pattern several times. After a few days, the natural grouping shown in the second figure had emerged.

Notice that one of the notes, "Safety," has become part of the heading for its group. The rest of the headings were added after the grouping emerged. Five broad areas of performance were identified: product quality, equipment maintenance, manufacturing cost, production volume, and safety and environmental.

FIGURE 4.10


CONSIDERATIONS

The affinity diagram process lets a group move beyond its habitual thinking and preconceived categories. This technique accesses the great knowledge and understanding residing untapped in our intuition.

Very important "do nots": Do not place the notes in any order. Do not determine categories or headings in advance. Do not talk during step 2. (This is hard for some people!)

Allow plenty of time for step 2. You can, for example, post the randomly arranged notes in a public place and allow grouping to happen over several days.

Most groups that use this technique are amazed at how powerful and valuable a tool it is. Try it once with an open mind and you'll be another convert.

Use markers. With regular pens, it is hard to read ideas from any distance.

Relations Diagram

Also called: interrelationship diagram or digraph, network diagram

Description

The relations diagram shows cause-and-effect relationships. Just as importantly, the process of creating a relations diagram helps a group analyze the natural links between different aspects of a complex situation.

When to Use

When trying to understand links between ideas or cause-and-effect relationships, such as when trying to identify an area of greatest impact for improvement.

When a complex issue is being analyzed for causes.

When a complex solution is being implemented.

After generating an affinity diagram, cause-and-effect diagram or tree diagram, to more completely explore the relations of ideas.

Basic Procedure

Materials needed: sticky notes or cards, large paper surface (newsprint or two flipchart pages taped together), marking pens, tape.

1. Write a statement defining the issue that the relations diagram will explore. Write it on a card or sticky note and place it at the top of the work surface.


2. Brainstorm ideas about the issue and write them on cards or notes. If anothertool has preceded this one, take the ideas from the affinity diagram, the mostdetailed row of the tree diagram or the final branches on the fishbone diagram.You may want to use these ideas as starting points and brainstorm additionalideas.

3. Place one idea at a time on the work surface and ask: “Is this idea related to any others?” Place ideas that are related near the first. Leave space between cards to allow for drawing arrows later. Repeat until all cards are on the work surface.

4. For each idea, ask, “Does this idea cause or influence any other idea?” Draw arrows from each idea to the ones it causes or influences. Repeat the question for every idea.

5. Analyze the diagram:

o Count the arrows in and out for each idea. Write the counts at the bottom of each box. The ones with the most arrows are the key ideas.

o Note which ideas have primarily outgoing (from) arrows. These are basic causes.

o Note which ideas have primarily incoming (to) arrows. These are final effects that also may be critical to address.

Be sure to check whether ideas with fewer arrows also are key ideas. The number of arrows is only an indicator, not an absolute rule. Draw bold lines around the key ideas.
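For larger diagrams, the arrow counting in step 5 can be automated once each arrow is recorded as a pair of ideas. Below is a minimal Python sketch; the ideas and arrows are invented for illustration.

from collections import Counter

# Each pair (a, b) means "idea a causes or influences idea b".
# These ideas and arrows are invented for illustration.
arrows = [
    ("new software", "training load"),
    ("new software", "service interruptions"),
    ("install new mainframe", "service interruptions"),
    ("install new mainframe", "increased processing cost"),
    ("service interruptions", "increased processing cost"),
]

out_deg = Counter(a for a, _ in arrows)   # arrows out of each idea
in_deg = Counter(b for _, b in arrows)    # arrows into each idea

for idea in sorted(set(out_deg) | set(in_deg)):
    o, i = out_deg[idea], in_deg[idea]
    # Mostly-outgoing ideas are candidate basic causes;
    # mostly-incoming ideas are candidate final effects.
    role = "basic cause" if o > i else "final effect" if i > o else "intermediate"
    print(f"{idea}: {o} out / {i} in -> likely {role}")

The counts, like the diagram itself, are only an indicator; the final judgment about key ideas stays with the group.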

Example

A computer support group is planning a major project: replacing the mainframe computer. The group drew a relations diagram (see figure below) to sort out a confusing set of elements involved in this project.


FIGURE 4.11

“Computer replacement project” is the card identifying the issue. The ideas that were brainstormed were a mixture of action steps, problems, desired results and less-desirable effects to be handled. All these ideas went onto the diagram together. As the questions were asked about relationships and causes, the mixture of ideas began to sort itself out.

After all the arrows were drawn, key issues became clear. They are outlined with bold lines.

• “New software” has one arrow in and six arrows out. “Install new mainframe” has one arrow in and four out. Both ideas are basic causes.

• “Service interruptions” and “increased processing cost” both have three arrows in, and the group identified them as key effects to avoid.

Tree Diagram

Also called: systematic diagram, tree analysis, analytical tree, hierarchy diagram

The tree diagram starts with one item that branches into two or more, each of which branches into two or more, and so on. It looks like a tree, with a trunk and multiple branches.

It is used to break down broad categories into finer and finer levels of detail. Developing the tree diagram helps you move your thinking step by step from generalities to specifics.


When to Use

• When an issue is known or being addressed in broad generalities and you must move to specific details, such as when developing logical steps to achieve an objective.

• When developing actions to carry out a solution or other plan.

• When analyzing processes in detail.

• When probing for the root cause of a problem.

• When evaluating implementation issues for several potential solutions.

• After an affinity diagram or relations diagram has uncovered key issues.

• As a communication tool, to explain details to others.

Procedure

1. Develop a statement of the goal, project, plan, problem or whatever is being studied. Write it at the top (for a vertical tree) or far left (for a horizontal tree) of your work surface.

2. Ask a question that will lead you to the next level of detail. For example:

o For a goal, action plan or work breakdown structure: “What tasks must be done to accomplish this?” or “How can this be accomplished?”

o For root-cause analysis: “What causes this?” or “Why does this happen?”

o For a gozinto chart: “What are the components?” (“Gozinto” literally comes from the phrase “What goes into it?”)

Brainstorm all possible answers. If an affinity diagram or relationship diagram has been done previously, ideas may be taken from there. Write each idea in a line below (for a vertical tree) or to the right of (for a horizontal tree) the first statement. Show links between the tiers with arrows.

o Do a “necessary and sufficient” check. Are all the items at this level necessary for the one on the level above? If all the items at this level were present or accomplished, would they be sufficient for the one on the level above?

o Each of the new idea statements now becomes the subject: a goal, objective or problem statement. For each one, ask the question again to uncover the next level of detail. Create another tier of statements and show the relationships to the previous tier of ideas with arrows. Do a “necessary and sufficient” check for each set of items.

o Continue to turn each new idea into a subject statement and ask the question. Do not stop until you reach fundamental elements: specific actions that can be carried out, components that are not divisible, root causes.

o Do a “necessary and sufficient” check of the entire diagram. Are all the items necessary for the objective? If all the items were present or accomplished, would they be sufficient for the objective?
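A finished tree also maps naturally onto nested data. The minimal Python sketch below uses an invented goal and branches; its leaves correspond to the fundamental elements at which the procedure stops.

# An invented goal broken down to actionable leaves, held as nested dicts.
tree = {
    "Improve on-time delivery": {
        "Reduce order lead time": {
            "Automate order entry": {},
            "Pre-stage common parts": {},
        },
        "Improve schedule accuracy": {
            "Review forecasts weekly": {},
        },
    },
}

def leaves(node):
    """Yield names of nodes with no children: the fundamental elements."""
    for name, children in node.items():
        if children:
            yield from leaves(children)
        else:
            yield name

print(list(leaves(tree)))
# ['Automate order entry', 'Pre-stage common parts', 'Review forecasts weekly']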

Example

The district in this example has three fundamental goals. The first, to improve academic performance, is partly shown in the figure below. District leaders have identified two strategic objectives that, when accomplished, will lead to improved academic performance: academic achievement and college admissions.

FIGURE 4.12

Tree diagram example

Lag indicators are long-term and results-oriented. The lag indicator for academic achievement is the Regents’ diploma rate: the percent of students receiving a state diploma by passing eight Regents’ exams.

Lead indicators are short-term and process-oriented. Starting in 2000, the lead indicator for the Regents’ diploma rate was performance on new fourth and eighth grade state tests.


Finally, annual projects are defined, based on cause-and-effect analysis, that will improve performance. In 2000–2001, four projects were accomplished to improve academic achievement. Thus this tree diagram is an interlocking series of goals and indicators, tracing the causes of systemwide academic performance first through high school diploma rates, then through lower grade performance, and back to specific improvement projects.

Matrix Diagram

Also called: matrix, matrix chart

The matrix diagram shows the relationship between two, three or four groups of information. It also can give information about the relationship, such as its strength, the roles played by various individuals or measurements.

Six differently shaped matrices are possible: L, T, Y, X, C and roof-shaped, depending on how many groups must be compared.

When to Use each Shape

Table 1 summarizes when to use each type of matrix; an example of each type follows. In the examples, matrix axes have been shaded to emphasize the letter that gives each matrix its name.

• An L-shaped matrix relates two groups of items to each other (or one group to itself).

• A T-shaped matrix relates three groups of items: groups B and C are each related to A. Groups B and C are not related to each other.

• A Y-shaped matrix relates three groups of items. Each group is related to the other two in a circular fashion.

• A C-shaped matrix relates three groups of items all together simultaneously, in 3-D.

• An X-shaped matrix relates four groups of items. Each group is related to two others in a circular fashion.

• A roof-shaped matrix relates one group of items to itself. It is usually used along with an L- or T-shaped matrix.


Table 1: When to use differently-shaped matrices

L-shaped      2 groups   A ↔ B (or A ↔ A)
T-shaped      3 groups   B ↔ A ↔ C, but not B ↔ C
Y-shaped      3 groups   A ↔ B ↔ C ↔ A
C-shaped      3 groups   All three simultaneously (3-D)
X-shaped      4 groups   A ↔ B ↔ C ↔ D ↔ A, but not A ↔ C or B ↔ D
Roof-shaped   1 group    A ↔ A, when also A ↔ B in an L- or T-shaped matrix

L-Shaped Matrix

This L-shaped matrix summarizes customers’ requirements. The team placed numbers in the boxes to show numerical specifications and used check marks to show choice of packaging. The L-shaped matrix actually forms an upside-down L. This is the most basic and most common matrix format.

Customer Requirements

                     Customer D   Customer M   Customer R   Customer T
Purity %             > 99.2       > 99.2       > 99.4       > 99.0
Trace metals (ppm)   < 5          —            < 10         < 25
Water (ppm)          < 10         < 5          < 10         —
Viscosity (cp)       20-35        20-30        10-50        15-35
Color                < 10         < 10         < 15         < 10
Packaging (Drum / Truck / Railcar): the check marks indicating each customer’s packaging choice appear in the original figure.
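In software, an L-shaped matrix is simply a dictionary of dictionaries. The Python sketch below holds the numeric specifications from the table above (the packaging check marks are omitted); the lookup at the end is purely illustrative.

# The numeric specifications from the L-shaped matrix above.
requirements = {
    "Purity %":           {"D": "> 99.2", "M": "> 99.2", "R": "> 99.4", "T": "> 99.0"},
    "Trace metals (ppm)": {"D": "< 5",    "M": None,     "R": "< 10",   "T": "< 25"},
    "Water (ppm)":        {"D": "< 10",   "M": "< 5",    "R": "< 10",   "T": None},
    "Viscosity (cp)":     {"D": "20-35",  "M": "20-30",  "R": "10-50",  "T": "15-35"},
    "Color":              {"D": "< 10",   "M": "< 10",   "R": "< 15",   "T": "< 10"},
}

# Print every specification that applies to customer R.
for spec, by_customer in requirements.items():
    value = by_customer["R"]
    if value is not None:          # None marks an empty cell
        print(f"{spec}: {value}")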

T-Shaped Matrix

This T-shaped matrix relates product models (group A) to their manufacturing locations (group B) and to their customers (group C).

Examining the matrix in different ways reveals different information. For example, concentrating on model A, we see that it is produced in large volume at the Texas plant and in small volume at the Alabama plant. Time Inc. is the major customer for model A, while Arlo Co. buys a small amount. If we choose to focus on the customer rows, we learn that only one customer, Arlo, buys all four models. Zig buys just one. Time makes large purchases of A and D, while Lyle is a relatively minor customer.

Products—Customers—Manufacturing Locations

Y-Shaped Matrix

This Y-shaped matrix shows the relationships between customer requirements, internal process metrics and the departments involved. Symbols show the strength of the relationships: primary relationships, such as the manufacturing department’s responsibility for production capacity; secondary relationships, such as the link between product availability and inventory levels; minor relationships, such as the distribution department’s responsibility for order lead time; and no relationship, such as between the purchasing department and on-time delivery.

The matrix tells an interesting story about on-time delivery. The distribution department is assigned primary responsibility for that customer requirement. The two metrics most strongly related to on-time delivery are inventory levels and order lead time. Of the two, distribution has only a weak relationship with order lead time and none with inventory levels. Perhaps the responsibility for on-time delivery needs to be reconsidered.


FIGURE 4.13 Responsibilities for Performance to Customer Requirements

C-Shaped Matrix

Think of C meaning “cube.” Because this matrix is three-dimensional, it is difficult to draw and infrequently used. If it is important to compare three groups simultaneously, consider using a three-dimensional model or computer software that can provide a clear visual image.

This figure shows one point on a C-shaped matrix relating products, customers and manufacturing locations. Zig Company’s model B is made at the Mississippi plant.

FIGURE 4.14


X-Shaped Matrix

This figure extends the T-shaped matrix example into an X-shaped matrix by including the relationships of freight lines with the manufacturing sites they serve and the customers who use them. Each axis of the matrix is related to the two adjacent ones, but not to the one across. Thus, the product models are related to the plant sites and to the customers, but not to the freight lines.

A lot of information can be contained in an X-shaped matrix. In this one, we can observe that Red Lines and Zip Inc., which seem to be minor carriers based on volume, are the only carriers that serve Lyle Co. Lyle doesn’t buy much, but it and Arlo are the only customers for model C. Model D is made at three locations, while the other models are made at two. What other observations can you make?

Manufacturing Sites—Products—Customers—Freight Lines

Roof-Shaped Matrix

The roof-shaped matrix is used with an L- or T-shaped matrix to show one group of items relating to itself. It is most commonly used with a house of quality, where it forms the “roof” of the “house.” In the figure below, the customer requirements are related to one another. For example, a strong relationship links color and trace metals, while viscosity is unrelated to any of the other requirements.


FIGURE 4.15

Frequently Used Symbols

Arrow Diagram

Also called: activity network diagram, network diagram, activity chart, node diagram, CPM (critical path method) chart

The arrow diagram shows the required order of tasks in a project or process, the best schedule for the entire project, and potential scheduling and resource problems and their solutions. The arrow diagram lets you calculate the “critical path” of the project. This is the flow of critical steps where delays will affect the timing of the entire project and where addition of resources can speed up the project.

When to Use

• When scheduling and monitoring tasks within a complex project or process with interrelated tasks and resources.

• When you know the steps of the project or process, their sequence and how long each step takes, and

• When the project schedule is critical, with serious consequences for completing the project late or significant advantage to completing the project early.

Procedure

Materials needed: sticky notes or cards, marking pens, large writing surface (newsprint or flipchart pages).

Drawing the Network

1. List all the necessary tasks in the project or process. One convenient method is to write each task on the top half of a card or sticky note. Across the middle of the card, draw a horizontal arrow pointing right.

2. Determine the correct sequence of the tasks. Do this by asking three questions for each task:

o Which tasks must happen before this one can begin?

o Which tasks can be done at the same time as this one?

o Which tasks should happen immediately after this one?

It can be useful to create a table with four columns — prior tasks, this task, simultaneous tasks, following tasks.

3. Diagram the network of tasks. If you are using notes or cards, arrange them in sequence on a large piece of paper. Time should flow from left to right and concurrent tasks should be vertically aligned. Leave space between the cards.

4. Between each two tasks, draw circles for “events.” An event marks the beginning or end of a task. Thus, events are nodes that separate tasks.

5. Look for three common problem situations and redraw them using “dummies” or extra events. A dummy is an arrow drawn with dotted lines used to separate tasks that would otherwise start and stop with the same events or to show logical sequence. Dummies are not real tasks.

Problem situations:

o Two simultaneous tasks start and end at the same events. Solution: Use a dummy and an extra event to separate them. In Figure 1, event 2 and the dummy between 2 and 3 have been added to separate tasks A and B.


o Task C cannot start until both tasks A and B are complete; a fourth task, D, cannot start until A is complete, but need not wait for B. (See Figure 2.) Solution: Use a dummy between the end of task A and the beginning of task C.

o A second task can be started before part of a first task is done. Solution: Add an extra event where the second task can begin and use multiple arrows to break the first task into two subtasks. In Figure 3, event 2 was added, splitting task A.

Figure 1: Dummy separating simultaneous tasks

Figure 2: Dummy keeping sequence correct

Figure 3: Using an extra event


6. When the network is correct, label all events in sequence with event numbers in the circles. It can be useful to label all tasks in sequence, using letters.

Scheduling: Critical Path Method (CPM)

7. Determine task times—the best estimate of the time that each task should require. Use one measuring unit (hours, days or weeks) throughout, for consistency. Write the time on each task’s arrow.

8. Determine the “critical path,” the longest path from the beginning to the end of the project. Mark the critical path with a heavy line or color. Calculate the length of the critical path: the sum of all the task times on the path.

9. Calculate the earliest times each task can start and finish, based on how long preceding tasks take. These are called earliest start (ES) and earliest finish (EF). Start with the first task, where ES = 0, and work forward. Draw a square divided into four quadrants, as in the following table. Write the ES in the top left box and the EF in the top right.

For each task:

o Earliest start (ES) = the largest EF of the tasks leading into this one

o Earliest finish (EF) = ES + task time for this task

Table 4.12 Arrow diagram time box

ES (earliest start) | EF (earliest finish)
LS (latest start)   | LF (latest finish)

10. Calculate the latest times each task can start and finish without upsetting the project schedule, based on how long later tasks will take. These are called latest start (LS) and latest finish (LF). Start from the last task, where the latest finish is the project deadline, and work backwards. Write the LS in the lower left box and the LF in the lower right box.

o Latest finish (LF) = the smallest LS of all tasks immediately following this one

o Latest start (LS) = LF – task time for this task


11. Calculate slack times for each task and for the entire project.

Total slack is the time a job could be postponed without delaying the project schedule.

Total slack = LS – ES = LF – EF

Free slack is the time a task could be postponed without affecting the early start of any job following it.

Free slack = the earliest ES of all tasks immediately following this one – EF
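Steps 7 through 11 are plain arithmetic once the network is drawn, so they can be checked with a short program. The Python sketch below applies the ES/EF, LS/LF and slack formulas above to an invented four-task network.

tasks = {"A": 3, "B": 5, "C": 2, "D": 4}                   # task -> duration
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}    # task -> predecessors
order = ["A", "B", "C", "D"]                               # a topological order

ES, EF = {}, {}
for t in order:                                 # forward pass (step 9)
    ES[t] = max((EF[p] for p in preds[t]), default=0)
    EF[t] = ES[t] + tasks[t]

deadline = max(EF.values())                     # critical path length (step 8)
succs = {t: [s for s in order if t in preds[s]] for t in order}

LS, LF = {}, {}
for t in reversed(order):                       # backward pass (step 10)
    LF[t] = min((LS[s] for s in succs[t]), default=deadline)
    LS[t] = LF[t] - tasks[t]

for t in order:                                 # slack (step 11)
    total_slack = LS[t] - ES[t]                 # equals LF - EF as well
    free_slack = min((ES[s] for s in succs[t]), default=deadline) - EF[t]
    mark = " (critical)" if total_slack == 0 else ""
    print(f"{t}: ES={ES[t]} EF={EF[t]} LS={LS[t]} LF={LF[t]} "
          f"total slack={total_slack} free slack={free_slack}{mark}")

In this invented network the critical path is B, C, D, with a length of 11 time units; task A carries two units of slack.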

Figure 4.16 shows an example of a completed arrow diagram.

FIGURE 4.16

Process Decision Program Chart

The process decision program chart (PDPC) systematically identifies what might go wrong in a plan under development. Countermeasures are developed to prevent or offset those problems. By using PDPC, you can either revise the plan to avoid the problems or be ready with the best response when a problem occurs.

When to Use

• Before implementing a plan, especially when the plan is large and complex.

• When the plan must be completed on schedule.

• When the price of failure is high.

Procedure

• Obtain or develop a tree diagram of the proposed plan. This should be a high-level diagram showing the objective, a second level of main activities and a third level of broadly defined tasks to accomplish the main activities.

• For each task on the third level, brainstorm what could go wrong.

• Review all the potential problems and eliminate any that are improbable or whose consequences would be insignificant. Show the problems as a fourth level linked to the tasks.

• For each potential problem, brainstorm possible countermeasures. These might be actions or changes to the plan that would prevent the problem, or actions that would remedy it once it occurred. Show the countermeasures as a fifth level, outlined in clouds or jagged lines.

• Decide how practical each countermeasure is. Use criteria such as cost, time required, ease of implementation and effectiveness. Mark impractical countermeasures with an X and practical ones with an O.
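The resulting outline maps naturally onto nested data, which can make a large chart easier to record and review. The Python sketch below uses invented plan content (it is not the medical example that follows); it stores tasks, their potential problems and the O/X-marked countermeasures, then prints the chart as an indented outline.

# A PDPC fragment as nested data; "O" marks a practical countermeasure,
# "X" an impractical one. All content here is invented for illustration.
pdpc = {
    "Train staff on the new process": {          # third-level task
        "Key trainer leaves mid-project": [      # fourth-level problem
            ("Cross-train a backup trainer", "O"),
            ("Hire an outside trainer on short notice", "X"),
        ],
        "Staff skip the sessions": [
            ("Schedule sessions inside normal shifts", "O"),
        ],
    },
}

for task, problems in pdpc.items():
    print("Task:", task)
    for problem, countermeasures in problems.items():
        print("  What could go wrong:", problem)
        for action, mark in countermeasures:
            print(f"    [{mark}] {action}")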

Here are some questions that can be used to identify problems:

o What inputs must be present? Are there any undesirable inputs linked to the good inputs?

o What outputs are we expecting? Might others happen as well?

o What is this supposed to do? Is there something else that it might do instead or in addition?

o Does this depend on actions, conditions or events? Are these controllable or uncontrollable?


o What cannot be changed or is inflexible?

o Have we allowed any margin for error?

o What assumptions are we making that could turn out to be wrong?

o What has been our experience in similar situations in the past?

o How is this different from before?

o If we wanted this to fail, how could we accomplish that?

Example

A medical group is planning to improve the care of patients with chronic illnesses such as diabetes and asthma through a new chronic illness management program (CIMP). They have defined four main elements and, for each of these elements, key components. The information is laid out in the process decision program chart below.

Dotted lines represent sections of the chart that have been omitted. Only some of the potential problems and countermeasures identified by the planning team are shown on this chart.

Process decision program chart example

FIGURE 4.17


For example, one of the possible problems with patients’ goal-setting is backsliding. The team liked the idea of each patient having a buddy or sponsor and will add that to the program design. Other areas of the chart helped them plan better rollout, such as arranging for all staff to visit a clinic with a CIMP program in place. Still other areas allowed them to plan in advance for problems, such as training the CIMP nurses how to counsel patients who choose inappropriate goals.

4.13 Benchmarking

Why Benchmark?

One of the prime reasons for using QFD is to develop a product or service which will excite the customer and get him/her to purchase your product. When a team captures the customer’s perceptions of how well different products perform in the marketplace, the team can better understand what is driving the purchase decision. They are able to determine what the market likes and dislikes. However, they are really still dealing with Customer Perceptions and not actual performance. They have not necessarily learned what they, as a team, have to do to create the desired level of Perceived Performance. Benchmarking your own, and others’, products against the Design Measures which the team has established helps to define the level of Real Performance required to produce the desired level of Perceived Performance. It also helps you to answer the following questions:

Has the team defined the right Measures to predict Customer Satisfaction?

Does the product have perception problems, as opposed to technical problems?

Benchmarking is a relatively expensive and time-consuming process in most industries. Therefore, it is recommended practice to benchmark only against the critical Design Measures. Criticality is defined by how important a particular Measure is to the success of the product and whether there are special circumstances impacting a particular Measure. A special circumstance might include whether a particular Measure is new or complex. Typically, a team might only benchmark 50 percent of the Design Measures. Sorting the list of Design Measures based upon their importance values is a good way to identify which Measures to benchmark.
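As a rough illustration of that sorting step, the Python sketch below ranks Design Measures by importance and keeps roughly the top half for benchmarking; the measure names and importance values are invented.

# Invented Design Measures with invented importance values.
measures = {
    "Time to first response (min)":       9.5,
    "Commands to perform key functions":  8.1,
    "Installation time (min)":            4.2,
    "Manual page count":                  2.0,
}

ranked = sorted(measures, key=measures.get, reverse=True)
to_benchmark = ranked[: len(ranked) // 2]     # roughly the top 50 percent
print("Benchmark these first:", to_benchmark)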

Who should we Benchmark?

Generally, teams benchmark the same products or services for which they captured performance perceptions. In this way, they can try to correlate Actual Performance with the Perceived Performance. A good policy is to benchmark products across the whole spectrum of performance. In this way, it becomes much clearer what level of performance is perceived to be inadequate, what level is acceptable, and what level of performance currently gets customers excited about a product. Benchmarking all of the competitive products is not required; just check representative products.

How do we capture the results of Benchmarking?

There are two schools of thought relative to capturing benchmark results. The first suggests that the team capture the raw benchmark data directly and associate that data with the appropriate Measure. The other suggests that the team translate the raw benchmark data into the same scale as was used to capture the perceived performance ratings. Capturing the raw data and using it directly through the process tends to make it easier to understand exactly how well a product has to perform in order to achieve a desired level of customer satisfaction. However, the raw data sometimes implies too much precision for the process. For example, if the team were benchmarking “Number of Commands Required to Perform the Desired Functions” as a way of predicting whether a software package would be perceived to satisfy “Is easy to use”, they could easily get caught up in counting precise numbers when, in reality, “Less than 10”, “10 to 20”, and “More than 20” might be sufficiently accurate for the purposes of the team. On the other hand, translating the raw benchmark data into the same rating scale as was used to capture perceived performances forces the team to repeatedly translate those ratings back into their original values. This tends to make nuances in the data disappear and be lost from consideration. However, since only numeric rating data is captured with this approach, QFD/CAPTURE could graph this data.

QFD/CAPTURE supports both of these approaches. The general process is to define Related Data columns for the list whose entries are to be benchmarked. Each column would represent a particular product. If the raw data is to be captured, the team would configure the Related Data columns to contain text so that they could enter any type of data and units which are appropriate. If they instead want to capture the ratings, they would configure the columns to contain numbers only.
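The coarse banding in that example is easy to express directly. In the Python sketch below, the thresholds follow the text; the products and raw counts are invented.

def command_band(raw_count):
    """Translate a raw command count into the coarse bands from the text."""
    if raw_count < 10:
        return "Less than 10"
    if raw_count <= 20:
        return "10 to 20"
    return "More than 20"

# Invented products and raw benchmark counts.
for product, count in {"Product A": 7, "Product B": 14, "Product C": 31}.items():
    print(product, "->", command_band(count))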

Setting Target Values

How should we set our Target Values?

The final goal of many QFD projects is to set the Target Values for the Design Measures. This step occurs when the data gathered throughout the process is brought together and final decisions are made to answer the question “What are we really going to do (with respect to this product or service)?” Setting Target Values should be relatively easy because:


The team has already defined where they want their product to be positioned for the customer.

The team has benchmarked the existing products to gain a good understanding of what level of actual performance is required in order to produce the desired level of perceived performance.

The team has evaluated the tradeoffs between Design Measures in order to determine what compromises may be required and how those compromises would be made.

Taking into account all of this information, the team decides upon the Targets which they will shoot for. Normally at this point, the team would not decide how they are going to achieve the Target Values. They are just stating, “We know that we have to achieve this level of performance if we are going to be perceived the way in which we want to be perceived.” Deciding on the implementation approach will generally occur during the conceptualization process.

QFD/CAPTURE supports setting Targets for a list through the Related Data columns associated with that list. Generally, a separate column is defined for each release which is being planned. For example, if a new product were going to be released in 1996 and followed up with enhancements in 1997 and 1998, the team would create a separate Related Data column for each year. This would allow the team to show the progression of product performance over the life of the product. This implies a long-term planning perspective rather than just a short-term, get-it-out-the-door perspective. Since Target data is generally textual, these columns would be configured to display text (as opposed to just numeric data).

4.10 POKA YOKE

Poka Yoke comes from the Japanese words “poka” (mistakes) and “yokeru” (to avoid). It is a “mistake-proofing” concept that aims not only to minimize defects (and waste due to defects), but to eliminate the possibility of defects at the source, by establishing procedures and tools early in the manufacturing process that make it impossible to perform a task or make a component incorrectly.

A simple example is the hole near the rim of most sinks that prevents overflows.

Poka-yoke (pronounced “POH-kah YOH-keh”) means “fail-safing” or “mistake-proofing”: avoiding (yokeru) inadvertent errors (poka). It is a behavior-shaping constraint, or a method of preventing errors by putting limits on how an operation can be performed in order to force the correct completion of the operation. The concept was originated by Shigeo Shingo as part of the Toyota Production System. It was originally described as baka-yoke, but as this means “fool-proofing” (or “idiot-proofing”), the name was changed to the milder poka-yoke.

An example of this in general experience is the inability to remove a car key from the ignition switch of an automobile if the automatic transmission is not first put in the “Park” position, so that the driver cannot leave the car in an unsafe parking condition where the wheels are not locked against movement. In the IT world, another example can be found in a normal 3.5" floppy disk: the top-right corner is shaped in a certain way so that the disk cannot be inserted upside-down. In the manufacturing world, an example might be that the jig for holding pieces for processing only allows pieces to be held in one orientation, or has switches on the jig to detect whether a hole has been previously cut or not, or it might count the number of spot welds created to ensure that, say, four have been executed by the operator.

Implementation

Shigeo Shingo recognises three types of poka-yoke:

1. The contact method identifies defects by whether or not contact is established between the device and the product. Colour detection and other product property techniques are considered extensions of this.

2. The fixed-value method determines whether a given number of movements have been made.

3. The motion-step method determines whether the prescribed steps or motions of the process have been followed.

Poka-yoke devices either give warnings or can prevent, or control, the wrong action. It is suggested that the choice between these two should be made based on the behaviours in the process: occasional errors may warrant warnings, whereas frequent errors, or those impossible to correct, may warrant a control poka-yoke.
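As an illustration of the fixed-value method, the Python sketch below refuses to release a part unless the prescribed number of spot welds has been counted. The function and counts are invented and do not represent any real controller interface.

EXPECTED_WELDS = 4   # the prescribed number of spot welds

def release_part(weld_count):
    """Control-type poka-yoke: refuse to pass a part on a wrong weld count."""
    if weld_count != EXPECTED_WELDS:
        print(f"STOP: {weld_count} welds counted, {EXPECTED_WELDS} required.")
        return False
    return True

print(release_part(3))   # blocked: the defect cannot travel downstream
print(release_part(4))   # released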

SUMMARY

Many tools and techniques have been developed over time to manage quality in various industries. This is because the nature and character of quality varies from one industry to another. Development of quality functions in line with customer expectations is elaborated in this unit. The information collected for various purposes has to be judiciously used by establishing a logical relationship between the data. The House of Quality is presented in a simple way for better understanding. Quality function deployment is extensively deliberated right from the thinking stage. Failure Mode Effect Analysis, its evolution, benefits, types and categories, along with its application, are presented. The FMEA design process and the steps involved in it are presented using examples. The Bath Tub Curve is demonstrated and the reliability concepts are deliberated in detail. Using illustrations, Taguchi’s Loss Function, its applications and the tolerance design are dealt with. The seven tools of quality – Ishikawa diagram, Pareto Chart, Check Sheet, Control Chart, Flow Chart, Histogram and Scatter Diagrams – are deliberated using examples. The seven new management tools, viz. Affinity Diagram, Relations Diagram, Tree Diagram, Matrix Diagram, Matrix Data Analysis, Arrow Diagram and Process Decision Program Chart, are also presented with illustrations.

REVIEW QUESTIONS

1. Explain the role of the customer in ensuring quality in an organization.

2. What is House of Quality? Explain the building process.

3. Explain how the quality function is deployed in an organization without hassles.

4. What is FMEA? Explain the process of analyzing failures.

5. Discuss the complementary nature of reliability and failures using examples.

6. Explain the parameter and tolerance design methodology in the Taguchi technique.

7. Compare and contrast the old and new management tools. Give examples.

8. Detail how benchmarking is arrived at. How is the concept of poka-yoke used in this?


UNIT V

QUALITY SYSTEMS ORGANIZING AND IMPLEMENTATION

INTRODUCTION

Standards give the professional a clarity of achievability. The Quality Management System has to engulf the organization so as to achieve Total Quality Management. Involvement of everyone, whether employee, leader or other stakeholder, in a positive way will bring success. To achieve this, the use of the latest technologies like computers and telecommunications has become part of the system organization and implementation process. This unit deals with Introduction to IS / ISO 9004 : 2000, Quality Management Systems, Guidelines for Performance Improvements, Quality Audits, TQM Culture, Leadership, Quality Council, Employee Involvement, Motivation, Empowerment, Recognition and Reward, Information Technology – Computers and Quality Functions, Internet and Electronic Communications, and Information Quality Issues.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:

• Classify the quality systems
• Develop and organize Quality Management Systems
• Handle change management on culture
• Focus on all the stakeholders
• Apply the latest developments in ICT for ensuring quality in the organization

5.1 INTRODUCTION TO IS / ISO 9004 : 2000

ISO 9000

The ISO 9000 Series, issued in 1987 by the International Organization for Standardization (ISO), is a set of international standards on quality and quality management. The standards are generic and not specific to any particular product. They were adopted by the American Society for Quality Control (ASQC), now the American Society for Quality, and issued in the United States as the ANSI/ASQC Q90 Series (revised in 1994 as the ANSI/ASQC Q9000 Series). ISO 9000:2000 is the most recent revision of the standards.

ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization, and is administered by accreditation and certification bodies. For a manufacturer, some of the requirements in ISO 9001 (which is one of the standards in the ISO 9000 family) would include:

• a set of procedures that cover all key processes in the business;
• monitoring manufacturing processes to ensure they are producing quality products;
• keeping proper records;
• checking outgoing product for defects, with appropriate corrective action where necessary; and
• regularly reviewing individual processes and the quality system itself for effectiveness.

A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is “ISO 9001 certified” or “ISO 9001 registered.” Certification to an ISO 9000 standard does not guarantee the compliance (and therefore the quality) of end products and services; rather, it certifies that consistent business processes are being applied.

Although the standards originated in manufacturing, they are now employed across a wide range of other types of organizations. A “product”, in ISO vocabulary, can mean a physical object, or services, or software. In fact, according to ISO in 2004, “service sectors now account by far for the highest number of ISO 9001:2000 certificates - about 31% of the total”.

History of ISO 9000

Pre-ISO 9000

During World War II, there were quality problems in many British high-tech industries such as munitions, where bombs were going off in factories. The adopted solution was to require factories to document their manufacturing procedures and to prove by record-keeping that the procedures were being followed. The name of the standard was BS 5750, and it was known as a management standard because it did not specify what to manufacture, but how to manage the manufacturing process. According to Seddon, “In 1987, the British Government persuaded the International Standards Organisation to adopt BS 5750 as an international standard. BS 5750 became ISO 9000.”

Certification

ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. It is important to note that it is not possible to be certified to ISO 9000. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization’s quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CBs) are accepted worldwide.

The applying organization is assessed based on an extensive sample of its sites, functions, products, services and processes; a list of problems (“action requests” or “non-compliances”) is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved.

An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model, there are no grades of competence within ISO 9001.

Fundamentals of ISO 9000

ISO 9000 represents an evolution of traditional quality systems rather than a technical change. Whereas traditional quality systems rely on inspection of products to ensure quality, the ISO 9000–compliant quality system relies on the control and continuous improvement of the processes used to design, produce, inspect, install, and service products. In short, ISO 9000 represents a systemic tool for bringing quality processes under control. Once processes are controlled, they can be continuously improved, resulting in higher-quality products.

ISO 9000 represents a significant step beyond ensuring that specific products or services meet specifications or industry standards. It certifies that a facility has implemented a quality system capable of consistently producing quality products. That is, ISO 9000 does not certify the quality of products; it certifies the processes used to develop them.

Thus ISO 9000 is a process-oriented rather than a results-oriented standard. It affects every function, process, and employee at a facility, and it stresses management commitment to quality. But above all, it is customer-focused: it strives to meet or exceed customer expectations.

ISO 9000 is not a prescriptive standard for quality. The requirements section (ISO 9001), which covers all aspects of design, development, production, test, training, and service, is less than 10 pages long. For example, when addressing the product design process, ISO 9000 focuses on design inputs, outputs, changes, and verification. It is not meant to inhibit creative thinking.

ISO 9000 is a system quality standard that provides requirements and guidance on all aspects of a company’s procedures, organization, and personnel that affect quality—from product inception through delivery to the customer. It also provides significant requirements and guidance on the quality of the output delivered to the customer. Pertinent questions are: What benefits will the proposed changes to the procedures, organization, and personnel provide to the customer? Will the proposed changes help to continuously improve product delivery schedules and product quality and reduce the amount of variance in product output?

ISO 9000 does not require inspection to verify quality, nor is it the preferred method. ISO requires that the output be verified according to documented process-control procedures. ISO 9000 does not mandate that specific statistical processes be used; it requires the user to implement appropriate statistical processes. ISO 9000 mandates product-control methods such as inspection only when process-control methods are neither practical nor feasible.


ISO 9000 does not provide industry-specific performance requirements. It provides a quality model that can be applied to virtually every industry procurement situation and is being used worldwide for commercial and, recently, government procurements.

Many suppliers already have a quality system in place, be it simple or elaborate. ISO 9000 does not require a supplier to add new or redundant requirements to an existing quality system. Rather, it requires that the supplier specify a basic, common-sense, minimal quality system that will meet the quality needs of the customer. Thus, many suppliers find that their operative quality system already meets some or all of the ISO 9000 requirements. They only need to show that their existing procedures correspond to the relevant sections of ISO 9000.

ISO 9000 provides suppliers with the flexibility of designing a quality system for their particular type of business, market environment, and strategic objectives. It is expected that management, aided by experienced internal quality personnel and, if necessary, external ISO consultants, will determine the exact set of supplier quality requirements. To ensure the overall success of the quality program, however, the specific work procedures should be created by those actually doing the work rather than by management or ISO consultants. Although an organization’s documentation of work procedures may be ISO 9000 compliant, if employees do not follow the procedures, the organization may not attain ISO 9000 certification. Drawing upon employee expertise and keeping employees involved in the process when improving and controlling procedures are critical to attaining ISO 9000 compliance.

Developing a quality system is not a sprint, but a journey, and because processes are continuously being improved, it is a journey without an end. ISO 9000 does not mandate the use of short-term motivational techniques to foster employee enthusiasm for a supplier’s quality system program. Attempting to motivate employees by promising lower overhead or greater market share is not likely to be successful. Instead, it is recommended that employees be educated on how ISO 9000 standards will help them perform their jobs better and faster.

ISO 9000 emphasizes that for any quality system to be successful, top-management commitment and active involvement are essential. Management is responsible for defining and communicating the corporate quality policy. It must define the roles and responsibilities of individuals responsible for quality and ensure that employees have the proper background for their jobs and are adequately trained. Management must periodically review the effectiveness of the quality system. It should not back the effort to comply with ISO 9000 during its inception and then back down when the scope and cost of the effort is fully realized. When employees sense that management commitment has diminished, their own commitment slackens. Employees typically want out of a costly project not backed by management.

ISO 9000 does require that an organization have documented and implemented quality procedures that ensure personnel understand the quality system, that management maintain control of the system, and that internal and external audits be performed to verify the system’s performance. Because ISO 9000 affects the entire organization, all employees should be given at least basic instruction in the ISO 9000 process and its specific implementation at their facility. Training should emphasize goals, benefits, and the specific responsibilities and feedback required of each employee. ISO 9000 uses customer satisfaction as its benchmark. But the “customers” of ISO 9000–compliant processes include not only the obvious end-users of the product, but also an organization’s product designers, manufacturers, inspectors, deliverers, and sales force.

Improving the processes that produce a quality product can provide an additional benefit: when the processes are well defined and constant and when employees are well trained to perform these processes, employee safety typically improves significantly. Also, during the course of improving its processes, a company often finds after close inspection that many of its processes and procedures are ineffectual and can be eliminated. Thus, while ISO 9000 requires preparation and maintenance of a formidable set of documents and records, the total paperwork of a company implementing ISO 9000 may decrease significantly in the long run. Other benefits of ISO 9000 compliance are a decrease in product defects and customer complaints and increased manufacturing yields. A final but very important by-product of implementing ISO 9000 is a heightened sense of mission at a company and an increased level of cooperation between departments.

ISO 9000 is not product-quality oriented. It does not provide criteria for separating acceptable output from defective output. Instead, it is a strategy for continuous improvement where employees meet and exceed customer quality requirements and, in doing so, continuously improve the quality of the product.


ISO 9000 recognizes that when a customer is looking at a specific part of a product (e.g., car, stereo system), he is often looking at an item (e.g., engine, stereo cabinet) provided by a subcontractor. Hence, ISO 9000 requires that a company verify that its subcontractors are providing quality items. Today, organizations with excellent quality systems often partner with their subcontractors. ISO 9000 provides an excellent framework for such a relationship, with subcontractors providing the raw materials and components of the final product.

The ISO 9000 family is a set of “quality system management” standards, the first in a set of evolving management system standards. Standards for environmental management are in place; standards for occupational safety, health management, and energy management will soon follow. These new standards will affect the space and aircraft industries just as they affect other industries.

In summary, ISO 9000 compliance provides customers with the assurance that approved raw materials for a product have been purchased and that the product has been manufactured according to the correct specifications, assembled by trained employees, properly inspected and tested, adequately packaged for preservation, and transported in a manner that prevents damage to it en route. Overall, ISO 9000 compliance helps generate quality awareness among a company’s employees, an improved competitive position for the company, an enhanced customer quality image, and increased market share and profits.

Components of the ISO 9000 Series

The ISO 9000 Series includes three standards:

ISO 9000:2000 Quality Management Systems—Fundamentals and Vocabulary
ISO 9001:2000 Quality Management Systems—Requirements
ISO 9004:2000 Quality Management Systems—Guidelines for Performance Improvement

ISO 9000 family

ISO 9000 includes the following standards:

ISO 9000:2005, Quality management systems - Fundamentals and vocabulary, covers the basics of what quality management systems are and also contains the core language of the ISO 9000 series of standards.


ISO 9001:2000, Quality management systems - Requirements, is intended for use in any organization which designs, develops, manufactures, installs and/or services any product or provides any form of service. It provides a number of requirements which an organization needs to fulfill if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. This is the only implementation for which third-party auditors may grant certifications.

ISO 9004:2000, Quality management systems - Guidelines for performance improvements, covers continual improvement. This gives you advice on what you could do to enhance a mature system. This standard very specifically states that it is not intended as a guide to implementation.

ISO 9002:1994 and ISO 9003:1994 were discontinued in the ISO 9000:2000 family of standards. Organizations that do not have design or manufacturing responsibilities (and were previously certified using ISO 9002:1994) will now have to use ISO 9001:2000 for certification. These organizations are allowed to exclude design and manufacturing requirements in ISO 9001:2000 based on the rules for exception given in Clause 1.2, Permissible Exclusions.

ISO Facts

The International Organization for Standardization (ISO), founded in 1946, is a global federation of national standards organizations that includes some 130 member nations:

• ISO is based in Geneva, Switzerland.
• ISO’s mission is to develop standards that facilitate trade across international borders.
• In 1979, Technical Committee 176 (ISO/TC 176) was established to create international standards for quality assurance.
• Representatives from the United States and many other countries served on the committees responsible for developing ISO 9000.
• Early in the 1990s, the chair of the consortium was a U.S. citizen from American Telephone & Telegraph (AT&T).
• The U.S. standards organization within ISO is the American National Standards Institute (ANSI).


• The American Society for Quality (ASQ) has published a U.S. version of the ISO 9000 standards under the name Q9000.
• ISO serves only as a disseminator of information on system quality.
• ISO 9000 certificates are not issued on behalf of ISO.
• ISO does not monitor the activities of ISO 9000 accreditation bodies. Monitoring is done by accreditation boards within member nations.

Philosophy of ISO 9000

ISO 9000 places the responsibility for the establishment, performance, and maintenance of a quality system directly with a company’s top management:

• ISO requires the top management to define a quality policy, provide adequate resources for its implementation, and verify its performance.
• Top management must demonstrate how its employees acquire and maintain awareness of its quality policy.

The ISO 9000 process strives for generic applicability:

• No specific methods, statistical processes, or techniques are mandated.
• Emphasis is on the overall objective of meeting customer expectations regarding the output of the system quality process.
• ISO has said that it will never issue industry (product-specific) quality guidelines.

ISO 9000 strives to achieve a quality system by employing the following practices for continuous improvement:

• Prevention rather than detection by inspection
• Comprehensive review of critical process points
• Ongoing communication between the facility, its suppliers, and its customers
• Documentation of processes and quality outcomes
• Management commitment at the highest levels

ISO 9000 provides a clear definition of the management style required to achieve a “world-class” quality system:

• Formal organization that delineates responsibilities
• Documented, authorized, and enforced procedures for all key activities
• Full set of archived but periodically analyzed quality outcome records
• Set of periodic reviews to track system quality performance and plan and implement corrective actions
• Philosophy of regulating, but not eliminating, individual initiative in achieving system quality

ISO 9000 provides a facility with a formal management style leading to system quality. The measure of success in implementing system quality is determined by well-organized, well-planned, and well-executed periodic internal and external audits of the processes and quality outcomes of the facility.

The majority of ISO member nations will not mandate the adoption of the ISO 9000 standards in the foreseeable future. To date, only Australia mandates adoption of the standards.

How well the ISO standards facilitate trade in the international marketplace will determine how widespread their use becomes.

Advantages

According to the Providence Business News, implementing ISO often gives the following advantages:

1. Create a more efficient, effective operation
2. Increase customer satisfaction and retention
3. Reduce audits
4. Enhance marketing
5. Improve employee motivation, awareness, and morale
6. Promote international trade

Problems

A common criticism of ISO 9000 is the amount of money, time and paperwork required for registration.


According to Barnes, “Opponents claim that it is only for documentation. Proponents believe that if a company has documented its quality systems, then most of the paperwork has already been completed.”

The ISO 9004:2000 standard

ISO 9004:2000 goes beyond ISO 9001:2000 in that it provides guidance on how you can continually improve your business’ quality management system so that it benefits not only your customers but also:

• employees
• owners
• suppliers
• society in general

By measuring these groups’ satisfaction with your business, you’ll be able to assess whether you’re continuing to improve.

The ISO 9000 series, which includes 9001 and 9004, is based around eight quality management principles that senior managers should use as a framework to improve the business:

• Customer focus - they must understand and fulfil customer needs.

• Leadership - they should demonstrate strong leadership skills to increase employee motivation.

• Involvement of people - all levels of staff should be aware of the importance of providing what the customer requires and their responsibilities within the business.

• Process approach - identifying your essential business activities and considering each one as part of a process.

• System approach to management - managing your processes together as a system, leading to greater efficiency and focus. You could think of each process as a cog in a machine, helping it to run smoothly.

• Continual improvement - this should be a permanent business objective.

• Factual approach to decision-making - senior staff should base decisions on thorough analysis of data and information.


• Mutually beneficial supplier relationships - managers should recognise that your business and its suppliers depend on each other.

As ISO 9004:2000 is a set of guidelines and recommendations, you can’t be certified as achieving it.

5.2 QUALITY MANAGEMENT SYSTEMS

A Quality Management System (QMS) can be defined as a set of policies, processes and procedures required for planning and execution (production / development / service) in the core business area of an organization. A QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. A QMS enables an organization to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.

Concept of QMS

The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously, goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking the product to meet ‘quality criteria’. Mass production brought huge teams of people together to work on specific stages of production where one person would not necessarily complete a product from start to finish. In the late 1800s, pioneers such as Frederick Winslow Taylor and Henry Ford recognised the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor established quality departments to oversee the quality of production and the rectifying of errors, and Ford emphasised standardisation of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the quality department and was implemented by inspection of product output to catch defects.

Application of statistical control came later as a result of World War production methods. Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.

Quality, as a profession and the managerial process associated with the quality function, was introduced during the second half of the 20th century and has evolved since then. No other profession has seen as many changes as the quality profession.


The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s, and 1960s. The 1970s were an era of quality engineering, and the 1990s saw quality systems as an emerging field. Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.

Quality management organizations and awards

The International Organization for Standardization’s ISO 9000 series describes standards for a QMS addressing the processes surrounding the design, development and delivery of a general product or service. Organizations can participate in a continuing certification process to demonstrate their compliance with the standard.

The Malcolm Baldrige National Quality Award is a competition to identify and recognize top-quality U.S. companies. This model addresses a broadly based range of quality criteria, including commercial success and corporate leadership. Once an organization has won the award, it has to wait several years before being eligible to apply again.

The European Foundation for Quality Management’s EFQM Excellence Model supports an award scheme similar to the Malcolm Baldrige Award for European companies.

In Canada, the National Quality Institute presents the ‘Canada Awards for Excellence’ on an annual basis to organizations that have displayed outstanding performance in the areas of Quality and Workplace Wellness, and have met the Institute’s criteria with documented overall achievements and results.

The Alliance for Performance Excellence is a network of state, local, and international organizations that use the Malcolm Baldrige National Quality Award criteria and model at the grassroots level to improve the performance of local organizations and economies. NetworkforExcellence.org is the Alliance web site; visitors can find Alliance members in their state and get the latest news and events from the Baldrige community.

5.3 GUIDELINES FOR PERFORMANCE IMPROVEMENTS

1. Purpose: The purpose of a Performance Improvement Plan is to communicate to the employee the specific job performance areas that do not meet expected standards.


2. Develop a Performance Improvement Plan:

a) Clearly state why the employee’s job performance is a concern and how it impacts the work environment.

b) Summarize the facts and events that necessitate the development of a Performance Improvement Plan.

c) Develop specific and measurable steps to improve performance and include the employee’s ideas for improvement.

d) Establish reasonable timelines for improved performance on each expectation.

e) Conduct periodic reviews to monitor progress being made toward the expected outcome and provide feedback.

f) Communicate the consequences of failure to meet expectations and sustain improved performance.

3. Implement the Performance Improvement Plan:

a) Document each step of the Performance Improvement Plan.

b) Provide constructive feedback to help the employee understand how he/she is doing and what is expected.

c) Focus on the job and not on the person. Concentrate on a specific behavior to enable the employee to understand what you want and why. The individual will feel less defensive.

* Example with focus on behavior: “Your report is two days late.”
* Example with focus on person: “You are not very reliable about getting things done on time.”

d) Always meet with the employee and provide an opportunity for discussion and feedback.

e) At the end of the Performance Improvement Plan period, the supervisor will determine if the process was satisfactorily completed or if progressive discipline will be implemented in conjunction with Human Resources.

5.4 QUALITY AUDITS

Quality audit means a systematic, independent examination of a quality system. Quality audits are typically performed at defined intervals and ensure that the institution has clearly defined internal quality monitoring procedures linked to effective action. The checking determines whether the quality system complies with applicable regulations or standards. The process involves assessing the standard operating procedures (SOPs) for compliance with the regulations, and also assessing the actual process and results against what is stated in the SOPs.

The U.S. Food and Drug Administration requires quality auditing to be done as part of its Quality System Regulation (QSR) for medical devices, Title 21 of the United States Code of Federal Regulations, Part 820.

The process of a Quality Audit can be managed using software tools, often Web-based.

Internal quality auditing is an important element in ISO’s quality system standard, ISO 9001. With the upgrade of the ISO 9000 series of standards from the 1994 to the 2000 series, the focus of audits has shifted from procedural adherence alone to measurement of the effectiveness of the Quality Management System processes in delivering in accordance with planned results.

Higher education quality audit is an approach adopted by several countries, including New Zealand, Australia, Sweden, Finland, Norway and the USA. It was initiated in the UK and is a term designed to focus on procedures rather than quality.

Guidelines for Planning and Performing Quality Audits (ISO 10011-1:1990)

Quality audit objectives

Quality audits are intended to achieve the following kinds of objectives:

• To determine to what extent your quality system:
  • Achieves its objectives.
  • Conforms to your requirements.
  • Complies with regulatory requirements.
  • Meets customers’ contractual requirements.
  • Conforms to a recognized quality standard.
• To improve the efficiency and effectiveness of your quality management system.
• To list your quality system in the registry of an independent agency.
• To verify that your quality system continues to meet requirements.


Professional conduct

Auditors must behave in a professional manner. Auditors must:

• Have integrity and be independent and objective.
• Have the authority they need to do a proper job.
• Avoid compromising the audit by discussing audit details with auditees during the audit.

The lead auditor’s job

A lead auditor’s job is to:

• Manage the audit.
• Assign audit tasks.
• Help select auditors.
• Orient the audit team.
• Prepare the audit plan.
• Define auditor qualifications.
• Clarify quality audit requirements.
• Communicate audit requirements.
• Prepare audit forms and checklists.
• Review quality system documents.
• Report major nonconformities immediately.
• Interact with the auditee’s management and staff.
• Prepare, submit, and discuss audit reports.

Auditor’s job

An auditor’s job is to:

• Evaluate the quality system.
• Carry out assigned audit tasks.
• Comply with audit requirements.
• Respect all confidentiality requirements.
• Collect evidence about the quality system.
• Document audit observations and conclusions.
• Safeguard audit documents, records, and reports.
• Determine whether the quality policy is being applied.
• Find out if the quality objectives are being achieved.
• See whether quality procedures are being followed.
• Detect evidence that might invalidate audit results.


Client’s job

A client’s job is to:

• Initiate the audit process.
• Select the auditor organization.
• Decide whether an audit needs to be done.
• Define the purpose and scope of the audit.
• Ensure that audit resources are adequate.
• Determine how often audits must be done.
• Specify which follow-up actions the auditee should take.
• Indicate which standards should be used to evaluate compliance.
• Select the elements, activities, and locations that must be audited.
• Ensure enough evidence is collected to draw valid conclusions.
• Receive and review the reports prepared by auditors.

NOTE: A “client” is the organization that asked for the audit. The client could be an auditee, a customer, a regulatory body, or a registrar.

Auditee’s job

An auditee’s job is to:

• Explain the nature, purpose, and scope of the audit to employees.
• Appoint employees to accompany and assist the auditors.
• Ensure that all personnel cooperate fully with the audit team.
• Provide the resources the audit team needs to do the audit.
• Allow auditors to examine all documents, records, and facilities.
• Correct and prevent problems that were identified by the audit.

NOTE: An “auditee” is the organization being audited or a member of that organization.

When to do an audit

A client may initiate an audit because:

• A regulatory agency requires an audit.
• A previous audit indicated that a follow-up audit was necessary.
• An auditee has made important changes in:
  • Policies or procedures.
  • Technologies or techniques.
  • Management or organization.

An auditee may also carry out audits on a regular basis to improve quality system performance or to achieve business objectives.

Prepare an audit plan

• The auditor should begin planning the audit by reviewing documents (e.g. manuals) that both describe the quality system and explain how it is attempting to meet quality requirements.

• If this preliminary review shows that the quality system is inadequate, the audit process should be suspended until this inadequacy is resolved.

• Prepare an audit plan. The plan should be prepared by the lead auditor and approved by the client before the audit begins. The audit plan should:

• Define the objectives and scope of the audit.
• Explain how long each phase of the audit will take.
• Specify where and when the audit will be carried out.
• Introduce the lead auditor and his team members.
• Identify the quality elements that will be audited.
• Identify the groups and areas that will be audited.
• List the documents and records that will be studied.
• List the people who are responsible for quality and whose areas and functions will be audited.
• Explain when meetings will be held with the auditee’s senior management.
• Clarify who will get the final audit report and when it will be ready.

Perform the quality audit

• Start the quality audit. Start the audit by having an opening meeting with the auditee’s senior management. This meeting should:
  • Introduce the audit team.
  • Clarify scope, objectives, and schedule.
  • Explain how the audit will be carried out.
  • Confirm that the auditee is ready to support the audit process.
• Prepare audit working papers.
  • Prepare checklists (used to evaluate quality management system elements).


  • Prepare forms (used to record observations and collect evidence).

• Collect evidence by:
  • Interviewing personnel.
  • Reading documents.
  • Reviewing manuals.
  • Studying records.
  • Reading reports.
  • Scanning files.
  • Analyzing data.
  • Observing activities.
  • Examining conditions.

• Confirm interview evidence. Evidence collected through interviews should, whenever possible, be confirmed by more objective means.

• Investigate clues. Clues that point to possible quality management system nonconformities should be thoroughly and completely investigated.

• Document observations. Auditors must study the evidence and document their observations.

• List nonconformities. Auditors must study their observations and make a list of key nonconformities. They must ensure that nonconformities are:
  • Supported by the evidence.
  • Cross-referenced to the standards that are being violated.

• Draw conclusions. Auditors must draw conclusions about how well the quality system is applying its policies and achieving its objectives.

• Discuss results. Auditors should discuss evidence, observations, conclusions, recommendations, and nonconformities with auditee senior managers before they prepare a final audit report.

Prepare the audit report

• Prepare the final audit report. The audit report should be dated and signed by the lead auditor. This report should include:
  • The detailed audit plan.
  • A review of the evidence that was collected.
  • A discussion of the conclusions that were drawn.
  • A list of the nonconformities that were identified.
  • A judgment about how well the quality system complies with all quality system requirements.


  • An assessment of the quality system’s ability to achieve quality objectives and apply the quality system policy.

• Submit the audit report. The lead auditor should send the audit report to the client, and the client should send it to the auditee.

Follow-up steps

• Take remedial actions. The auditee is expected to take whatever actions are necessary to correct or prevent nonconformities.

• Schedule follow-up audits. Follow-up audits should be scheduled in order to verify that corrective and preventive actions were taken.

Quality Management vs. Quality Audit

In the ePMbook, we will make a distinction between Quality Management and Quality Audit.

By Quality Management, we mean all the activities that are intended to bring about the desired level of quality.

By Quality Audit, we mean the procedural controls that ensure participants are adequately following the required procedures.

These concepts are related, but should not be confused. In particular, Quality Audit relates to the approach to quality that is laid down in quality standards such as the ISO-900x standards.

The abbreviation “QA” has been generally avoided in the ePMbook as it can mean different things - e.g. “Quality Assurance”, “Quality Audit”, testing, external reviews, etc.

The principle behind Quality Audit

The principles of Quality Audit, in the sense we mean it here, are based on the style of quality standards used in several formal national and international standards such as the ISO-900x international quality standards. These standards do not in themselves create quality. The logic is as follows.

Every organization should define comprehensive procedures by which their products or services can be delivered consistently to the desired level of quality. As was discussed in the section on Quality Management, maximum quality is rarely the desired objective, since it can cost too much and take too long. The average product or service provides a sensible compromise between quality and cost. There is also a legitimate market for products that are low cost and low quality.

Standards authorities do not seek to make that business judgement and enforce it upon businesses, except where certain minimum standards must be met (e.g. all cars must have seat belts that meet minimum safety standards, but there is no attempt to define how elegant or comfortable they are).

The principle is that each organization should create thorough, controlled procedures for each of its processes. Those procedures should deliver the quality that is sought. The Quality Audit, therefore, only needs to ensure that procedures have been defined, controlled, communicated and used. Processes will be put in place to deal with corrective actions when deviations occur. This principle can be applied to continuous business process operations or recurring project work. It would not be normal to establish a set of quality-controlled procedures for a one-off situation, since the emphasis is on consistency.

This principle may be applied whether or not the organization seeks to establish or maintain an externally recognized quality certification such as ISO-900x. To achieve a certification, the procedures will be subjected to internal and external scrutiny.

Preparing for Quality Audit

Thorough procedures need to be defined, controlled, communicated and used.

Thorough: Procedures should cover all aspects of work where conformity and standards are required to achieve desired quality levels. For example, one might decide to control formal program testing, but leave the preliminary testing of a prototype to the programmer’s discretion.

Procedures: Any recurring aspect of work could merit regulation. The style and depth of the description will vary according to needs and preferences, provided it is sufficiently clear to be followed.

Defined: A major tenet is that the defined procedures are good and will lead to the desired levels of quality. Considerable thought, consultation and trialing should be applied in order to define appropriate procedures. Procedures will often also require defined forms or software tools.

Controlled: As with any good quality management, the procedures should be properly controlled in terms of accessibility, version control, update authorities, etc.

Communicated: All participants need to know about the defined procedures: that they exist, where to find them, and what they cover. Quality reviewers are likely to check that team members understand the procedures.

Used: The defined procedures should be followed. Checks will be made to ensure this is the case. A corrective action procedure will be applied to deal with shortcomings. Typically, the corrective action would either be to learn the lesson for next time, or to re-work the item if it is sufficiently important.

There is no reason why these Quality Audit techniques should conflict with the project’s Quality Management processes. Where project work is recurring, the aim should be for the Quality Methods and other procedures to be defined once for both purposes.

Problems may occur where the current project has significant differences from earlier ones. Quality standards may have been set in stone as part of a quality certification. In extreme situations this can lead to wholly inappropriate procedures being forced upon the team, for example, using traditional structured analysis and design in a waterfall-style approach for what would be handled best using iterative prototyping. The Project Manager may need to re-negotiate quality standards with the organization’s Quality Manager.

Operating Quality Audit

A Quality Audit approach affects the entire work lifecycle:

• Pre-defined standards will impact the way the project is planned.
• Quality requirements for specific work packages and deliverables will be identified in advance.
• Specific procedures will be followed at all stages.
• Quality Methods must be defined and followed.
• Completed work and deliverables should be reviewed for compliance.

This should be seen as an underlying framework and set of rules to apply in the project’s Quality Management processes.


Quality Audit reviews

Although the impact of Quality Audit will be felt across all parts of the lifecycle, specific Quality Audit activities tend to be applied as retrospective reviews of whether the Project Team correctly followed its defined procedures. Such reviews are most likely to be applied at phase end and project completion. Of course, the major drawback of such a review is that it is normally too late to affect the outcome of the work. The emphasis is often on learning lessons and fixing administrative items. In many ways, the purpose of the review is to encourage conformity by the threat of a subsequent bad experience with the quality police.

CHARACTERISTICS OF AUDITS

What is a quality auditor and what is the purpose of a quality audit? Is a quality audit similar to a financial audit? Is an audit the same as a surveillance or inspection? These types of questions are often asked by those unfamiliar with the quality auditing profession. As for what a quality auditor is, Allan J. Sayle says it best:

Auditors are the most important of the quality professionals. They must have the best and most thorough knowledge of business, systems, developments, etc. They see what works, what does not work, strengths, weaknesses of standards, codes, procedures and systems.

The purpose of a quality audit is to assess or examine a product, the process used to produce a particular product or line of products, or the system supporting the product to be produced. A quality audit is also used to determine whether or not the subject of the audit is operating in compliance with governing source documentation, such as corporate directives or federal and state environmental protection laws and regulations. A quality audit distinguishes itself from a financial audit in that a financial audit’s primary objective is to verify the integrity and accuracy of the accounting methods used within the organization. Yet, despite this basic difference, it is important to note that many present-day quality audit techniques have their traditional roots in financial audits.

WHO’S AUDITING WHOM?

The audit can be accomplished by three different sets of auditors and auditees: first party, second party, and third party.


First-Party Audits

The first-party audit is also known as an internal audit or self-audit. It is performed within your own company. This can be a central office group auditing one of the plants, auditing within a division, local audits within the plant, or any number of similar combinations. There are no external customer-supplier audit relationships here, just internal customers and suppliers.

Second-Party Audits

A customer performs a second-party audit on a supplier. A contract is in place and goods are being, or will be, delivered. If you are in the process of approving a potential supplier through the application of these auditing techniques, you are performing a supplier survey. A survey is performed before the contract is signed; an audit is performed after the contract is signed. Second-party audits are also called external audits, if you are the one doing the auditing. If your customer is auditing you, it is still a second-party audit, but, since you are now on the receiving end, it is an extrinsic (not external) audit.

Third-Party Audits

Regulators or registrars perform third-party audits. Government inspectors may examine your operations to see if regulations are being obeyed. Within the United States, this is quite common in regulated industries, such as nuclear power stations and medical device manufacturers. Through these regulatory audits, the consumer public receives assurance that the laws are being obeyed and products are safe. Registration audits are performed as a condition of joining or being approved. Hospitals and universities are accredited by non-governmental agencies to certain industry standards. Trade organizations may wish to promote the safety and quality of their industry products or services through an audit program and seal of approval. Other countries often use the term certification rather than registration. Businesses around the world are registering their facilities to the ISO 9001 standard in order to gain marketing advantage. Done properly, this registration promotes better business practices and greater efficiencies.


5.5 TQM CULTURE

Culture

Culture is the pattern of shared beliefs and values that provides the members of an organization rules of behaviour or accepted norms for conducting operations. It is the philosophies, ideologies, values, assumptions, beliefs, expectations, attitudes, and norms that knit an organization together and are shared by employees.

For example, IBM’s basic beliefs are (1) respect for the individual, (2) best customer service and (3) pursuit of excellence. In turn, these beliefs are operationalized in terms of strategy and customer values. In simple terms, culture provides a framework to explain “the way things are done around here”.

Other examples of basic beliefs include:

Company           Basic belief
Ford              Quality is job one
Delta             A family feeling
3M                Product innovation
Lincoln Electric  Wages proportionate to productivity
Caterpillar       Strong dealer support; 24-hour spare parts support around the world
McDonald’s        Fast service, consistent quality

Institutionalizing strategy requires a culture that supports the strategy. For most organizations, a strategy based on TQM requires a significant if not sweeping change in the way people think. Jack Welch, head of General Electric and one of the most controversial and respected executives in America, states that cultural change must be sweeping: not incremental change but “quantum”. His cultural transformation at GE calls for a “boundary-less” company where internal divisions blur, everyone works as a team, and both suppliers and customers are partners. His cultural concept of change may differ from Juran, who says that, “when it comes to quality, there is no such thing as improvement in general. Any improvement is going to come about project by project and no other way.” The acknowledged experts agree on the need for a cultural or value system transformation:


Deming calls for a transformation of the American management style. Feigenbaum suggests a pervasive improvement throughout the organization. According to Crosby, “Quality is the result of a carefully constructed culture; it has to be the fabric of the organization.”

It is not surprising that many executives hold the same opinions. In a Gallup Organization survey of 615 business executives, 43 percent rated a change in corporate culture as an integral part of improving quality. The needed change may be given different names in different companies. Robert Crandall, CEO of American Airlines, calls it “an innovative environment,” while at DuPont it is “The Way People Think” and at Allied Signal “Workers’ attitudes had to change.” Xerox specified a 5-year cultural change strategy called Leadership through Quality.

Successful organizations have a central core culture around which the rest of the company revolves. It is important for the organization to have a sound basis of core values into which management and other employees will be drawn. Without this central core, the energy of members of the organization will dissipate as they develop plans, make decisions, communicate, and carry on operations without a fundamental criterion of relevance to guide them. This is particularly true in decisions related to quality. Research has shown that quality means different things to different people and levels in the organization. Employees tend to think like their peers and think differently from those at other levels. This suggests that organizations will have considerable difficulty in improving quality unless core values are embedded in the organization.

Commitment to quality as a core value for planning, organizing and control will be doubly difficult when a concern for the practice is lacking. Research has shown that many U.S. supervisors believe that a concern for quality is lacking among workers and managers. Where this is the case, the perceptions of these supervisors may become a self-fulfilling prophecy.

Embedding a Culture of Quality

It is one thing for top management to state a commitment to quality but quite another for this commitment to be accepted or embedded in the company. The basic vehicle for embedding an organizational culture is a teaching process in which desired behaviours and activities are learned through experiences, symbols, and explicit behaviour. Once again, the components of the total quality system provide the vehicles for change. Above all, demonstration of commitment by top management is essential.


This commitment is demonstrated by behaviours and activities that are exhibited throughout the company. Categories of behaviours include:

Signalling: Making statements or taking actions that support the vision of quality, such as mission statements, creeds, or charters directed toward customer satisfaction. Publix supermarkets’ “Where shopping is a pleasure” and JC Penney’s “The customer is always right” are examples of such statements.

Focus: Every employee must know the mission, his or her part in it, and what has to be done to achieve it. What management pays attention to and how they react to a crisis is indicative of this focus. When all functions and systems are aligned and when practice supports the culture, everyone is more likely to support the vision. Johnson and Johnson’s cool reaction to the Tylenol scare is such an example.

Employee policies: These may be the clearest expression of culture, at least from the viewpoint of the employee. A culture of quality can be easily demonstrated in such policies as the reward and promotion system, status symbols, and other human resource actions.

Executives at all levels could learn a lesson from David T. Kearns, Chairman and Chief Executive Officer of Xerox Corporation. In an article for the academic journal Academy of Management Executive, he describes the change at Xerox: “At the time Leadership-Through-Quality was introduced, I told our employees that customer satisfaction would be our top priority and that it would change the culture of the company. We redefined quality as meeting the requirements of our customers. It may have been the most significant strategy Xerox ever embarked on.”

Among the changes brought about were the management style and the role of first-line management. Kearns continues: “We altered the role of first-line management from that of the traditional, dictatorial foreman to that of a supervisor functioning primarily as a coach and expediter.”

Using a modification of the Ishikawa (fishbone) diagram, Xerox demonstrated how the major component of the company’s quality system was used for the transition to TQM.


5.6 LEADERSHIP

Leadership Commitment

People create results. Involving all employees is essential to the GE quality approach. GE is committed to providing opportunities and incentives for employees to focus their talents and energies on satisfying customers.

All GE employees are trained in the strategy, statistical tools and techniques of Six Sigma Quality. Training courses are offered at various levels:

• Quality Overview Seminars: Basic Six Sigma awareness.
• Team Training: Basic tool introduction to equip employees to participate on Six Sigma teams.
• Master Black Belt, Black Belt and Green Belt Training: In-depth quality training that includes high-level statistical tools, basic quality control tools, Change Acceleration Process and Flow technology tools.
• Design for Six Sigma (DFSS) Training: Prepares teams for the use of statistical tools to design it right the first time.

Quality is the responsibility of every employee. Every employee must be involved, motivated and knowledgeable if we are to succeed.
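The statistical core of Six Sigma can be made concrete with a small calculation. The sketch below is illustrative only and is not part of the GE material above: it converts a defect rate, expressed as defects per million opportunities (DPMO), into a process sigma level using the conventional 1.5-sigma shift. The function name and the sample figures are assumptions for the example.

# Illustrative sketch: DPMO to process sigma level (conventional 1.5-sigma shift).
from scipy.stats import norm

def sigma_level(dpmo: float) -> float:
    """Return the short-term sigma level implied by a long-term defect rate."""
    yield_fraction = 1.0 - dpmo / 1_000_000   # fraction of defect-free opportunities
    return norm.ppf(yield_fraction) + 1.5     # inverse normal plus the 1.5-sigma shift

print(round(sigma_level(3.4), 2))      # 3.4 DPMO -> 6.0, the classic Six Sigma target
print(round(sigma_level(66_807), 2))   # 66,807 DPMO -> 3.0, a typical "3 sigma" process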

5.7 QUALITY COUNCIL

Quality Control

Quality control may generally be defined as a system that is used to maintain a desired level of quality in a product or service. This task may be achieved through different measures such as planning, design, use of proper equipment and procedures, inspection, and taking corrective action in case a deviation is observed between the product, service or process output and a specified standard (ASQC 1983; Walsh et al. 1986). This general area may be divided into three main sub-areas, namely: off-line quality control, statistical process control, and acceptance sampling plans.

Off-Line Quality Control

Off-line quality control procedures deal with measures to select and choose controllable product and process parameters in such a way that the deviation between the product or process output and the standard will be minimized. Much of this task is accomplished through product and process design. The goal is to come up with a design, within the constraints of resources and environmental parameters, such that when production takes place the output meets the standard. Thus, to the extent possible, the product and process parameters are set before production begins. Principles of experimental design and the Taguchi method, discussed in a later chapter, provide information on off-line process control procedures.

Statistical Process Control

Statistical process control involves comparing the output of a process or a service with a standard and taking remedial actions in case of a discrepancy between the two. It also involves determining whether a process can produce a product that meets desired specifications or requirements.

For example, to control paperwork errors in an administrative department, information might be gathered daily on the number of errors. If the observed number exceeds some specified standard, then on identification of possible causes, action should be taken to reduce the number of errors. This may involve training the administrative staff, simplifying operations if the error is of an arithmetic nature, redesigning the form, or other appropriate measures.
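As an illustration of this kind of monitoring (a sketch of ours, not part of the original notes), the daily error counts just described can be tracked on a c-chart, whose centre line is the average count and whose 3-sigma control limits are c-bar plus or minus 3 times the square root of c-bar. The data values below are hypothetical.

# Illustrative sketch: a c-chart for daily paperwork-error counts.
import math

daily_errors = [4, 7, 3, 5, 6, 2, 8, 4, 5, 3]   # hypothetical daily counts

c_bar = sum(daily_errors) / len(daily_errors)     # centre line
ucl = c_bar + 3 * math.sqrt(c_bar)                # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))      # lower control limit, floored at zero

print(f"centre line = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for day, count in enumerate(daily_errors, start=1):
    if count > ucl or count < lcl:
        print(f"Day {day}: {count} errors is outside the limits; investigate causes.")

A point falling outside the limits is the signal, in the language of the paragraph above, that the observed number exceeds the standard and that possible causes should be identified.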

On-line statistical process control means that information is gathered about the product, process, or service while it is functional. When the output differs from a determined norm, corrective action is taken in that operational phase. It is preferable to take corrective actions on a real-time basis for quality control problems. This approach attempts to bring the system to an acceptable state as soon as possible, thus minimizing either the number of unacceptable items produced or the time over which undesirable service is rendered.

One question that may come to mind is: Shouldn’t all processes be controlled on an off-line basis? The answer is yes, to the extent possible. The prevailing theme of quality control is that quality has to be designed into the product or service; it cannot be inspected into it. However, in spite of taking off-line quality control measures, there may be a need for on-line quality control, because variation in the manufacturing stage of a product or the delivery stage of a service is inevitable. Therefore, some rectifying measures are needed in this phase. Ideally, a combination of off-line and on-line quality control measures will lead to a desirable level of operation.

Acceptance Sampling Plans

This branch of quality control deals with inspection of the product or service. When 100 percent inspection of all items is not feasible, a decision has to be made on how many items should be sampled or whether the batch should be sampled at all. The information obtained from the sample is used to decide whether to accept or reject the entire batch or lot. In the case of attributes, one parameter is the acceptable number of nonconforming items in the sample. If the observed number of nonconforming items is less than or equal to this number, the batch is accepted. This is known as the acceptance number. In the case of variables, one parameter may be the proportion of items in the sample that are outside the specifications. This proportion would have to be less than or equal to a standard for the lot to be accepted.

A plan that determines the number of items to sample and the acceptance criteria for the lot, based on meeting certain stipulated conditions, is known as an acceptance sampling plan.

Let’s consider a case of attribute inspection where an item is classified as conforming or not conforming to a specified thickness of 12 ± 0.4 mm. Suppose the items come in batches of 500 units. If an acceptance sampling plan with a sample size of 50 and an acceptance number of 3 is specified, then the interpretation of the plan is as follows. Fifty items will be randomly selected by the inspector from the batch of 500 items. Each of the 50 items will then be inspected and classified as conforming or not conforming. If the number of nonconforming items in the sample is 3 or less, the entire batch of 500 items is accepted. However, if the number of nonconforming items is greater than 3, the batch is rejected. Alternatively, the rejected batch may be screened; that is, each item is inspected and nonconforming ones are removed.
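The plan just described can also be evaluated numerically. The sketch below (ours, not part of the original notes) computes the probability that the plan accepts a lot for a few assumed numbers of nonconforming items in the lot; since the 50 items are drawn without replacement from 500, the count of nonconforming items in the sample follows a hypergeometric distribution.

# Illustrative sketch: probability of acceptance for the n = 50, c = 3 plan above.
from scipy.stats import hypergeom

N, n, c = 500, 50, 3   # lot size, sample size, acceptance number

def prob_accept(D: int) -> float:
    """Probability of accepting a lot that contains D nonconforming items."""
    # scipy's hypergeom.cdf takes (k, population size, successes in population, sample size)
    return hypergeom.cdf(c, N, D, n)

for D in (5, 10, 25, 50):   # assumed lot quality levels
    print(f"{D:3d} nonconforming ({D/N:.0%} of lot): P(accept) = {prob_accept(D):.3f}")

Tabulating or plotting P(accept) against lot quality in this way traces out the plan’s operating characteristic (OC) curve.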

Benefits of Quality Control

The goal of most companies is to conduct business in such a manner that an acceptable rate of return is obtained by the shareholders. What must be considered in this setting is the short-term goal versus the long-term goal. If the goal is to show a certain rate of return this coming year, this may not be an appropriate strategy, because the benefits of quality control may not be realized immediately. However, from a long-term perspective, a quality control system may lead to a rate of return that is not only better but is also sustainable.

One of the drawbacks of the manner in which many U.S. companies operate is that the output of managers is measured in short time frames. It is difficult for a manager to show, say, a 5 percent increase in the rate of return in the quarter after implementing a quality system. Top management may then doubt the benefits of quality control.

The advantages of a quality control system, however, become obvious in the long run. First and foremost is the improvement in the quality of products and services. Production improves because a well-defined structure for achieving production goals is present. Second, the system is continually evaluated and modified to meet the changing needs of the customer. Therefore, a mechanism exists to rapidly modify product or process design, manufacture, and service to meet customer requirements so that the company remains competitive. Third, a quality control system improves productivity, which is a goal of every organization. It reduces the production of scrap and rework, thereby increasing the number of usable products. Fourth, such a system reduces costs in the long run. The notion that improved productivity and cost reduction do not go hand in hand is a myth. On the contrary, this is precisely what a quality control system does achieve. With the production of fewer nonconforming items, total costs decrease, which may lead to a reduced selling price and thus increased competitiveness. Fifth, with improved productivity, the lead time for producing parts and subassemblies is reduced, which results in improved delivery dates. Once again, quality control keeps customers satisfied. Meeting their needs on a timely basis helps sustain a good relationship. Last, but not least, a quality control system maintains an “improvement” environment where everyone strives for improved quality and productivity. There is no end to this process; there is always room for improvement. A company that adopts this philosophy and uses a quality control system to help meet this objective is one that will stay competitive.

5.8 EMPLOYEE INVOLVEMENT

Employee involvement


In a Harvard Business Review article, David Gumpert described a small “microbrewery” where the head of the company attributed their success to a loyal, small, and involved work force. He found that keeping the operation small strengthened employee cohesiveness and gave them a feeling of responsibility and pride. This anecdote tells a lot about small groups and how they can impact motivation, productivity, and quality. If quality is the objective, employee involvement in small groups and teams will greatly facilitate the result for two reasons: motivation and productivity.

The theory of motivation, but not necessarily its practice, is fairly mature, and there is substantial proof that it can work. By oversimplifying a complex theory, it can be shown why team membership is an effective motivational device that can lead to improved quality.

Teams improve productivity as a result of greater motivation and of reducing the overlap and lack of communication found in a functionally based classical structure characterized by territorial battles and parochial outlooks. There is always the danger that functional specialists, if left to their own devices, may pursue their own interests with little regard for the overall company mission. Team membership, particularly in a cross-functional team, reduces many of these barriers and encourages an integrative systems approach to the achievement of common objectives, those that are common to both the company and the team. There are many success stories. To cite a few:

- Globe Metallurgical Inc., the first small company to win the Baldrige Award, had a 380 percent increase in productivity, which was attributed primarily to self-managed work teams.

- The partnering concept requires a new corporate culture of participative management and teamwork throughout the entire organization. Ford increased productivity 28 percent by using the team concept with the same workers and equipment.

- Harleysville Insurance Company’s Discovery program provides synergism resulting from the team approach. The program produced a cost saving of $3.5 million, along with enthusiasm and involvement among employees.

- At Decision Data Computer Corporation, middle management is trained to support “Pride Teams”.

- Martin Marietta Electronics and Missiles Group has achieved success with performance measurement teams (PMTs).

- Publishers Press has achieved significant productivity improvements and attitude change from the company’s process improvement teams (PITs).

- Florida Power and Light Company, the first company outside Japan to win the Deming Prize, has long had quality improvement teams as a fundamental component of its quality improvement program.

5.9 MOTIVATION

In psychology, motivation refers to the initiation, direction, intensity and persistence of behavior. Motivation is a temporal and dynamic state that should not be confused with personality or emotion. Motivation is having the desire and willingness to do something. A motivated person can be reaching for a long-term goal, such as becoming a professional writer, or a more short-term goal, like learning how to spell a particular word. Personality invariably refers to more or less permanent characteristics of an individual’s state of being (e.g., shy, extrovert, conscientious). As opposed to motivation, emotion refers to temporal states that do not immediately link to behavior (e.g., anger, grief, happiness).

Drive theory

There are a number of drive theories. The Drive Reduction Theory grows out of the concept that we have certain biological needs, such as hunger. As time passes, the strength of the drive increases if it is not satisfied. Then, as we satisfy that drive by fulfilling its desire, such as by eating, the drive’s strength is reduced. It is based on the theories of Freud and on the idea of negative feedback systems, such as a thermostat.


There are several problems, however, that leave the validity of the Drive Reduction Theory open for debate. The first problem is that it does not explain how secondary reinforcers reduce drive. For example, money does not satisfy any biological or psychological need but reduces drive on a regular basis through a pay check (see: second-order conditioning). Secondly, if the drive reduction theory held true, we would not be able to explain how a hungry human being can prepare a meal without eating the food before the end of the preparation. Supposedly, the drive to satiate one’s hunger would drive a person to consume the food; however, we prepare food on a regular basis and “ignore” the drive to eat. Thirdly, a drive cannot be measured and therefore cannot be proven to exist in the first place (Barker 2004).

Rewards and incentives

A reward is that which is given following the occurrence of a behavior with the intention of acknowledging the positive nature of that behavior, and often with the additional intent of encouraging it to happen again. The definition of reward is not to be confused with the definition of reinforcer, which includes a measured increase in the rate of a desirable behavior following the addition of something to the environment. There are two kinds of rewards, extrinsic and intrinsic. Extrinsic rewards are external to, or outside of, the individual; for example, praise or money. Intrinsic rewards are internal to, or within, the individual; for example, satisfaction or accomplishment.

It was previously thought that the two types of motivation (intrinsic and extrinsic) were additive and could be combined to produce the highest level of motivation. Some authors differentiate between two forms of intrinsic motivation: one based on enjoyment, the other on obligation. In this context, obligation refers to motivation based on what an individual thinks ought to be done. For instance, a feeling of responsibility for a mission may lead to helping others beyond what is easily observable, rewarded, or fun.

INTRINSIC MOTIVATION

Intrinsic motivation is evident when people engage in an activity for its own sake, without some obvious external incentive present. A hobby is a typical example.


Intrinsic motivation has been intensely studied by educational psychologists since the 1970s, and numerous studies have found it to be associated with high educational achievement and enjoyment by students.

There is currently no “grand unified theory” to explain the origin or elements of intrinsic motivation. Most explanations combine elements of Bernard Weiner’s attribution theory, Bandura’s work on self-efficacy, and other studies relating to locus of control and goal orientation. Thus it is thought that students are more likely to experience intrinsic motivation if they:

• Attribute their educational results to internal factors that they can control (e.g. the amount of effort they put in, not ‘fixed ability’).

• Believe they can be effective agents in reaching desired goals (e.g. the results are not determined by dumb luck).

• Are motivated towards deep ‘mastery’ of a topic, instead of just rote-learning ‘performance’ to get good grades.

Note that the idea of reward for achievement is absent from this model of intrinsic motivation, since rewards are an extrinsic factor.

In knowledge-sharing communities and organizations, people often cite altruistic reasons for their participation, including contributing to a common good, a moral obligation to the group, mentorship or ‘giving back’. This model of intrinsic motivation has emerged from three decades of research by hundreds of educationalists and is still evolving.

EXTRINSIC MOTIVATION

Traditionally, extrinsic motivation has been used to motivate employees:

• Tangible rewards such as payments, promotions (or punishments).
• Intangible rewards such as praise or public commendation.

Within economies transitioning from assembly lines to service industries, the importance of intrinsic motivation rises:

• The further jobs move away from pure assembly lines, the harder it becomes to measure individual productivity. This effect is most pronounced for knowledge workers and amplified in teamwork. A lack of objective or universally accepted criteria for measuring individual productivity may make individual rewards arbitrary.


• Since by definition intrinsic motivation does not rely on financial incentives, it is cheap in terms of dollars but expensive in that the inherent rewards of the activity must be internalized before they can be experienced as intrinsically motivating.

However, intrinsic motivation is no panacea for employee motivation. Problems include:

• For many commercially viable activities it may not be possible to find any, or enough, intrinsically motivated people.

• Intrinsically motivated employees need to eat, too. Other forms of compensation remain necessary.

• Intrinsic motivation is easily destroyed. For instance, additional extrinsic motivation is known to have a negative impact on intrinsic motivation in many cases; perceived injustice in awarding such external incentives even more so.

Telic and Paratelic motivational modes

Psychologist Michael Apter’s studies of motivation led him to describe what he called the “telic” (from Greek telos or “goal”) and “paratelic” motivational modes, or states. In the telic state, a person is motivated primarily by a particular goal or objective, such as earning payment for work done. In the paratelic mode, a person is motivated primarily by the activity itself: intrinsic motivation.

Punishment

Punishment, generally speaking, is an unfavorable condition introduced into the environment to eliminate undesirable behavior. It is used as one of the measures of behavior modification. An action that results in punishment will discourage repetition of that action.

Aggression

Aggression is generally used in the civil service area, where units are devoted to maintaining law and order. In some environments, officers are grounded by their superiors in order to perform better and to stay out of illegal activities.

Stress

Stress works in a strange way to motivate, like reverse psychology. When under stress and in difficult situations, a person feels pressured. This may trigger feelings of under-achieving, which results in a reverse mindset: striving to achieve. This is almost sub-conscious. The net motivation under stress may drive a person to work harder in order to “compensate” for his feelings.

Secondary goals

Important biological needs tend to generate more powerful emotions, and thus more powerful motivation, than secondary goals. This is described in models like Abraham Maslow’s hierarchy of needs. A distinction can also be made between direct and indirect motivation: in direct motivation, the action satisfies the need; in indirect motivation, the action satisfies an intermediate goal, which can in turn lead to the satisfaction of a need. In work environments, money is typically viewed as a powerful indirect motivation, whereas job satisfaction and a pleasant social environment are more direct motivations. However, this example highlights well that an indirect motivational factor (money) towards an important goal (having food, clothes, etc.) may well be more powerful than the direct motivation provided by an enjoyable workplace.

Coercion

The most obvious form of motivation is coercion, where the avoidance of pain or other negative consequences has an immediate effect. When such coercion is permanent, it is considered slavery. While coercion is considered morally reprehensible in many philosophies, it is widely practiced on prisoners, students in mandatory schooling, and in the form of conscription. Critics of modern capitalism charge that without social safety networks, wage slavery is inevitable. However, many capitalists such as Ayn Rand have been very vocal against coercion. Successful coercion can sometimes take priority over other types of motivation. Self-coercion is rarely substantially negative (typically only negative in the sense that it forgoes a positive, such as an expensive dinner or a period of relaxation); however, it is interesting in that it illustrates how lower levels of motivation may sometimes be tweaked to satisfy higher ones.

SOCIAL AND SELF REGULATION

Self control

The self-control of motivation is increasingly understood as a subset of emotional intelligence; a person may be highly intelligent according to a more conservative definition (as measured by many intelligence tests), yet unmotivated to dedicate this intelligence to certain tasks. Victor Vroom’s “expectancy theory” provides an account of when people will decide whether to exert self-control to pursue a particular goal. Self-control is often contrasted with automatic processes of stimulus-response, as in the methodological behaviorist’s paradigm of J. B. Watson.

Drives and desires can be described as a deficiency or need that activates behaviour that is aimed at a goal or an incentive. These are thought to originate within the individual and may not require external stimuli to encourage the behaviour. Basic drives could be sparked by deficiencies such as hunger, which motivates a person to seek food; whereas more subtle drives might be the desire for praise and approval, which motivates a person to behave in a manner pleasing to others.

By contrast, the role of extrinsic rewards and stimuli can be seen in the example of training animals by giving them treats when they perform a trick correctly. The treat motivates the animals to perform the trick consistently, even later when the treat is removed from the process.

Business Application

At the lower levels of Maslow’s hierarchy of needs, such as physiological needs, money is a motivator; however, it tends to have a motivating effect on staff that lasts only for a short period (in accordance with Herzberg’s two-factor model of motivation). At the higher levels of the hierarchy, praise, respect, recognition, empowerment and a sense of belonging are far more powerful motivators than money, as both Abraham Maslow and Douglas McGregor’s Theory X and Theory Y have demonstrated vividly.

Maslow has money at the lowest level of the hierarchy and shows other needs are better motivators for staff. McGregor places money in his Theory X category and feels it is a poor motivator. Praise and recognition are placed in the Theory Y category and are considered stronger motivators than money.

• Motivated employees always look for better ways to do a job.
• Motivated employees are more quality oriented.
• Motivated workers are more productive.


5.10 EMPOWERMENT

Empowerment

Empowerment means investing people with authority. Its purpose is to tap the enormous reservoir of potential contribution that lies within every worker.

Empowerment is an environment in which people have the ability, the confidence and the commitment to take the responsibility and ownership to improve the process and initiate the necessary steps to satisfy customer requirements within well-defined boundaries in order to achieve organizational values and goals.

There are two steps to empowerment. One is to arm people to be successful through coaching, guidance and training. The second is letting people do things by themselves.

Empowerment should not be confused with delegation or job enrichment. Delegation refers to distributing and entrusting work to others. Employee empowerment requires that the individual be held responsible for accomplishing a whole task.

The principles of empowering people are given here.

1. Tell people what their responsibilities are.
2. Give authority that is commensurate with responsibility.
3. Set standards for excellence.
4. Render training.
5. Provide knowledge and information.
6. Trust them.
7. Allow them to commit mistakes.
8. Treat them with dignity and respect.

The empowerment matrix is shown here.


FIGURE 5.1 EMPOWERMENT MATRIX

One of the dimensions of empowerment is capability. Employees must have the ability, skills and knowledge needed to do their jobs, as well as the willingness to co-operate.


A key dimension of empowerment is alignment. All employees need to know the organization’s mission, vision, values, policies, objectives and methodologies. Fully aligned employees not only know their roles, they are also dedicated to attaining the goals.

Once the management has developed empowerment capabilities and alignment, it can unleash the power, creativity and resourcefulness of the workforce. This is not possible without trust. Employees need to trust management and feel that management trusts them. Mutual trust therefore completes the picture required to build an empowered workforce.

5.11 RECOGNITION AND REWARD

Quality Management Philosophies:

W. Edwards Deming is best known for helping to lead the Japanese manufacturing sector out of the ruins of World War II to becoming a major presence in the world market. The highest quality award in Japan, the Deming Prize, is named in his honor. He is also known for his 14 points (a new philosophy for competing on the basis of quality), for the Deming Chain Reaction, and for the Theory of Profound Knowledge. Read more about Deming's Theory of Profound Knowledge at the MAAW web site. He also modified the Shewhart cycle (Plan, Do, Check, Act) into what is now referred to as the Deming Cycle (Plan, Do, Study, Act). Beginning in the early 1980s he finally came to prominence in the United States and played a major role in quality becoming a major competitive issue in American industry. His book, Out of the Crisis (1986), is considered a quality classic. Read more about Dr. Deming and his philosophy at the W. Edwards Deming Institute home page.

Joseph Juran also assisted the Japanese in their reconstruction. Juran first became well known in the quality field in the U.S. as the editor of the Quality Control Handbook (1951) and later for his paper introducing the quality trilogy. While Deming's approach is revolutionary in nature (i.e., throw out your old system and "adopt the new philosophy" of his 14 points), Juran's approach is more evolutionary (i.e., we can work to improve your current system). Deming refers to statistics as being the language of business, while Juran says that money is the language of business and quality efforts must be communicated to management in their language. Read more about Dr. Juran and his philosophy at the Juran Institute web site.

Philip Crosby came to national prominence with the publication of his book, Quality is Free. He established the Absolutes of Quality Management, which include "the only performance standard (that makes any sense) is Zero Defects," and the Basic Elements of Improvement. Read more at the Philip Crosby Associates II, Inc. home page.


Armand Feigenbaum is credited with the creation of the idea of total quality control in his 1951 book, Quality Control—Principles, Practice, and Administration, and in his 1956 article, "Total Quality Control." The Japanese adopted this concept and renamed it Company-Wide Quality Control, while it has evolved into Total Quality Management (TQM) in the U.S.

There are other major contributors to the quality field as we know it today. The list of major contributors would include Walter Shewhart, Shigeo Shingo, Genichi Taguchi, Kaoru Ishikawa, and David Garvin, among others.

Quality Practice Award

The Quality Practice Award (QPA) is an award that is given to general practitioner practices in the United Kingdom to show recognition for high quality patient care by all members of staff in the team. It is awarded by the Royal College of General Practitioners (RCGP).

For the practice to achieve the award, evidence has to be provided that conforms to a set of criteria in the following areas:

Practice Profile
Availability
Clinical Care
Communication
Continuity of Care
Equipment and Minor Surgery
Health Promotion
Information Technology
Medical Records
Nursing and Midwifery
Practice Management
Other Professional Staff
Patient Issues
Premises
Prescribing/Repeat Prescribing
The Practice as a Learning Organisation


After the evidence is completed, an onsite visit is arranged and takes place during a normal working day to assess the practice and interview the members of staff.

5.12 INFORMATION TECHNOLOGY

Information Technology (IT), as defined by the Information Technology Association of America (ITAA), is: "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." In short, IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit and retrieve information securely.

In this definition, the term "information" can usually be replaced by "data" without loss of meaning. Recently it has become popular to broaden the term to explicitly include the field of electronic communication, so that people tend to use the abbreviation ICT (Information and Communication Technology).

The term "information technology" came about in the 1970s. Its basic concept, however, can be traced back even further. Throughout the 20th century, an alliance between the military and various industries has existed in the development of electronics, computers, and information theory. The military has historically driven such research by providing motivation and funding for innovation in the field of mechanization and computing.

The first commercial computer was the UNIVAC I. It was designed by J. Presper Eckert and John Mauchly for the U.S. Census Bureau. The late 70s saw the rise of microcomputers, followed closely by IBM's personal computer in 1981. Since then, four generations of computers have evolved. Each generation represented a step that was characterized by hardware of decreased size and increased capabilities. The first generation used vacuum tubes, the second transistors, and the third integrated circuits. The fourth (and current) generation uses more complex systems such as very-large-scale integration.

Information technology refers to all forms of technology applied to processing, storing, and transmitting information in electronic form. The physical equipment used for this purpose includes computers, communications equipment and networks, fax machines, and even electronic pocket organizers. Information systems execute organized procedures that process and/or communicate information. We define information as a tangible or intangible entity that serves to reduce uncertainty about some state or event.


Data can originate from the internal operations of the firm and from external entities such as suppliers or customers. Data also come from external databases and services; for example, organizations purchase a great deal of marketing and competitive information. Brokerage firms provide a variety of research on different companies to clients.

An information system usually processes these data in some way and presents the results to users. With the easy availability of personal computers, users often process the output of a formal system themselves in an ad hoc manner. Human interpretation of information is extremely important in understanding how an organization reacts to the output of a system. The same results may mean different things to two managers. A marketing manager may use statistical programs and graphs to look for trends or problems with sales. A financial manager may see a problem with cash flow given the same sales data. The recipient of a system's output may be an individual, as in the example of the marketing manager, or it may be a workgroup.

Many systems are used routinely for control purposes in the organization and require limited decision making. The accounts receivable application generally runs with little senior management oversight. It is a highly structured application with rules that can be followed by a clerical staff. A department manager handles exceptions. The output of some systems may be used as a part of a program or strategy. The system itself could be implementing a corporate strategy, such as simplifying the customer order process. A system might also help managers make decisions.

Information technology, however, extends far beyond the computational capabilities of computers. Today computers are used extensively for communications as well as for their traditional roles of data storage and computation. Many computers are connected together using various kinds of communications lines to form networks. There are more than 43 million host computers, for example, on the Internet, and over 100 million computers around the world access it, an estimated 70 million of which are in the U.S. Through a network, individuals and organizations are linked together, and these linkages are changing the way we think about doing business. Boundaries between firms are breaking down from the electronic communications link provided by networks. Firms are willing to provide direct access to their systems for suppliers and customers. If the first era of computing was concerned with computation, the second era is about communications.


The Manager and IT

Managers are involved in a wide range of decisions about technology, decisions that are vital to the success of the organization. Some 45 to 50 percent of capital investment in the U.S. is for information, according to the Department of Commerce and other sources. Business Week estimates that there are 63 PCs per 100 workers in the U.S. (including machines at home), and others have estimated that one in three U.S. workers uses a computer on the job. A recent survey of 373 senior executives at large U.S. and Japanese companies found that 64 percent of the U.S. managers said they must use computers in their jobs. Other surveys have suggested that as many as 88 percent of managers use computers. One estimate is that in 1996, U.S. firms spent $500 billion on information technology while the IT bill for the world was $1 trillion. Because this technology is so pervasive, managers at all levels and in all functional areas of the firm are involved with IT. Managers are challenged with decisions about:

• The use of technology to design and structure the organization.

• The creation of alliances and partnerships that include electronic linkages. There is a growing trend for companies to connect with their customers and suppliers, and often with support service providers like law firms.

• The selection of systems to support different kinds of workers. Stockbrokers, traders and others use sophisticated computer-based workstations in performing their jobs. Choosing a vendor, designing the system, and implementing it are major challenges for management.

• The adoption of groupware or group-decision support systems for workers who share a common task. In many firms, the records of shared materials constitute one type of knowledge base for the corporation.

• Determining a World Wide Web strategy: The Internet and World Wide Web offer ways to provide information, communicate, and engage in commerce. A manager must determine if and how the firm can take advantage of the opportunities provided by the Web.

• Routine transactions processing systems: These applications handle the basic business transactions, for example, the order cycle from receiving a purchase order through shipping goods, invoicing, and receipt of payment. These routine systems must function for the firm to continue in business. More often today, managers are eliminating physical documents in transactions processing and substituting electronic transmission over networks.


• Personal support systems: Managers in a variety of positions use personal computers and networks to support their work.

• Reporting and control: Managers have traditionally been concerned with controlling the organization and reporting results to management, shareholders, and the public. The information needed for reporting and control is contained in one or more databases on an internal computer network. Many reports are filed with the government and can be accessed through the Internet and the World Wide Web, including many 10-K filings and other SEC-required corporate reports.

• Automated production processes: One of the keys to competitive manufacturing is increasing efficiency and quality through automation. Similar improvements can be found in the service sectors through technologies such as image processing, optical storage, and workflow processing in which paper is replaced by electronic images shared by staff members using networked workstations.

• Embedded products: Increasingly, products contain embedded intelligence. A modern automobile may contain six or more computers on chips, for example, to control the engine and climate, compute statistics, and manage an antilock brake and traction control system. A colleague remarked a few years ago that his washing machine today contained more logic than the first computer he worked on.

Major Trends

In the past few years, six major trends have drastically altered the way organizations use technology. These trends make it imperative that a manager become familiar with both the use of technology and how to control it in the organization.

1. The use of technology to transform the organization: The cumulative effect of all the technology firms are installing is to transform the organization and allow new types of organizational structures. Sometimes the transformation occurs slowly, as when one unit in an organization begins to use groupware. In other cases, like Kennametal or Oticon (a Danish firm), the firm is totally different after the application of technology. This ability of information technology to transform organizations is one of the most powerful tools available to a manager today.


2. The use of information processing technology as a part of corporate strategy: Firms like Brun Passot are implementing information systems that give them an edge on the competition. Firms that prosper in the coming years will be managed by individuals who are able to develop creative, strategic applications of the technology.

3. Technology as a pervasive part of the work environment: From the largest corporations to the smallest business, we find technology is used to reduce labor, improve quality, provide better customer service, or change the way the firm operates. Factories use technology to design parts and control production. The small auto-repair shop uses a packaged personal computer system to prepare work orders and bills for its customers. The body shop uses a computer-controlled machine with lasers to take measurements so it can check the alignment of automobile suspensions, frames, and bodies. In this text, we shall see a large number of examples of how technology is applied to change and improve the way we work.

4. The use of technology to support knowledge workers: The personal computer has tremendous appeal. It is easy to use and has a variety of powerful software programs available that can dramatically increase the user's productivity. When connected to a network within the organization and to the Internet, it is a tremendous tool for knowledge workers.

5. The evolution of the computer from a computational device to a medium for communications: Computers first replaced punched card equipment and were used for purely computational tasks. From the large centralized computers, the technology evolved into desktop, personal computers. When users wanted access to information stored in different locations, companies developed networks to link terminals and computers to other computers. These networks have grown and become a medium for internal and external communications with other organizations. For many workers today, the communications aspects of computers are more important than their computational capabilities.

6. The growth of the Internet and World Wide Web: The Internet offers a tremendous amount of information on-line, information that you can search from your computer. Networks link people and organizations together, greatly speeding up the process of communications. The Internet makes expertise available regardless of time and distance, and provides access to information at any location connected to the Internet. Companies can expand their geographic scope electronically without having to open branch offices. The Internet leads naturally to electronic commerce, creating new ways to market, contract, and complete transactions.


What does all this mean for the management student? The manager must be a competent user of computers and the Internet, and learn to manage information technology. The personal computer connected to a network is as commonplace in the office as the telephone has been for the past 75 years. Managers today are expected to make information technology an integral part of their jobs. It is the manager, not the technical staff member, who must come up with the idea for a system, allocate resources, and see that systems are designed well to provide the firm with a competitive edge. You will have to recognize opportunities to apply technology and then manage the implementation of the new technology. The success of information processing in the firm lies more with top and middle management than with the information services department.

Information technology today

Today, the term Information Technology has ballooned to encompass many aspects of computing and technology, and the term is more recognizable than ever before. The Information Technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include:

Data Management

Computer Networking

Database Systems Design

Software Design

Management Information Systems

Systems Management

History of Information Technology


• Four basic periods: Characterized by a principal technology used to solve the input, processing, output and communication problems of the time:

1. Premechanical,
2. Mechanical,
3. Electromechanical, and
4. Electronic

A. The Premechanical Age: 3000 B.C. -1450 A.D.

1. Writing and Alphabets—communication.

2. Paper and Pens—input technologies

3. Books and Libraries: Permanent Storage Devices.

4. The First Numbering Systems.

5. The First Calculators: The Abacus.

B. The Mechanical Age: 1450 - 1840

1. The First Information Explosion.

2. The first general purpose “computers”

3. Slide Rules, the Pascaline and Leibniz’s Machine

4. Babbage's Engines: the Difference Engine and the Analytical Engine.

C. The Electromechanical Age: 1840 - 1940.

The discovery of ways to harness electricity was the key advance made during this period. Knowledge and information could now be converted into electrical impulses.

1. The Beginnings of Telecommunication

1. Voltaic Battery - Late 18th century.

2. Telegraph - early 1800s.


3. Morse Code - developed in 1835 by Samuel Morse with dots and dashes.

4. Telephone and Radio - The telephone was followed by the discovery that electrical waves travel through space and can produce an effect far from the point at which they originated, which led to the invention of the radio by Guglielmo Marconi in 1894.

5. Electromechanical Computing - Herman Hollerith and IBM. Howard Aiken, a Ph.D. student at Harvard University, built the Mark I, completed in January 1942. It was 8 feet tall, 51 feet long, 2 feet thick, weighed 5 tons and used about 750,000 parts.

D. The Electronic Age: 1940 - Present.

1. First tries.

* Early 1940s

* Electronic vacuum tubes.

2. Eckert and Mauchly.

1. The First High-Speed, General-Purpose Computer Using Vacuum Tubes:

Electronic Numerical Integrator and Computer (ENIAC)

The ENIAC team (Feb 14, 1946). Left to right: J. Presper Eckert, Jr.; John Grist Brainerd; Sam Feltman; Herman H. Goldstine; John W. Mauchly; Harold Pender; Major General G. L. Barnes; Colonel Paul N. Gillon.

• Electronic Numerical Integrator and Computer (ENIAC)

• 1946.

• Used vacuum tubes (not mechanical devices) to do its calculations.

• Hence, first electronic computer.

• Developers: John Mauchly, a physicist, and J. Presper Eckert, an electrical engineer

• The Moore School of Electrical Engineering at the University of Pennsylvania

• Funded by the U.S. Army.

• But it could not store its programs (its set of instructions).

2. The First Stored-Program Computer(s)


• In the early 1940s, Mauchly and Eckert began to design the EDVAC, the Electronic Discrete Variable Automatic Computer.

• John von Neumann’s influential report in June 1945:

• "The Report on the EDVAC"

• British scientists used this report and outpaced the Americans.

• Max Newman headed up the effort at Manchester University

• Where the Manchester Mark I went into operation in June 1948, becoming the first stored-program computer.

• Maurice Wilkes, a British scientist at Cambridge University, completed the EDSAC (Electronic Delay Storage Automatic Calculator) in 1949, two years before EDVAC was finished.

• Thus, EDSAC became the first stored-program computer in general use (i.e.,not a prototype).

4. The First General-Purpose Computer for Commercial Use: Universal Automatic Computer (UNIVAC)

• In the late 1940s, Eckert and Mauchly began the development of a computer called UNIVAC (Universal Automatic Computer).

• Remington Rand.

• First UNIVAC delivered to Census Bureau in 1951.

• But a machine called LEO (Lyons Electronic Office) went into action a few months before UNIVAC and became the world's first commercial computer.

THE FOUR GENERATIONS OF DIGITAL COMPUTING

1. The First Generation (1951-1958)

1. Vacuum tubes as their main logic elements.

2. Punch cards to input and externally store data.

3. Rotating magnetic drums for internal storage of data and programs

1. Programs written in

1. Machine language

2. Assembly language


- Requires an assembler to translate into machine language.

2. The Second Generation (1959-1963).

1. Vacuum tubes replaced by transistors as main logic element.

1. AT&T’s Bell Laboratories, in the 1940s

2. Crystalline mineral materials called semiconductors could be used in the design of a device called a transistor

2. Magnetic tape and disks began to replace punched cards as external storage devices.

3. Magnetic cores (very small donut-shaped magnets that could be polarized in one of two directions to represent data) strung on wire within the computer became the primary internal storage technology.

1. High-level programming languages

e.g., FORTRAN and COBOL

3. The Third Generation (1964-1979).

1. Individual transistors were replaced by integrated circuits.

2. Magnetic tape and disks completely replaced punch cards as external storage devices.

3. Magnetic core internal memories began to give way to a new form, metal oxide semiconductor (MOS) memory, which, like integrated circuits, used silicon-backed chips.

• Operating systems

• Advanced programming languages like BASIC developed.

• Which is where Bill Gates and Microsoft got their start in 1975.

4. The Fourth Generation (1979 - Present).

1. Large-scale and very large-scale integrated circuits (LSI and VLSI)

2. Microprocessors that contained memory, logic, and control circuits (an entire CPU = Central Processing Unit) on a single chip.

• Which allowed for home-use personal computers or PCs, like the Apple (II and Mac) and the IBM PC.

• Apple II released to public in 1977, by Stephen Wozniak and Steven Jobs.


Initially sold for $1,195 (without a monitor); had 16K RAM.

• First Apple Mac released in 1984.

• IBM PC introduced in 1981.

o Debuts with MS-DOS (Microsoft Disk Operating System)

• Fourth generation language software products

• e.g., VisiCalc, Lotus 1-2-3, dBase, Microsoft Word, and many others.

• Graphical User Interfaces (GUI) for PCs arrive in early 1980s

Transforming Organizations

How is information technology changing organizations? One impact of IT is its use to develop new organizational structures. The organization that is most likely to result from the use of these variables is the T-Form or Technology-Form organization, an organization that uses IT to become highly efficient and effective.

The firm has a flat structure made possible by using e-mail and groupware (programs that help co-ordinate people with a common task to perform) to increase the span of control and reduce managerial hierarchy. Employees co-ordinate their work with the help of electronic communications and linkages. Supervision of employees is based on trust, because there are fewer face-to-face encounters with subordinates and colleagues than in today's organization. Managers delegate tasks and decision-making to lower levels of management, and information systems make data available at the level of management where it is needed to make decisions. In this way, the organization provides a fast response to competitors and customers. Some members of the organization primarily work remotely without having a permanent office assigned.

The company's technological infrastructure features networks of computers. Individual client workstations connect over a network to larger computers that act as servers. The organization has an internal Intranet, and internal client computers are connected to the Internet so members of the firm can link to customers, suppliers, and others with whom they need to interact. They can also access the huge repository of information contained on the Internet and the firm's own Intranet.

Technology-enabled firms feature highly automated production and electronic information handling to minimize the use of paper, and rely extensively on images and optical data storage. Technology is used to give workers jobs that are as complete as possible. In the office, companies will convert assembly line operations for processing documents to a series of tasks that one individual or a small group can perform from a workstation.


The firm also adopts and uses electronic agents, a kind of software robot, to perform a variety of tasks over networks.

These organizations use communications technology to form temporary task forces focused on a specific project. Technology like e-mail and groupware facilitates the work of these task forces. These temporary workgroups may include employees of customers, suppliers, and/or partner corporations; they form a virtual team that meets electronically to work on a project.

The organization is linked extensively with customers and suppliers. There are numerous electronic customer/supplier relationships. These linkages increase responsiveness, improve accuracy, reduce cycle times, and reduce the amount of overhead when firms do business with each other. Suppliers access customer computers directly to learn of their needs for materials, then deliver raw materials and assemblies to the proper location just as they are needed. Customers pay many suppliers as the customer consumes materials, dispensing with invoices and other documents associated with a purchase transaction.

The close electronic linking of companies doing business together creates virtual components, where traditional parts of the organization appear to exist but in reality exist in a novel or unusual manner. For example, the traditional inventory of raw materials and subassemblies is likely not to be owned or stored by a manufacturing firm. This virtual inventory actually exists at suppliers' locations. Possibly the subassemblies will not exist at all; suppliers will build them just in time to provide them to the customer. From the customer's standpoint, however, it appears that all needed components are in inventory because suppliers are reliable partners in the production process.

This model of a technology-enabled firm shows the extent to which managers can apply IT to transforming the organization. The firms that succeed in the turbulent environment of the 21st century will take advantage of information technology to create innovative organizational structures. They will use IT to develop highly competitive products and services, and will be connected in a network with their customers and suppliers. The purpose of this book is to prepare you to manage in this technologically sophisticated environment of the 21st century.

The Challenge of Change

A major feature of information technology is the change that IT brings. Those who speak of a revolution from technology are really talking about change. Business and economic conditions change all the time; a revolution is a discontinuity, an abrupt and dramatic series of changes in the natural evolution of economies. In the early days of technology, change was gradual and often not particularly significant.


The advent of personal computers accelerated the pace of change, and when the Internet became available for profit-making activities around 1992, change became exponential and revolutionary. To a great extent, your study of information technology is a study of change.

In what way can and does technology change the world around us? The impact of IT is broad and diverse; some of the changes it brings are profound. Information technology has demonstrated an ability to change or create the following:

• Within Organizations

Create new procedures, workflows, the knowledge base, products and services, and communications.

• Organizational structure

Facilitate new reporting relationships, increased spans of control, local decision rights, supervision, the formation of divisions, geographic scope, and "virtual" organizations.

• Interorganizational relations

Create new customer-supplier relations, partnerships, and alliances.

• The economy

Alter the nature of markets through electronic commerce, disintermediation, new forms of marketing and advertising, partnerships and alliances, the cost of transactions, and modes of governance in customer-supplier relationships.

• Education

Enhance "on campus" education through videoconferencing, e-mail, electronic meetings, groupware, and electronic guest lectures.

Facilitate distance learning through e-mail, groupware, and videoconferencing. Provide access to vast amounts of reference material; facilitate collaborative projects independent of time zones and distance.

• National development

Provide small companies with international presence and facilitate commerce.

Make large amounts of information available, perhaps to the consternation of certain governments.

Present opportunities to improve education.


A more extensive list of related topics is provided below.

Worldwide

World Information Technology and Services Alliance (WITSA) is a consortium of over 60 information technology (IT) industry associations from economies around the world. Founded in 1978 and originally known as the World Computing Services Industry Association, WITSA has increasingly assumed an active advocacy role in international public policy issues affecting the creation of a robust global information infrastructure.

5.13 COMPUTER AND QUALITY FUNCTIONS

This section examines the extent to which computer-based systems are organized to enhance or degrade the quality of working life for clerks, administrative staff, professionals, and managers. Worklife merits a lot of attention for four reasons.

First, work is a major component of many people's lives. Wage income is the primary way that most people between the ages of 22 and 65 obtain money for food, housing, clothing, transportation, etc. The United States' population is about 260,000,000, and well over 110,000,000 work for a living. So, major changes in the nature of work (the number of jobs, the nature of jobs, career opportunities, job content, social relationships at work, and working conditions of various kinds) can affect a significant segment of society.

Second, in the United States, most wage earners work thirty to sixty hours per week, a large fraction of their waking lives. And people's experiences at work, whether good or bad, can shape other aspects of their lives as well. Work pressures or work pleasures can be carried home to families. Better jobs give people some room to grow when they seek more responsible or complex positions, while stifled careers often breed boredom and resentment in comparably motivated people. Although people vary considerably in what kinds of experiences and opportunities they want from a job, few people would be thrilled with a monotonous and socially isolated job, even if it were to pay very well.

Third, computerization has touched more people more visibly in their work than in any other kind of setting: home, schools, churches, banking, and so on. Workplaces are good places to examine how the dreams and dilemmas of computerization really work out for large numbers of people under an immense variety of social and technical conditions.

Fourth, many aspects of the way that people work influence their relationships to computer systems, the practical forms of computerization, and their effects.


For example, in our last section, Steven Hodas argued that the tenuousness of many teachers' classroom authority could discourage them from seeking computer-supported instruction in their classes. Also, Martin Baily and Paul Attewell argued that computerization has had less influence on the productivity of organizations because people integrate it into their work so as to gain other benefits, such as producing more professional-looking documents and enhancing their esteem with others, or because managers become counterproductive control freaks with computerized reports.

When specialists discuss computerization and work, they often appeal to strong implicit images about the transformations of work in the last one hundred years, and the role that technologies have played in some of those changes. In nineteenth century North America, there was a major shift from farms to factories as the primary workplaces. Those shifts, often associated with the industrial revolution, continued well into the early twentieth century. Industrial technologies such as the steam engine played a key role in the rise of industrialism. But ways of organizing work also altered significantly. The assembly line, with relatively high-volume, low-cost production and standardized, fragmented jobs, was a critical advance in the history of industrialization. During the last 100 years, farms also were increasingly mechanized, with motorized tractors, harvesters and other powerful equipment replacing horse-drawn plows and hand-held tools. The farms also have been increasingly reorganized. Family farms run by small groups have been dying out, and have been bought up (or replaced by) huge corporate farms with battalions of managers, accountants, and hired hands.

Our twentieth century economy has been marked by the rise of human service jobs, in areas such as banking, insurance, travel, education, and health. And many of the earliest commercial computer systems were bought by large service organizations such as banks and insurance companies. (By some estimates, the finance industries bought about 30% of the computer hardware in the United States in the 1980s.) During the last three decades, computer use has spread to virtually every kind of workplace, although large firms are still the dominant investors in computer-based systems. Since offices are the predominant site of computerization, it is helpful to focus on offices in examining the role that these systems play in altering work.

Today, the management of farms and factories is frequently supported with computer systems in their offices. Furthermore, approximately 50% of the staff of high-tech manufacturing firms are white collar workers who make use of such systems: engineers, accountants, marketing specialists, etc. There is also some computerization in factory production lines through the introduction of numerically controlled machine tools and industrial robots. And certainly issues such as worklife quality and managerial control are just as real on the shop floor as in white collar areas (see Shaiken, 1986; Zuboff, 1988).


While the selections here examine white collar work, the reader can consider the parallels between the computerization of offices and of the shop floor.

TCP is responsible for breaking up the message into datagrams, reassembling the datagrams at the other end, resending anything that gets lost, and putting things back in the right order. IP is responsible for routing individual datagrams. The datagrams are individually identified by a unique sequence number to facilitate reassembly in the correct order. The whole process of transmission is done through the use of routers. Routing is the process by which two communication stations find and use the optimum path across any network of any complexity. Routers must support fragmentation, the ability to subdivide received information into smaller units where this is required to match the underlying network technology. Routers operate by recognizing that a particular network number relates to a specific area within the interconnected networks. They keep track of the numbers throughout the entire process.
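
The division of labor between TCP and IP can be seen from a program's point of view: an application writes one logical message into a TCP connection and reads it back intact on the other side, while segmentation, sequencing and retransmission happen underneath. The following minimal Python sketch (not from the original text; the loopback host, port number and message size are illustrative choices) demonstrates this:

import socket
import threading

HOST, PORT = "127.0.0.1", 50007    # loopback address; port is arbitrary
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                          # tell the client it may connect
        conn, _ = srv.accept()
        with conn:
            received = b""
            while chunk := conn.recv(4096):  # data arrives chunk by chunk
                received += chunk
            print("reassembled", len(received), "bytes, in order")

thread = threading.Thread(target=server)
thread.start()
ready.wait()

message = b"x" * 1_000_000     # one logical message; many packets on the wire
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(message)    # TCP segments, sequences and retransmits
thread.join()

The application never sees datagram boundaries or sequence numbers; it sees only the reliable, ordered byte stream that TCP reconstructs.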

Domain Name System

The addressing system on the Internet generates IP addresses, which are usually indicated by numbers such as 128.201.86.29. Since such numbers are difficult to remember, a user-friendly system has been created known as the Domain Name System (DNS). This system provides the mnemonic equivalent of a numeric IP address and further ensures that every site on the Internet has a unique address. For example, an Internet address might appear as crito.uci.edu. If this address is accessed through a Web browser, it is referred to as a URL (Uniform Resource Locator), and the full URL will appear as http://www.crito.uci.edu.

The Domain Name System divides the Internet into a series of component networks called domains that enable e-mail (and other files) to be sent across the entire Internet. Each site attached to the Internet belongs to one of the domains. Universities, for example, belong to the "edu" domain. Other domains are gov (government), com (commercial organizations), mil (military), net (network service providers), and org (nonprofit organizations).
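
As a brief illustration of the name-to-number mapping described above, Python's standard library can perform a DNS lookup and check address syntax. This is a sketch only: the hostname is a placeholder, and the lookup needs a working network connection. It also shows why every IPv4 octet must fall in the range 0-255:

import socket
import ipaddress

# Ask DNS for the numeric address behind a mnemonic name.
addr = socket.gethostbyname("www.example.com")
print("www.example.com resolves to", addr)

ipaddress.ip_address(addr)                  # a valid address parses cleanly
try:
    ipaddress.ip_address("128.201.86.290")  # final octet is out of range
except ValueError as err:
    print("not a valid address:", err)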

World Wide Web

The World Wide Web (WWW) is based on technology called hypertext. The Web may be thought of as a very large subset of the Internet, consisting of hypertext and hypermedia documents. A hypertext document is a document that has a reference (or link) to another hypertext document, which may be on the same computer or in a different computer that may be located anywhere in the world. Hypermedia is a similar concept except that it provides links to graphic, sound, and video files in addition to text files.


In order for the Web to work, every client must be able to display every document from any server. This is accomplished by imposing a set of standards known as a protocol to govern the way that data are transmitted across the Web. Thus data travel from client to server and back through a protocol known as the HyperText Transfer Protocol (HTTP). In order to access the documents that are transmitted through this protocol, a special program known as a browser is required, which browses the Web. See also World Wide Web.
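
A browser is simply an HTTP client, and any program can play the same role. This short sketch (illustrative, not from the original text; the URL is a placeholder and the request requires network access) retrieves one hypertext document over HTTP using Python's standard library:

import urllib.request

# Issue an HTTP GET request and read back the returned hypertext document.
with urllib.request.urlopen("http://www.example.com/") as response:
    document = response.read()
    print(response.status, "-", len(document), "bytes of HTML received")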

Technological features

The Internet's technological success depends on its principal communication tools, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). They are referred to frequently as TCP/IP. A protocol is an agreed-upon set of conventions that defines the rules of communication. TCP breaks down and reassembles packets, whereas IP is responsible for ensuring that the packets are sent to the right destination.

Data travels across the Internet through several levels of networks until it reaches its destination. E-mail messages arrive at the mail server (similar to the local post office) from a remote personal computer connected by a modem, or from a node on a local-area network. From the server, the messages pass through a router, a special-purpose computer ensuring that each message is sent to its correct destination. A message may pass through several networks to reach its destination. Each network has its own router that determines how best to move the message closer to its destination, taking into account the traffic on the network. A message passes from one network to the next, until it arrives at the destination network, from where it can be sent to the recipient, who has a mailbox on that network. See also Electronic mail; Local-area networks; Wide-area networks.

TCP/IP

TCP/IP is a set of protocols developed to allow cooperating computers to share resources across networks. TCP/IP establishes the standards and rules by which messages are sent through the networks. The most important traditional TCP/IP services are file transfer, remote login, and mail transfer.

The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer. Security is handled by requiring the user to specify a user name and password for the other computer.
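
In Python, the standard ftplib module speaks this protocol. The sketch below is illustrative only: the server name, credentials and file name are hypothetical placeholders, and running it requires a reachable FTP server.

from ftplib import FTP

# Connect, authenticate (FTP's user name/password security), and fetch a file.
with FTP("ftp.example.com") as ftp:
    ftp.login(user="username", passwd="password")
    ftp.retrlines("LIST")                   # show the remote directory listing
    with open("report.txt", "wb") as local_file:
        ftp.retrbinary("RETR report.txt", local_file.write)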

The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. The user starts a remote session by specifying a computer to connect to. From that time until the end of the session, anything the user types is sent to the other computer.
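
The "anything the user types is sent to the other computer" behavior can be scripted. This hedged sketch uses the telnetlib module, which shipped with Python for many years (deprecated in 3.11 and removed in 3.13); the host, prompts and credentials are hypothetical placeholders:

from telnetlib import Telnet

with Telnet("host.example.com", 23) as session:
    session.read_until(b"login: ")
    session.write(b"username\n")
    session.read_until(b"Password: ")
    session.write(b"password\n")
    session.write(b"ls\n")       # keystrokes are relayed to the remote host
    session.write(b"exit\n")
    print(session.read_all().decode("ascii", errors="replace"))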

Mail transfer allows a user to send messages to users on other computers. Originally, people tended to use only one or two specific computers. They would maintain "mail files" on those machines. The computer mail system is simply a way for a user to add a message to another user's mail file.


Other services have also become important: resource sharing, diskless workstations, computer conferencing, transaction processing, security, multimedia access, and directory services.

Commerce on the Internet

Commerce on the Internet is known by a few other names, such as e-business, e-tailing (electronic retailing), and e-commerce. The strengths of e-business depend on the strengths of the Internet. Internet commerce is divided into two major segments, business-to-business (B2B) and business-to-consumer (B2C). In each are some companies that have started their businesses on the Internet, and others that have existed previously and are now transitioning into the Internet world. Some products and services, such as books, compact disks (CDs), computer software, and airline tickets, seem to be particularly suited for online business.

Internet

The Internet is a worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.

Licklider had moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable.


After much work, the first node went live at UCLA on October 29, 1969 on what would be called the ARPANET, one of the "eve" networks of today's Internet. Following on from this, the British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981.

The first TCP/IP wide area network was operational by January 1, 1983, when the United States' National Science Foundation (NSF) constructed a university network backbone that would later become the NSFNet.

It was then followed by the opening of the network to commercial interests in 1985. Important, separate networks that offered gateways into, and then later merged with, the NSFNet include Usenet, BITNET and the various commercial and educational networks, such as X.25, CompuServe and JANET. Telenet (later called Sprintnet) was a large privately-funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network eventually merged with the others in the 1990s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over these pre-existing communication networks, especially the international X.25 IPSS network, allowed for a great ease of growth. Use of the term "Internet" to describe a single global TCP/IP network originated around this time.

Growth

The network gained a public face in the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few Web pages at CERN.

An early popular web browser was ViolaWWW, based upon HyperCard. It was eventually replaced in popularity by the Mosaic web browser. In 1993 the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 the word "Internet" was coming into common daily usage, frequently misused to refer to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.


Today’s Internet


Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is essentially defined by its interconnections and routing policies.

As of June 10, 2007, 1.133 billion people use the Internet according to Internet World Stats. Writing in the Harvard International Review, philosopher N. J. Slabbert, a writer on policy issues for the Washington DC-based Urban Land Institute, has asserted that the Internet is fast becoming a basic feature of global civilization, so that what has traditionally been called "civil society" is now becoming identical with information technology society as defined by Internet use.

The Original Internet

The Internet started in 1969 as the ARPAnet. Funded by the U.S. government, the ARPAnet became a series of high-speed links between major supercomputer sites and educational and research institutions worldwide, although mostly in the U.S. A major part of its backbone was the National Science Foundation's NSFNet. Along the way, it became known as the "Internet" or simply "the Net." By the 1990s, so many networks had become part of it, and so much traffic was not educational or pure research, that it became obvious that the Internet was on its way to becoming a commercial venture.

It Went Commercial in 1995

In 1995, the Internet was turned over to large commercial Internet providers (ISPs), such as MCI, Sprint and UUNET, which took responsibility for the backbones and have increasingly enhanced their capacities ever since. Regional ISPs link into these backbones to provide lines for their subscribers, and smaller ISPs hook either directly into the national backbones or into the regional ISPs.

The TCP/IP Protocol

Internet computers use the TCP/IP communications protocol. There are more than 100 million hosts on the Internet, a host being a mainframe or a medium- to high-end server that is always online via TCP/IP.


The Internet is also connected to non-TCP/IP networks worldwide through gateways that convert TCP/IP into other protocols.

Life before the Web

Before the Web and the graphics-based Web browser, the Internet was accessed from Unix terminals by academicians and scientists using command-driven Unix utilities. Some of these utilities are still widely used, and are now available on all platforms, including Windows, Mac and Linux. For example, FTP is used to upload and download files, and Telnet lets you log onto a computer on the Internet and run a program. See FTP, Telnet, Archie, Gopher and Veronica.

The Next Internet

Ironically, some of the original academic and scientific users of the Internet have developed their own Internet once again. Internet2 is a high-speed academic research network that was started in much the same fashion as the original Internet (see Internet2). See Web vs. Internet, World Wide Web, how to search the Web, intranet, NAP, hot topics and trends, IAB, information superhighway and online service.


The Internet is the largest network in the world. It is made up of more than 350 million computers in more than 100 countries covering commercial, academic and government endeavors. Originally developed for the U.S. military, the Internet became widely used for academic and commercial research. Users had access to unpublished data and journals on a variety of subjects. Today, the "Net" has become commercialized into a worldwide information highway, providing data and commentary on every subject and product on earth.

E-Mail Was the Beginning

The Internet's surge in growth in the mid-1990s was dramatic, increasing a hundredfold in 1995 and 1996 alone. There were two reasons. Up until then, the major online services (AOL, CompuServe, etc.) provided e-mail, but only to customers of the same service. As they began to connect to the Internet for e-mail exchange, the Internet took on the role of a global switching center. An AOL member could finally send mail to a CompuServe member, and so on. The Internet glued the world together for electronic mail, and today SMTP, the Internet mail protocol, is the global e-mail standard.
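
As a hedged sketch of that standard in action, Python's smtplib can hand a message to an SMTP server for delivery. The relay host and addresses below are hypothetical placeholders, and a real run needs an actual mail server:

import smtplib
from email.message import EmailMessage

# Compose a simple message.
message = EmailMessage()
message["From"] = "alice@example.com"
message["To"] = "bob@example.org"
message["Subject"] = "Greetings over SMTP"
message.set_content("SMTP carries this text between mail servers.")

# Hand the message to an SMTP relay for delivery.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.send_message(message)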

The Web Was the Explosion


Secondly, with the advent of graphics-based Web browsers such as Mosaic and Netscape Navigator, and soon after, Microsoft's Internet Explorer, the World Wide Web took off. The Web became easily available to users with PCs and Macs rather than only scientists and hackers at Unix workstations. Delphi was the first proprietary online service to offer Web access, and all the rest followed. At the same time, new Internet service providers (ISPs) rose out of the woodwork to offer access to individuals and companies. As a result, the Web grew exponentially, providing an information exchange of unprecedented proportion. The Web has also become "the" storehouse for drivers, updates and demos that are downloaded via the browser, as well as a global transport for delivering information by subscription, both free and paid.

Newsgroups

Although daily news and information is now available on countless Web sites, long before the Web, information on a myriad of subjects was exchanged via Usenet (User Network) newsgroups. Still thriving, newsgroup articles can be selected and read directly from your Web browser.

Chat Rooms

Chat rooms provide another popular Internet service. Internet Relay Chat (IRC) offers multiuser text conferencing on diverse topics. Dozens of IRC servers provide hundreds of channels that anyone can log onto and participate in via the keyboard. See IRC.

5.15 ELECTRONIC COMMUNICATION

Electronic communication (e-mail, bulletin boards and newsgroups) comprises a relatively new form of communication. Electronic communication differs from other methods of communication in the following key areas:

* Speed

The time required to generate, transmit, and respond to messages.

* Permanence

The methods of storing messages and the permanence of these files.

* Cost of Distribution

The visible cost of sending messages to one or more individuals.

* Accessibility

The direct communication channels between individuals.

* Security and Privacy

The ability of individuals to access electronically stored mail and files.

* Sender Authenticity

The ability to verify the sender of a message.

In using electronic communications, we may need to reevaluate what to expect in terms of rules, guidelines, and human behavior. Our experiences with paper and telephone communications may not tell us enough.


For each of the key areas mentioned, the differences between electronic and other forms of communication are discussed below.

Speed

With electronic mail, written messages are delivered to the recipient within minutes of their transmission. Messages can be read at the recipient's convenience, at any time of the day. Or, the recipient can respond immediately, and an asynchronous dialogue can develop which resembles a telephone conversation or a meeting.

The ease and speed with which messages transmit often changes the writing style and formality of the written communication. These changes can lead to misinterpretation of messages, and a need arises for a new set of standards for the interpretation of message content.

Permanence

Electronic communications appear to be a volatile form of communication in which messages disappear when deleted. However, messages can be stored for years on disks or tapes, or they can be printed and/or stored in standard files.

Unlike a paper copy or a telephone message, a message also can be altered, then printed, without evidence that it is not original. Electronic messages may also be reformatted, then printed, as more formal or "official" correspondence.

Cost of Distribution

The associated costs of paper or telephone communication are familiar to most people. The cost of a US Mail message (paper, stamp(s), and the personnel time to prepare the message) is known and visible. Long-distance telephone costs are visible in a monthly bill. Due to the cost and effort involved, correspondents often limit their paper or telephone messages to select individuals known to absolutely require the information.

By comparison, electronic communication allows discourse with a large number of correspondents, over a wide geographical area, with no more effort or cost than is required to send a single message locally. This multiple-mailing capability often leads to wider transmission of messages than is necessary, and messages may be distributed to individuals with only a casual interest in the information.

Accessibility

Organizations develop channels of communication to filter paper or telephone messages to ensure that only appropriate individuals receive the information. Comparable mechanisms may not yet be in place for electronic mail. In using electronic communication, organizations may need to reevaluate office procedures to ensure consistent documentation of correspondence and to prevent inappropriate correspondence burdening individuals.

Security and Privacy

Currently, no legal regulations exist regarding the security and privacy of electronic mail. The vast majority of electronic mail messages are delivered to the correct addressee without intervention. However, messages may be intercepted by individuals other than the sender or recipient for reasons discussed below.

Incorrect Address

Routing software uses the address in an electronic mail message to determine the network and protocols for message delivery. Each computer that handles a mail message stamps it with information that allows tracking of the message. This information allows improperly addressed messages to be sent back to the sender. Occasionally, for technical reasons, an improperly addressed message cannot be sent back to the sender. The message is then sent to a systems administrator’s mailbox. The systems administrator usually attempts to return the message to the sender with an error message indicating the problem with the address.

Perusal by Unauthorized Individuals

Mail delivered to a secure file storage area on a computer is held there until the recipient retrieves it. The file can only be read by the owner of the mail while in storage. Once the mail is in the owner’s home directory, security depends on the owner.

One group of users on every system has access to all files on the system. These systems administrators have special privileges required to maintain the system. While these individuals have the ability to peruse private files, it is considered unprofessional to do so. Systems administrators normally access only those files required to perform their job.

Sender Authenticity

Standard mail protocols do not test the “From:” portion of a message header for authenticity. A knowledgeable person can modify the “From:” address of messages. This is an extremely common occurrence.
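To make this concrete, here is a minimal illustrative sketch in Python (standard library only) showing that the “From:” line is ordinary header text supplied by the sender; the addresses are invented for the example, and nothing in the protocol verifies them.

    import email

    # A hand-written message: the "From:" line is just text the sender typed.
    raw = (
        "From: director@example.com\r\n"
        "To: staff@example.com\r\n"
        "Subject: Note\r\n"
        "\r\n"
        "Nothing checked the From line above for authenticity.\r\n"
    )

    msg = email.message_from_string(raw)
    print(msg["From"])   # director@example.com -- accepted exactly as claimed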

In technical terms, an electronic communication system consists of:

• A transmitter that takes information and converts it to a signal.

• A transmission medium over which the signal is transmitted.

• A receiver that receives the signal and converts it back into usable information.

For example, consider a radio broadcast: in this case the broadcast tower is the transmitter, the radio is the receiver and the transmission medium is free space. Often telecommunication systems are two-way, and a single device acts as both a transmitter and receiver, or transceiver. For example, a mobile phone is a transceiver.

Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous receivers.

Analogue or Digital

Signals can be either analogue or digital. In an analogue signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (e.g., 1s and 0s). During transmission, the information contained in analogue signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals will remain intact. This represents a key advantage of digital signals over analogue signals.
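The threshold advantage can be illustrated with a small numerical sketch (Python with numpy; the bit pattern, noise level and 0.5 decision threshold are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # digital signal (0s and 1s)

    # Transmission adds small analogue noise to every sample.
    received = bits + rng.normal(0.0, 0.1, bits.size)

    # Thresholding restores the discrete levels exactly, so the digital
    # information survives; the analogue values in `received` stay degraded.
    recovered = (received > 0.5).astype(int)
    print(np.array_equal(recovered, bits))          # True while noise is small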

Networks

A collection of transmitters, receivers or transceivers that communicate with each other is known as a network. Digital networks may consist of one or more routers that route data to the correct user. An analogue network may consist of one or more switches that establish a connection between two or more users. For both types of network, a repeater may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation, which can render the signal indistinguishable from noise.

Channels

A channel is a division of a transmission medium so that it can be used to send multiple streams of information. For example, a radio station may broadcast at 96 MHz while another radio station broadcasts at 94.5 MHz. In this case the medium has been divided by frequency, and each channel receives a separate frequency to broadcast on. Alternatively, one could allocate each channel a recurring segment of time over which to broadcast; this is known as time-division multiplexing and is sometimes used in digital communication.
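As a toy illustration of the time-division idea, the following sketch (Python; the stream contents are invented) interleaves two streams into recurring time slots and separates them again:

    # Two information streams share one medium by taking turns.
    stream_a = ["a1", "a2", "a3"]
    stream_b = ["b1", "b2", "b3"]

    # Multiplex: one slot from each stream per repeating cycle.
    medium = [item for pair in zip(stream_a, stream_b) for item in pair]
    print(medium)                  # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']

    # Demultiplex: each receiver reads only its own recurring slot.
    recovered_a, recovered_b = medium[0::2], medium[1::2]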

Modulation

The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is known as keying, and several keying techniques exist (these include phase-shift keying, frequency-shift keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange information between devices.
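As an illustration of keying, the sketch below (Python with numpy) generates a binary frequency-shift keyed waveform: each bit selects one of two tones for its symbol period. The frequencies, sample rate and symbol time are arbitrary values chosen for the example, not taken from any particular standard:

    import numpy as np

    f0, f1 = 1200.0, 2200.0                 # tones for bit 0 and bit 1 (illustrative)
    sample_rate, symbol_time = 48000, 0.01
    t = np.arange(0, symbol_time, 1 / sample_rate)

    def fsk_modulate(bits):
        """Map each bit to one symbol period of the corresponding tone."""
        return np.concatenate(
            [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

    waveform = fsk_modulate([1, 0, 1, 1])   # analogue waveform carrying 4 bits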

Modulation can also be used to transmit the information of analogue signals at higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. There are several different modulation schemes available to achieve this (two of the most basic being amplitude modulation and frequency modulation). An example of this process in action is a DJ’s voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel “96 FM”).
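The superimposing step can be sketched numerically. The example below uses amplitude modulation, the simpler of the two schemes named above (the DJ example in the text uses frequency modulation); a 100 Hz tone stands in for the voice and a 10 kHz tone for the carrier, both scaled down from real broadcast values purely to keep the arrays small:

    import numpy as np

    sample_rate = 100_000
    t = np.arange(0, 0.05, 1 / sample_rate)

    voice = np.sin(2 * np.pi * 100 * t)        # low-frequency information signal
    carrier = np.sin(2 * np.pi * 10_000 * t)   # higher-frequency carrier wave

    # Amplitude modulation: the voice varies the carrier's amplitude.
    am_signal = (1 + 0.5 * voice) * carrier    # ready for transmission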

Telecommunication

Telecommunications is the transmission of data and information between computers using a communications link such as a standard telephone line. Typically, a basic telecommunications system would consist of a computer or terminal on each end, communication equipment for sending and receiving data, and a communication channel connecting the two users. Appropriate communications software is also necessary to manage the transmission of data between computers. Some applications that rely on this communications technology include the following:

1. Electronic mail (e-mail) is a message transmitted from one person to another through computerized channels. Both the sender and receiver must have access to on-line services if they are not connected to the same network. E-mail is now one of the most frequently used types of telecommunication (a minimal sending sketch follows this list).

2. Facsimile (fax) equipment transmits a digitized exact image of a document over telephone lines. At the receiving end, the fax machine converts the digitized data back into its original form.

3. Voice mail is similar to an answering machine in that it permits a caller to leave a voice message in a voice mailbox. Messages are digitized so the caller’s message can be stored on a disk.

4. Videoconferencing involves the use of computers, television cameras, and communications software and equipment. This equipment makes it possible to conduct electronic meetings while the participants are at different locations.

5. The Internet is a continuously evolving global network of computer networks that facilitates access to information on thousands of topics. The Internet is utilized by millions of people daily.
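For item 1, here is a minimal sketch of handing a message to a mail server over SMTP, the Internet mail protocol, using Python’s standard smtplib; the server name and addresses are placeholders, not a real mail host:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Delivered through computerized channels within minutes.")

    # Hand the message to an SMTP server for delivery (placeholder host).
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)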

Actually, telecommunications is not a new concept. It began in the mid-1800s with the telegraph, whereby sounds were translated manually into words; then the telephone, developed in 1876, transmitted voices; and then the teletypewriter, developed in the early 1900s, was able to transmit the written word.

Since the 1960s, telecommunications development has been rapid and wide-reaching. The development of dial-up modem technology accelerated the rate during the 1980s. Facsimile transmission also enjoyed rapid growth during this time. The 1990s saw the greatest advancement in telecommunications. It is predicted that computing performance will double every eighteen months. In addition, it has been estimated that the power of the computer has doubled thirty-two times since World War II. The rate of advancement in computer technology shows no signs of slowing. To illustrate the computer’s rapid growth, Ronald Brown, former U.S. secretary of commerce, reported that only fifty thousand computers existed in the world in 1975, whereas, by 1995, it was estimated that more than fifty thousand computers were sold every ten hours (U.S. Department of Commerce, 1995).

Deregulation and new technology have created increased competition and widened the range of network services available throughout the world. This increase in telecommunication capabilities allows businesses to benefit from the information revolution in numerous ways, such as streamlining their inventories, increasing productivity, and identifying new markets. In the following sections, the technology of modern telecommunications will be discussed.

Communications Networks

When computers were first invented, they were designed as stand-alone systems. As computers became more widespread, practical, useful, and indispensable, network systems were developed that allowed communication between computers. The term “network” describes computers that are connected for the purpose of sharing data, software, and hardware. The two types of networks are local area networks (LANs) and wide area networks (WANs). As the name suggests, LANs cover a limited geographic area, usually a square mile or less. This limited area can be confined to a room, a building, or a group of buildings. Although a LAN can include one central computer connected to terminals, more commonly it connects a group of personal computers. A WAN covers a much larger geographic area by means of telephone cables and/or other communications channels. WANs are often used to connect a company’s branch offices in different cities. Some familiar public wide area networks include AT&T, Sprint, and MCI.

Internet, Intranet, and Extranet

“Internetwork” is the term used to describe two or more networks that are joined together. The term “Internet” describes the collection of connected networks. The Internet has been made accessible by use of the World Wide Web. The Web allows users to navigate the millions of sites found on the Internet using software applications called Web browsers. People make use of the Internet in numerous ways for both personal and business applications. For instance, an investor is able to access a company directly and set up an investment account; a student is able to research an assigned topic for a class report; a shopper can obtain information on new and used cars.

The Internet concept of global access to information, transferred to a private corporate network, creates an intranet. In conjunction with corporate Internet access, many companies are finding that it is highly practical to have an internal intranet. Because of the increased need for fast and accurate information, an efficient and seamless communications line enabling all members to access a wealth of relevant information instantaneously is vital.

A company intranet in conjunction with the Internet can provide various types of information for internal and/or external use. Uses such as instantaneous transfer of information, reduced printing and reprinting, and elimination of out-of-date information can provide great benefits to geographically dispersed groups. Some examples of information that an intranet might include are company policy and procedures manuals, a company phonebook and e-mail listings, insurance and benefits information, in-house publications, job postings, expense reports, bulletin boards for employee memoranda, training information, inventory lists, price lists, and inventory control information. Putting such applications on an intranet can serve a large group of users at a substantially reduced cost.

Some companies might want to make some company information accessible to preauthorized people outside the company, or even to the general public. This can be done by using an extranet. An extranet is a collaborative network that uses Internet technology to link businesses with their suppliers, customers, or other businesses. An extranet can be viewed as part of a company’s intranet. Access by customers would allow entering orders into a company’s system. For example, a person may order airline tickets, check the plane schedule, and customize the trip to his or her preferences. In addition to time and labor savings, this type of order entry could also decrease errors made by employees when entering manually prepared orders.

Security and privacy can be an issue in using an extranet. One way to provide this security and privacy would be by using the Internet with access via password authorization. Computer dial-in and Internet access to many financial institutions is now available. This is an example of limited access to information. While bank employees have access to many facets of institutional information, bank customers are able to access only information that has to do with their own accounts. In addition to their banking account number, they would have to use their password to gain access to the information.

Transmission Media

The physical devices making up the communications channel are known as the transmission media. These devices include cabling media (such as twisted-pair cable, coaxial cable, and fiber-optic cable) and wireless media (such as microwaves and other radio waves as well as infrared light). Wireless transmission has the advantage of not having to install physical connections at every point. Microwave stations use radio waves to send both voice and digital signals. The principal drawback to this system is that microwave transmission is limited to line-of-sight applications. Relay antennas are usually placed twenty-five to seventy-five miles apart and can have no interfering buildings or mountains between them. Earth-based microwave transmissions, called terrestrial microwaves, send data from one microwave station to another, similar to the method by which cellular telephone signals are transmitted.

Earth stations receive microwave transmissions and transmit them to orbiting communication satellites, which then relay them over great distances to receiving earth stations. Usually, geosynchronous satellites are placed roughly twenty-two thousand miles above the earth. Being geosynchronous allows the satellites to remain in fixed positions above the earth and to be constantly available to a given group of earth stations.

Many businesses either lease or rent satellite and/or microwave communication services through the telephone company or other satellite communication companies. If a business has only a small amount of information to be transmitted each day, it may prefer to use a small satellite dish antenna instead.

Types of Signals and Their Conversion By Modem

Most telecommunications involving personal computers make use of standard telephone lines at some point in their data transmission. But since computers have been developed to work with digital signals, their transmission presents a signal compatibility problem. Digital signals are on/off electrical pulses grouped in a manner to represent data. Originally, telephone equipment was designed to carry only voice transmission and operated with a continuous electrical wave called an analog signal. In order for telephone lines to carry digital signals, a special piece of equipment called a modem (MOdulator/DEModulator) is used to convert between digital and analog signals. Modems can be either external to the computer, and thus able to be moved from one computer to another, or they can be internally mounted inside the computer. Modems are always used in pairs.

Both the receiving and transmitting modems must operate at the same speed. Multiple transmission speeds allow faster modems to reduce their speed to match that of a slower modem. The transmission rate and direction are determining factors that influence the speed, accuracy, and efficiency of telecommunications systems.

5.16 INFORMATION QUALITY ISSUES

Information quality (IQ) is a term used to describe the quality of the content of information systems. Most information system practitioners use the term synonymously with data quality. However, as many academics make a distinction between data and information, some will insist on a distinction between data quality and information quality. Information quality assurance is confidence that particular information meets context-specific quality requirements.

“Information quality” is a measure of the value which the information provides to the user of that information. Quality is subjective, and the quality of information can vary among users and among uses of the information. Furthermore, accuracy is just one element of IQ, and this can be source-dependent. Often there is a trade-off between accuracy and other aspects of the information determining its suitability for any given task.

Dimensions of Information Quality

The generally accepted list of elements used in assessing subjective information quality is the one put forth by Wang & Strong (1996); a toy scoring sketch follows the list:

• Intrinsic IQ: Accuracy, Objectivity, Believability, Reputation

• Contextual IQ: Relevancy, Value-Added, Timeliness, Completeness, Amount of information

• Representational IQ: Interpretability, Ease of understanding, Concise representation, Consistent representation

• Accessibility IQ: Accessibility, Access security
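One way to make the taxonomy operational is to rate a source on each dimension and average the ratings within each category. The sketch below (Python) takes the dimension names from the list above; the 1-to-5 scale and the averaging rule are invented for illustration only:

    # Wang & Strong (1996) categories and dimensions, as listed above.
    IQ_DIMENSIONS = {
        "intrinsic": ["accuracy", "objectivity", "believability", "reputation"],
        "contextual": ["relevancy", "value-added", "timeliness",
                       "completeness", "amount of information"],
        "representational": ["interpretability", "ease of understanding",
                             "concise representation", "consistent representation"],
        "accessibility": ["accessibility", "access security"],
    }

    def category_scores(ratings):
        """Average per-dimension ratings (1-5) within each category."""
        return {cat: sum(ratings[d] for d in dims) / len(dims)
                for cat, dims in IQ_DIMENSIONS.items()}

    # Example: a source rated 4 on every dimension scores 4.0 per category.
    ratings = {d: 4 for dims in IQ_DIMENSIONS.values() for d in dims}
    print(category_scores(ratings))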

Researchers should evaluate the quality of information appearing online or in print based on five criteria: scope of coverage, authority, objectivity, accuracy and timeliness. This guide defines the criteria, documents incidents of questionable, false or fraudulent information as reported in the news or trade literature, provides examples of Web sites that illustrate good or bad information, and suggests strategies that help you detect bad information.

Criteria for Quality in Information

To evaluate information, you should understand the significance of scope of coverage, authority, objectivity, accuracy and timeliness.

• Scope of coverage

Refers to the extent to which a source explores a topic. Consider time periods, geography or jurisdiction, and coverage of related or narrower topics.

• Authority

Refers to the expertise or recognized official status of a source. When working with legal or government information, consider whether the source is the official provider of the information.

• Objectivity: The bias or opinion expressed when a writer interprets or analyses facts.

• Accuracy: Describes information that is correct and complete.

• Timeliness: Refers to information that is current at the time of publication.

The most basic requirements of good information are:

• Objectivity: That the information is presented in a manner free from propaganda or disinformation.

• Completeness: That the information is a complete, not a partial, picture of the subject.

• Pluralism: That all aspects of the information are given and are not restricted to present a particular viewpoint, as in the case of censorship.

To achieve quality in electronic information, it is necessary to be sure that one is retrieving all of the relevant information, and then to determine which of the retrieved information is valuable: what information is free of bias, propaganda, or omissions. To have quality information, three things are necessary:

• Gaining full and appropriate access to the available information

• Making full use of the retrieval mechanisms, which requires an understanding of how these mechanisms work.

• Evaluating the quality of the information.

The World Wide Web holds the potential to become the greatest repository of knowledge ever created. Unlike the traditional library, material on the Web is frequently self-published, stored in quasi-secured repositories, and often of unknown validity. The government, and it would seem a majority of the American population, favor public access to the Web through public libraries and public schools. Librarians are facing a new set of challenges in helping patrons access and utilize this new medium. Schools and public libraries face three main challenges:

• Providing “safe” access; the major concern here is “appropriate” access for minors.

• Locating useful, quality information on the Web.

• Evaluating the information to verify its quality.

SUMMARY

Standards are important in ensuring the TQM culture in organizations. Quality management systems use various standards and guiding principles to ensure adherence to objectives and to improve performance. A quality audit will reveal the current standing of the system. Leadership plays an important and indispensable role in winning over the employees of an organization through various methods of recognition and reward. Developments in IT have helped in defining quality functions, and computers and the Internet are used to address issues related to information quality.

REVIEW QUESTIONS

1. Explain the procedure for establishing a quality system in organizations.

2. Describe ISO 9004:2000 and state its scope and applications.

3. Enumerate the guidelines for performance improvement in the service sector.

4. “Culture plays an important role in achieving TQM in organizations” – Discuss.

5. Describe the various types of leadership and explain their impact on employees.

6. Trace the developments in Information Technology over the years.

7. “Computers alone do not ensure quality” – Critically examine.

8. Detail the quality issues related to information and explain the impact of wrong information on the quality assurance process.