Esi Saari

Operation and Maintenance Engineering

DOCTORAL THESIS

KPI framework for maintenance management through eMaintenance
Development, implementation, assessment, and optimization

Source: ltu.diva-portal.org/smash/get/diva2:1315950/FULLTEXT02.pdf


Esi Saari

Operation and Maintenance Engineering

Department of Civil, Environmental and Natural Resources Engineering
Division of Operation, Maintenance and Acoustics

ISSN 1402-1544
ISBN 978-91-7790-400-7 (print)
ISBN 978-91-7790-401-4 (pdf)

Luleå University of Technology 2019

DOCTORAL THESIS

KPI framework for maintenance management through eMaintenance

Development, implementation, assessment, and optimization


KPI framework for maintenance management through eMaintenance Development, implementation, assessment, and optimization

Esi Saari

Operation and Maintenance Engineering


Printed by Luleå University of Technology, Graphic Production 2019

ISSN 1402-1544
ISBN 978-91-7790-400-7 (print)
ISBN 978-91-7790-401-4 (pdf)

Luleå 2019

www.ltu.se


ACKNOWLEDGEMENTS

The research presented in this thesis was carried out at the Division of Operation and Maintenance Engineering at Luleå University of Technology (LTU), Sweden.

First of all, I would like to thank LKAB for providing funding, data and other assistance, specifically, Peter Olofsson, Mats Renfors, Sylvia Simma, Maria Rytty, Mikael From and Johan Enbak, who were involved in initiating and supporting this project.

Next, I would like to express my profound gratitude to the project leader who doubles as my assistant supervisor, Professor Ramin Karim, for giving me the opportunity to be a part of the research and for providing his guidance throughout the research.

I would also like to thank my main supervisor Associate Professor Jing Lin (Janet Lin). I would not be here today without her tremendous help. I am very grateful for her patience, understanding and guidance even in the short time we worked together.

I am thankful to Professor Aditya Parida who was initially my main supervisor. Even after he left, he still guided and supported me. A big thank you to Associate Professor Phillip Tretten for his help, guidance and words of encouragement in times when I felt I could not continue. I also thank Professor Uday Kumar for his comments, contributions and guidance.

I appreciate the support of Professor Diego Galar, Dr. Liangwei (Levis) Zhang, Dr. Stephen Famurewa, Dr. Christer Stenström and other faculty members.

Great thanks are due to my loving husband John and our daughter Johanna for their understanding and support. Thanks to Mrs. Philomina Owusu-Obeng for her prayers and words of encouragement and for being the mother I never had. I would like to thank my siblings and my parents-in-law, Mats and Eva Saari, for their encouragement.

A big thank you also to the wonderful brethren who supported me with their prayers and encouraging words, Dr. Obudulu Ogonna, Dr. Samuel Awe, Abiola Famurewa, Andrews Omari and Emefa Omari.

Finally, all of my help comes from God, who gives life and hope to the hopeless, the eternal creator and giver of wisdom and grace for this time and in the eternal life.

Esi Saari

June 2019

Luleå, Sweden


ABSTRACT

Performance measurement is critical if any organization wants to thrive. The motivation for the thesis originated from the project “Key Performance Indicators (KPI) for control and management of maintenance process through eMaintenance (in Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)”, initiated and financed by a mining company in Sweden. The main purpose of this project is to propose an integrated KPI system for the mining company’s maintenance process through eMaintenance, including development, implementation, assessment, and optimization.

There are gaps in the research, however, resulting in the following challenges. First, no KPI framework considers both technical and soft KPIs, so developing a system is problematic. Second, few studies have focused on implementing KPI measurement through eMaintenance. Third, there are gaps in KPI assessment. In assessing system availability, for example, the current analytical (e.g., Markov/semi-Markov) or simulation approaches (e.g., Monte Carlo simulation-based) cannot handle complicated state changes or are computationally expensive. In addition, few researchers have revealed the connections between technical and soft KPIs. For those soft KPIs for which the distribution of data collected from eMaintenance systems (e.g., work orders) is not easily determined, studies are insufficient. Fourth, the current continuous improvement process for the KPIs is very time-consuming. In short, there is a need for a new approach.

The thesis develops an integrated KPI framework consisting of technical KPIs (linked to machines) and soft KPIs (linked to maintenance workflow) to control and monitor the entire maintenance process to achieve the overall goals of the organization. The proposed KPI framework makes use of four hierarchical levels and has 134 KPIs divided into technical and soft KPIs as follows: asset operation management has 23 technical KPIs, maintenance process management has 85 soft KPIs and maintenance resources management has 26 soft KPIs.

The thesis discusses the proposed KPI framework; it lists the KPIs and provides timelines, definitions and general formulas for each specified KPI. Results will be used by the mining company to guide the implementation of the proposed KPIs in an eMaintenance environment.

To suggest novel approaches to KPI assessment, the thesis takes system availability in the operational stage as an example. It proposes parametric Bayesian approaches to assess system availability. With these approaches, Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) can be treated as distributions instead of being “averaged” by point estimation. This better reflects reality. Markov Chain Monte Carlo (MCMC)


approach is adopted to take advantage of both analytical and simulation methods. Because MCMC handles the high-dimensional numerical integration, the selection of prior information and the descriptions of reliability/maintainability can be more flexible and realistic, and the limitations of small data samples can be compensated for. In the case studies, Time to Failure (TTF) and Time to Repair (TTR) are determined using a Bayesian Weibull model and a Bayesian lognormal model, respectively. The proposed approach can integrate analytical and simulation methods for system availability assessment and could be applied to other technical problems in asset management (e.g., other industries, other systems). By comparing the results with and without a threshold for censoring data, the research shows the connection between technical and soft KPIs and suggests the threshold can be used as a monitoring line for continuous improvement in the mining company. For those soft KPIs for which the distribution of data collected from the eMaintenance system (e.g., work orders) is not easily determined, other approaches could be applied: time series analysis (if the data are "fast moving"), the Croston method (if the data are "intermittent"), or the bootstrap method (if the data are "slow moving").
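To make the distribution-based idea concrete, the following minimal sketch propagates sampled TTF and TTR values into a per-draw availability A = TTF / (TTF + TTR). It is an illustration only: the Weibull and lognormal parameters below are invented for the example, whereas the thesis estimates them with Bayesian MCMC.

```python
import random
import statistics

# Illustrative sketch only: propagate uncertainty in TTF (Weibull) and
# TTR (lognormal) into an availability distribution, instead of collapsing
# them to point estimates MTTF and MTTR. Parameters are invented, not
# fitted; the thesis obtains them via Bayesian MCMC.

random.seed(42)

def sample_availability(n: int = 100_000) -> list[float]:
    samples = []
    for _ in range(n):
        ttf = random.weibullvariate(alpha=500.0, beta=1.5)  # hours to failure
        ttr = random.lognormvariate(mu=1.5, sigma=0.6)      # hours to repair
        samples.append(ttf / (ttf + ttr))
    return samples

if __name__ == "__main__":
    a = sample_availability()
    # The result is a distribution, so percentiles are available, not just a mean.
    print(f"mean availability  {statistics.mean(a):.4f}")
    print(f"5th percentile     {statistics.quantiles(a, n=20)[0]:.4f}")
```

The point of the sketch is that availability comes out as a distribution, so decision makers can read off credible bounds (e.g., a 5th percentile) instead of a single averaged number.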

To ensure the KPI framework can be improved continuously, the thesis performs a comparison study to find the gaps between current and proposed KPIs in the mining company. It adapts a roadmap from the railway industry to show how optimization can be promoted by reviewing and improving the KPI framework.

Results from this study will be applied to the company and will guide its development, implementation and assessment of the KPIs through eMaintenance with continuous improvement. The proposed approaches could also be applied to other technical problems in asset management (e.g., other industries, other systems).

Keywords: maintenance engineering, maintenance performance measurement, Key Performance Indicator (KPI), eMaintenance, mining industry.


ACRONYMS

 

ACF Auto-correlation function
AD Anderson-Darling
ARMA Autoregressive Moving Average
ARIMA Autoregressive Integrated Moving Average
CM Corrective maintenance
DSS Decision support system
FAR False alarm rate
GRB Gelman-Rubin-Brooks
HPD Highest Posterior Distribution
HSE Health Safety Environment
IT Information Technology
ICT Information and Communications Technology
KPIs Key performance indicators
KRA Key result area
MA Maintenance Analytics
MC error Monte Carlo error
MCMC Markov Chain Monte Carlo
MDT Mean down time
MES Manufacturing execution system
MPIs Maintenance performance indicators
MPM Maintenance performance measurement
MTBF Mean Time between Failure
MTTF Mean Time to Failure
MTTR Mean Time to Repair
OEE Operational Equipment Effectiveness
PACF Partial auto-correlation function
PDCA Plan-Do-Check-Act
PdM Predictive Maintenance
PI Performance indicators
PM Preventive Maintenance
RCA Root Cause Analysis
ROI Return on Investment
RQ Research Question
RTF Run to Failure
SBA Syntetos-Boylan Approximation
SBJ Shale-Boylan-Johnston


SIQ Swedish Institute for Quality
TBF Time between Failure
TDU Total report operation and maintenance
TSB Teunter, Syntetos and Babai
TTF Time to Failure
TTR Time to Repair


CONTENTS

 

Part I - Comprehensive Summary

CHAPTER 1. INTRODUCTION
1.1 Background
1.1.1 KPIs and MPM framework
1.1.2 Ontology and taxonomy in eMaintenance
1.1.3 KPI ontology and taxonomy
1.1.4 Project motivation
1.2 Problem statement
1.3 Purpose and objectives
1.4 Research questions
1.5 Linkage of research questions and appended papers
1.6 Scope and limitations
1.7 Authorship of appended papers
1.8 Outline of thesis

CHAPTER 2. THEORETICAL FRAMEWORK
2.1 eMaintenance and maintenance decision-making
2.2 Performance measurement
2.3 Maintenance performance measurement
2.4 KPI assessment
2.4.1 System availability assessment
2.4.2 Some quantitative approaches
2.5 Summary of research framework

CHAPTER 3. RESEARCH METHODOLOGY
3.1 Research design
3.2 Data collection
3.2.1 Interview
3.2.2 Documentation from studied company
3.2.3 Data source: operation and maintenance data
3.3 Literature review
3.4 Data analysis
3.4.1 Data analysis in Paper B
3.4.2 Data analysis in Paper C
3.5 Reliability and validity of the research
3.6 Induction, deduction and abduction

CHAPTER 4. SUMMARY OF THE APPENDED PUBLICATIONS
4.1 Paper A
4.2 Paper B
4.3 Paper C

CHAPTER 5. RESULTS AND DISCUSSION
5.1 Results and discussion related to RQ1
5.1.1 KPI framework
5.1.2 Development of asset operation management KPIs
5.1.3 Development of maintenance process management KPIs
5.1.4 Development of maintenance resource management KPIs
5.2 Results and discussion related to RQ2
5.2.1 KPI implementation for asset operation management
5.2.2 KPI implementation for maintenance process management
5.2.3 KPI implementation for maintenance resource management
5.3 Results and discussion related to RQ3
5.3.1 Assessment of technical KPIs
5.3.2 Assessment and connections of technical and soft KPIs
5.3.3 Assessment of soft KPIs
5.4 Results and discussion related to RQ4
5.4.1 Comparison of current and proposed KPIs
5.4.2 Optimization of the proposed KPIs

CHAPTER 6. CONCLUSIONS, CONTRIBUTIONS AND FUTURE RESEARCH
6.1 Conclusions
6.2 Contributions
6.3 Future research

REFERENCES

APPENDIX

Part II - Appended Papers

Paper A
Paper B
Paper C


LIST OF APPENDED PAPERS

 

Paper A

Saari, E., Sun, H-L., Lin, J. and Karim, R. 2019. Development and implementation of a KPI framework for maintenance management in a mining company. Journal of System Assurance Engineering and Management. Under review.

Paper B

Saari, E., Lin, J., Zhang, L-W., Liu, B. and Karim, R. 2019. System availability assessment using a parametric Bayesian approach – a case study of balling drums. International Journal of System Assurance Engineering and Management. Accepted.

Paper C

Saari, E., Lin, J., Liu, B., Zhang, L-W. and Karim, R. 2019. A novel Bayesian approach to system availability assessment using a threshold to censor data. International Journal of Performability Engineering. Published.

LIST OF RELATED PUBLICATIONS

Nunoo, E., Phillip, T. and Parida, A. 2014. Issues and challenges for condition assessment: A case study in mining. Proceedings of the 3rd International Workshop and Congress on eMaintenance: June 17-18, Luleå, Sweden. pp. 85-93.

Saari, E., Lin, J. and Karim, R. 2018. Development of KPI framework for maintenance process management through eMaintenance – A study for LKAB. Research Report. Approved by LKAB.


Part I

Comprehensive Summary


Chapter 1 Introduction

This chapter gives a short description of the research background of the thesis, states the research problems, enumerates the research purpose, objectives and questions, and explains the research scope, limitations, and structure.

1.1 Background

This section presents both the theoretical and the practical background, as shown in Figure 1.1; the problem statement is then summarized in the next section.

The theoretical background covers the PI, KPI, MPI and MPM framework (Section 1.1.1), maintenance, eMaintenance, ontology and taxonomy (Section 1.1.2), and KPI ontology and taxonomy (Section 1.1.3); the practical background covers the project motivation, the mining company and its current situation (Section 1.1.4). Both lead into the problem statement (Section 1.2).

Figure 1.1 Theoretical and practical background

1.1.1 KPIs and MPM framework

Performance indicators (PIs) are numerical or quantitative indicators that show how well an objective is being met (Pritchard et al., 1990). PIs highlight opportunities for improvement within companies and are applied to find ways to reduce downtime, costs and waste, operate more efficiently, and get more capacity from the operational lines (Parida, 2006). PIs also provide measures of how many resources are being used in relation to those available, assess the extent to which management targets are met, and evaluate the general impact of management strategies (Alegre et al., 2017).

PIs can be classified as leading or lagging indicators. Leading indicators give advance warning about whether objectives will be met; they thus work as performance drivers and help a specific organizational unit ascertain its present status against an acceptable reference. A lagging indicator indicates the condition after the performance has


taken place; e.g., maintenance cost per unit (Parida, 2006). As a rule, all PIs are tied to long-range corporate business objectives.

When aggregated to the managerial or higher levels, PIs at the shop-floor or functional level are called key performance indicators (KPIs). A KPI indicates the performance of a key result area (KRA). KPIs focus on the aspects of organizational performance that are most critical for the current and future success of the organization (Parmenter, 2007); they evaluate whether organizational targets have been reached. Unlike PIs, which are mostly general measures, KPIs measure what the organization considers most important. For this reason, one organization may use as a KPI what another considers a PI, and vice versa.

Maintenance performance indicators (MPIs) are used to evaluate the effectiveness of the maintenance carried out (Wireman, 2005). The attributes and concept of performance measurement are relevant to maintenance performance measurement (MPM) if a holistic approach is adopted and maintenance is considered part of business performance. An MPM framework needs to facilitate and support management in controlling and monitoring performance aligned with the organizational objectives and strategy, to permit timely corrective decisions. The framework needs to provide a solution for performance measurement by linking measures directly with the organizational strategy and considering criteria consisting of both financial and non-financial indicators (Parida & Kumar, 2006). The link-and-effect model of the MPM framework can achieve total maintenance effectiveness, i.e., both external and internal (see Figure 1.2), thus contributing to the overall objectives of the organization and its business units (Parida, 2006).

The model spans three levels: the strategic level (societal responsibility, transparency, good governance), measured by a Business Integrity Index; the tactical level (ROI and cost effectiveness, life cycle asset utilization, safe environment), measured by an Asset Integrity Index; and the operational level (availability, reliability, capacity), measured by a Process Integrity Index. Representative indicators include ROI and HSE, OEE and the number of incidents/accidents, and health and safety, supported by the organization, engineering and supply chain management. Data are measured objectively from the operational level up to the strategic level, while MPIs are derived from the strategic level down to the operational level.

Figure 1.2 Link-and-effect model¹ (Parida, 2006)

¹ Notes in Figure 1.2: ROI, Return on Investment; HSE, Health Safety Environment; OEE, Operational Equipment Effectiveness.


An MPM framework is needed to meet the eMaintenance requirements of organizations, stakeholders, and the maintenance department in some cases (Parida, 2006).

1.1.2 Ontology and taxonomy in eMaintenance

eMaintenance is defined as the materialization of information logistics aimed at supporting maintenance decision-making (Karim, 2008; Kajko-Mattsson et al., 2011). Its solutions integrate information and communications technology (ICT) with maintenance strategies, creating innovative ways to support production (e-manufacturing) and business (e-business) (Koc & Lee, 2003; Muller, Crespo Marquez, & Iung, 2008).

eMaintenance solutions are essentially data-driven. Thus, an effective maintenance decision-making process needs a trusted decision support system (DSS) based on knowledge discovery, defined as data acquisition, data transition, data fusion, data mining, information extraction and visualization (Kans & Galar, 2017; Karim et al., 2016).

Since eMaintenance data are often transferred between heterogeneous environments, eMaintenance solutions must have interconnectivity. All systems within the eMaintenance network must interact as seamlessly as possible to exchange information in an efficient and usable way (Aljumaili, 2016). However, in reality, not all data are processed and turned into information; some say there are too many data and too little information (Galar & Kumar, 2016). In the era of Maintenance 4.0, the lack of efficient Information Technology (IT) support adversely affects the planning and optimization of maintenance (Kans & Galar, 2017).

As Figure 1.3 shows, incorporating the principles of ontology and taxonomy into eMaintenance solutions will facilitate strategic asset management (Kans & Galar, 2017) and promote maintenance analytics (Karim et al., 2016).

 

Figure 1.3 Ontology and Taxonomy in eMaintenance


In philosophy, ontology is the study of the nature of being, existence, or reality; in computer science, an ontology is a special kind of information object or computational artefact (Rachman, 2019). In eMaintenance, ontologies are represented by the published standards that can be used to support maintenance. These standards offer some stability by proposing information models for data representation, an essential property for long-term data exchange and archiving (Aljumaili, 2016). An ontology model can be described as a set O = {C, RS, I}. In this model, C is a collection of concepts, also called classes; I is a set of particulars (instances of classes, individuals); and RS is the set of relationships between two concepts or particulars (Schmidt, 2018).
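The set-based model O = {C, RS, I} can be sketched in code. This is a minimal illustration with invented concept and relationship names, not a standard eMaintenance vocabulary:

```python
from dataclasses import dataclass, field

# Minimal sketch of the ontology model O = {C, RS, I}: C is the set of
# concepts (classes), I maps instances (particulars) to their concepts,
# and RS holds relationships between concepts. All names are invented
# for illustration only.

@dataclass(frozen=True)
class Relationship:
    name: str    # e.g. "measured_on"
    source: str  # source concept
    target: str  # target concept

@dataclass
class Ontology:
    concepts: set[str] = field(default_factory=set)            # C
    instances: dict[str, str] = field(default_factory=dict)    # I: instance -> concept
    relations: set[Relationship] = field(default_factory=set)  # RS

    def add_instance(self, name: str, concept: str) -> None:
        # An instance must belong to a known concept, keeping the model consistent.
        if concept not in self.concepts:
            raise ValueError(f"unknown concept: {concept}")
        self.instances[name] = concept

if __name__ == "__main__":
    o = Ontology()
    o.concepts.update({"Indicator", "Equipment"})
    o.relations.add(Relationship("measured_on", "Indicator", "Equipment"))
    o.add_instance("Availability", "Indicator")
    print(o.instances)  # {'Availability': 'Indicator'}
```

Rejecting instances of unknown concepts is one simple way such a model supports the consistency that long-term data exchange requires.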

Taxonomy is a hierarchical classification system, often depicted as a tree that starts from a root concept and progressively divides into more specific off-shoot concepts. In eMaintenance, taxonomy refers to the type of relationships among the data.

It is essential to understand the ontology and taxonomy of KPIs if data are to be transformed into information and then into the knowledge required to develop, implement, assess, and optimize a KPI framework for maintenance management through eMaintenance.

1.1.3 KPI ontology and taxonomy

KPI ontology supports the construction of a valid reference model that integrates KPI definitions proposed by different engineers in a minimal and consistent manner, increasing interoperability and collaboration (Diamantini et al., 2014). Several KPI ontology models have been proposed in the literature in the context of the performance-oriented view of organizations (Popova & Alexei, 2010; Del-Río-Ortega et al., 2010; Del-Río-Ortega et al., 2013; Negri et al., 2015). These models rely on description logic and first-order sorted predicate logic to express, on an axiomatic basis, the relations among indicators, using concepts like causing, correlated and aggregation_of. However, some argue that these models do not take compositional semantics into account. Furthermore, they are conceived to define KPIs in a single process-oriented enterprise, and the issue of consistency management is not addressed.

Diamantini et al. (2014) have considered compositional semantics in developing their KPI model. The proposed method serves as a formal way of describing indicators, with the core of the ontology composed of a set of disjoint classes, detailed as indicator, dimension and formula.

Indicator signifies the key class of the KPI ontology, while its instances (i.e., indicators) describe the metrics enabling performance monitoring. Properties of the indicator include name, identifier, acronym, definition (i.e., a detailed description of meaning and usage), compatible dimensions, formula, unit of measurement chosen for the indicator, business object and aggregation functions.

Dimension is the coordinate or perspective along which a metric is computed; it is structured into a hierarchy of levels, where each level represents a different way of grouping elements of the same dimension.


Figure 1.4 A fragment of the Indicator taxonomy: an example (Diamantini et al., 2014)

 

Figure 1.5 Properties of the indicator PersonnelTrainingCosts: an example (Diamantini et al., 2014)

Formula is an algebraic operation used to express the semantics of the indicator. It describes the way the indicator is computed and is characterized by the aggregation function, the way the formula is presented, the semantics (i.e., the mathematical meaning) of the formula, and references to its components, which are, in turn, formulas of indicators.


According to Diamantini et al. (2014), composite KPIs can be represented in a tree structure and calculated with full or partial specification of the formula linking the indicator to its components. Figure 1.4 shows a sample fragment of the KPI taxonomy; Figure 1.5 shows an example of the properties of the indicator PersonnelTrainingCosts and an excerpt of the hierarchies for the organization and time dimensions (Diamantini et al., 2014).
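The tree-structured composition of indicators can be sketched as follows. The indicator names, values, and the availability formula are hypothetical, chosen only to illustrate the composite-indicator idea:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Sketch of a composite-indicator tree: a composite KPI is computed from
# its component indicators via a formula, while leaf indicators carry
# measured values. All names, values, and the formula are hypothetical.

@dataclass
class Indicator:
    name: str
    components: Optional[list["Indicator"]] = None
    formula: Optional[Callable[[list[float]], float]] = None
    value: Optional[float] = None  # leaf indicators carry measured values

    def compute(self) -> float:
        if self.components is None:  # leaf: return the measured value
            assert self.value is not None, f"{self.name} has no value"
            return self.value
        # composite: recursively compute components, then apply the formula
        return self.formula([c.compute() for c in self.components])

if __name__ == "__main__":
    uptime = Indicator("Uptime", value=480.0)
    downtime = Indicator("Downtime", value=20.0)
    availability = Indicator(
        "Availability",
        components=[uptime, downtime],
        formula=lambda v: v[0] / (v[0] + v[1]),
    )
    print(availability.compute())  # 480 / 500 = 0.96
```

A partial specification, in this view, would correspond to a subtree whose formula or leaf values are not yet filled in.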

1.1.4 Project motivation

The motivation for the research was the project “Key Performance Indicators (KPIs) for control and management of maintenance process through eMaintenance (in Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)”, initiated and financed by a Swedish mining company. The main purpose of this project is to propose an integrated KPI system for the mining company’s maintenance process through eMaintenance, including development, implementation, assessment, and optimization.

The company’s maintenance policy focuses on full capacity utilization, with utilization rate and plant speed considered critical to profitability. The company assesses its success by analysing utilization, availability, and error-rate data gathered by an asset management system called “Plant Performance/IP21”. As a result, most of its existing KPIs measure maintenance performance relative to equipment condition, with only a few KPIs measuring maintenance planning and maintenance execution, as shown in Figure 1.6.
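Such equipment-level measures can be illustrated with a small sketch. The field names and formulas below are assumptions chosen for illustration, not the company’s definitions in Plant Performance/IP21:

```python
# Illustrative only: one common way to compute utilization-related KPIs
# from time counters. The formulas are assumptions for this sketch, not
# the definitions used in Plant Performance/IP21.

def utilization(operating_h: float, scheduled_h: float) -> float:
    """Fraction of scheduled time the equipment actually ran."""
    return operating_h / scheduled_h

def availability(uptime_h: float, downtime_h: float) -> float:
    """Fraction of total time the equipment was able to run."""
    return uptime_h / (uptime_h + downtime_h)

def error_rate(failures: int, operating_h: float) -> float:
    """Failures per operating hour."""
    return failures / operating_h

if __name__ == "__main__":
    print(f"utilization:  {utilization(152.0, 168.0):.3f}")  # 0.905
    print(f"availability: {availability(160.0, 8.0):.3f}")   # 0.952
    print(f"error rate:   {error_rate(3, 152.0):.4f}")       # 0.0197
```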

The company’s mission for its maintenance process is to have equipment functioning in an agreed-upon manner. Getting equipment running at full capacity and in an agreed-upon manner requires in-depth knowledge of the equipment and of the maintenance process. The company has divided its maintenance process into three levels, as shown in Figure 1.6; the numbers in the figure correspond to sections of the maintenance handbook that explain them in detail.

Level 1 encompasses systematically developed maintenance needs to control maintenance work; level 2 encompasses mastering and assembling all work, from the minimum maintenance work to the introduction of continuous improvements, to control the equipment; and level 3 encompasses control over strategic processes.


Figure 1.6 Maintenance process in the mining company²

² Notes in Figure 1.6: PM, Preventive Maintenance; CM, Corrective Maintenance; SIQ, Swedish Institute for Quality.


Simply put, the three levels include introducing working methods, measuring the effect of the work, adjusting the content of the tasks to give the correct effect and working together internally (workplaces, professions) and externally (collaboration with suppliers, other companies and organizations working with the company) to achieve coordination benefits. The company needs an integrated KPI framework because the data gathered are not being optimally used for decision making. Very few KPIs are in use and some KPIs do not work for all three company plants; for example, the speed loss KPI only worked in system KK4 at the time of this report.

In addition to Plant Performance/IP21, the company uses other asset management systems, such as Movex, LIMS and the production ledger, to collect and store data for later analysis and decision making. New KPIs need to be developed and integrated with existing KPIs to measure efficiency in the maintenance process and to support effective decision making that promotes full capacity utilization.

In the studied company, the KPI framework comprises two parts³: technical KPIs (linked to machines) and business KPIs (linked to workflow); the latter are also called soft KPIs, following the company’s business strategies. Soft KPIs affect technical KPIs in the long run and can thus increase or decrease utilization and plant speed. Soft KPIs also affect production KPIs and even the company’s KPIs, as shown in Figure 1.7. The right KPIs can direct decision makers to address maintenance needs and other types of improvement work on equipment, maintenance personnel and the maintenance process as a whole.

 

Figure 1.7 Current KPI structure in the studied company

To improve KPIs and keep them relevant, the company uses the Plan-Do-Check-Act (PDCA) cycle, internally referred to as the “improvement wheel”. The PDCA cycle is similar to Neely’s PDCA cycle for performance measurement shown in Figure 1.3. Both methods help to keep measures alive and relevant.
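The improvement wheel can be sketched in code. Below is a minimal, purely illustrative Plan-Do-Check-Act loop for a single KPI; the KPI, its target and the adjustment rule are invented for the example and are not the company's actual procedure:

```python
# Minimal sketch of the PDCA "improvement wheel" applied to one KPI.
# The KPI, target and adjustment rule are illustrative assumptions.

def pdca(target, measure, act, rounds=3):
    """Run a few Plan-Do-Check-Act iterations and record the measured values."""
    history = []
    for _ in range(rounds):
        planned = target            # Plan: confirm the target level
        value = measure()           # Do: run the process and measure
        gap = planned - value       # Check: compare measurement to target
        if gap > 0:
            act(gap)                # Act: adjust the working method
        history.append(value)
    return history

# Toy usage: a utilization KPI drifting toward a 95% target.
state = {"value": 0.88}
history = pdca(
    target=0.95,
    measure=lambda: state["value"],
    act=lambda gap: state.update(value=state["value"] + 0.5 * gap),
)
print([round(v, 4) for v in history])  # rises toward the target each round
```

Each pass plans (confirms the target), does (measures), checks (computes the gap) and acts (adjusts the method), which is exactly what keeps measures alive and relevant.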

³ The definitions of technical and business (soft) KPIs follow the internal rules of the studied mining company.


Figure 1.8 illustrates the planning, execution, follow-up and analysis of the maintenance plan, as well as continuous improvements. Each section of the cycle shows what is required to achieve the main objective of that part of the cycle.

Figure 1.8 PDCA cycle for maintenance improvement

 

1.2 Problem statement

As discussed above, new multifaceted challenges in the mining company require the development of an integrated KPI framework for maintenance management in an eMaintenance environment. Based on the discussions in Chapter 1 and later in Chapter 2, the problems identified in an initial exploratory study include the following:


Problem 1: Lack of a KPI framework for maintenance management considering both technical and soft KPIs;

Problem 2: Lack of a specified approach to guide implementation of the developed KPI framework through eMaintenance;

Problem 3: Lack of quantitative approaches for the assessment of either technical or soft KPIs;

Problem 4: Lack of an approach for optimizing the developed KPI framework continuously.

1.3 Purpose and objectives

To deal with these problems, the main purpose of this research is to develop and assess an integrated KPI framework for maintenance management in an eMaintenance environment to achieve the overall goals of the organization.

More specifically, the research objectives include:

Objective 1: Developing a KPI framework for maintenance management;

Objective 2: Developing a KPI implementation approach in an eMaintenance environment which can be improved continuously;

Objective 3: Developing novel approaches to assessing both technical and soft KPIs.

The main connections between the problems summarized in Section 1.2, and the research objectives are shown in Table 1.1.

Table 1.1 Connections between problems and objectives

Problems     Objective 1   Objective 2   Objective 3
Problem 1         X
Problem 2         X             X
Problem 3                                     X
Problem 4                       X

 

1.4 Research questions

To achieve the stated purpose and objectives, the following research questions have been formulated:

Research question 1: What is a KPI framework for maintenance management?

Research question 2: How can the developed KPI framework be implemented through eMaintenance?

Research question 3: How can the KPIs be assessed using novel approaches?


Research question 4: How can the developed KPI framework be improved continuously?

The research questions are formulated to achieve the research objectives presented in Section 1.3.

For research question 3, three sub-questions are formulated:

Research question 3.1: How can technical KPIs be assessed using a novel approach?

Research question 3.2: How can technical and soft KPIs be assessed and improved together using a novel approach?

Research question 3.3: How can soft KPIs be assessed using a novel approach?

The main connections between the research questions and research objectives are shown in Table 1.2.

Table 1.2 Connections between RQs and objectives

Research questions (RQs)   Objective 1   Objective 2   Objective 3
RQ1                             X             X
RQ2                                           X
RQ3.1                                                       X
RQ3.2                                                       X
RQ3.3                                                       X
RQ4                                           X

 

1.5 Linkage of research questions to the appended papers

The links between the research questions (RQs), the appended papers and the PhD thesis are presented in Table 1.3. RQ1 is answered in Paper A and Chapter 5.1 of the thesis. RQ2 is explored in Paper A and Chapter 5.2. RQ3.1 is addressed in Paper B; RQ3.2 is addressed in Paper C; RQ3.3 is addressed in Chapter 5.3 of the thesis. Finally, RQ4 is explored in Chapter 5.4 of the thesis.

 

 

 

 


Table 1.3 Linkage of RQs and appended papers

Research questions (RQs)   Paper A   Paper B   Paper C   PhD thesis
RQ1                           X                               X
RQ2                           X                               X
RQ3.1                                    X                    X
RQ3.2                                              X          X
RQ3.3                                                         X
RQ4                                                           X

 

1.6 Scope and limitations

The scope of this research is the study of an integrated KPI framework for maintenance management in an eMaintenance environment. The research covers KPI development, implementation, assessment, and optimization. Specifically, this research develops a four-level KPI framework with 134 indicators for a mining company. It explores the implementation of each proposed KPI in a mining environment. Since the research was motivated and financed by a particular mining company, the technical KPIs (linked to machines) and soft KPIs (linked to workflow) are developed based on the company's business strategies.

The limitations of the thesis are the following:

First, the link-and-effect model is not dealt with in this study;

Second, costs are not considered sufficiently, as other departments are not included in the project;

Third, the emphasis is on developing a new KPI assessment, so the research uses only a few KPIs as examples because of time and project limitations;

Fourth, the proposed KPIs are general; KPIs for different/specified plants, processes and maintenance tasks (e.g., condition monitoring, lubrication, etc.) are not studied separately.

Further work is required to minimize these limitations.

1.7 Authorship of appended papers

The content of this section has been accepted by all authors who contributed to the papers and the thesis. The contribution of each author with respect to the following activities is shown in Table 1.4:

1. Formulating the fundamental ideas of the problem (initial idea and model development);

2. Collecting data;


3. Performing the study;

4. Drafting the paper;

5. Revising important intellectual contents;

6. Giving final approval for submission.

Table 1.4 Authors' contributions to the appended papers and the thesis

Authors                   Paper A   Paper B   Paper C   Thesis
Esi Saari                 1-6       1-6       1-6       1-6
Jing (Janet) Lin          1, 5, 6   1, 5, 6   1, 5, 6   1, 5, 6
Ramin Karim               1, 5, 6   5, 6      5, 6      1, 5, 6
Hunling (Natalie) Sun     3, 5      /         /         /
Liangwei (Levis) Zhang    /         5         5         /
Bin Liu                   /         5         5         /

 

1.8 Outline of thesis

This thesis consists of two parts. The first part summarizes the subject and research and discusses the appended papers, extensions of the research and conclusions. The second part consists of three appended papers.

More specifically, Chapter 1 provides background information on the relevance of this research and its contextual perspective. The chapter introduces the research problem, describes the research purpose, introduces the research questions and explains the scope, limitations and structure. The theoretical framework is presented in Chapter 2. The chapter gives an overview of maintenance performance measurement, KPI implementation through eMaintenance and related areas in the mining industry, and quantitative approaches to KPI assessment. Chapter 3 presents the research methodology, including research design, data collection, literature review and data analysis. Chapter 4 summarizes the three appended publications. Chapter 5 presents the results and discusses the research. Finally, Chapter 6 gives the findings, explains the contribution of the research and suggests future work.

The first appended paper develops a KPI framework to control and monitor the maintenance process to achieve the studied company's overall goals. This KPI framework comprises two parts: technical KPIs (linked to machines) and business KPIs (linked to workflow); the latter are also called soft KPIs and are based on the business strategies of the mining company. The developed KPI framework has four levels. The second level includes: asset operation management, with KPIs measuring maintenance performance relative to equipment condition; maintenance process management, with KPIs measuring the efficiency and effectiveness of the consistent application of


maintenance and maintenance support; and maintenance resources management, with KPIs measuring spare part management, internal maintenance personnel management and external maintenance personnel management. The third level breaks down the items on the second level, and the fourth level contains the KPIs derived from the third-level classifications. In all, the framework includes 134 KPIs to measure maintenance performance and streamline maintenance processes. Twenty-three of these are technical KPIs and 111 are soft KPIs. The paper explores the implementation of the framework through eMaintenance, discussing the timeline and general formula for each KPI. Results from this study will be applied by the studied company through eMaintenance.
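The four-level structure described above can be sketched as a small tree data structure. Everything below is illustrative: the level names follow the text, but the example KPIs and their counts are placeholders, not the 134 indicators of the actual framework.

```python
# Illustrative sketch of the four-level KPI hierarchy. Level names follow
# the thesis text; the example KPIs are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    kind: str          # "technical" (machine-linked) or "soft" (workflow-linked)

@dataclass
class Level:
    name: str
    children: list = field(default_factory=list)   # sub-levels or KPIs

framework = Level("Maintenance management", [                 # level 1
    Level("Asset operation management", [                     # level 2
        Level("Equipment condition", [                        # level 3
            KPI("Availability", "technical"),                 # level 4
            KPI("Speed loss", "technical"),
        ]),
    ]),
    Level("Maintenance process management", [
        Level("Work order flow", [
            KPI("Planned vs. total maintenance tasks", "soft"),
        ]),
    ]),
    Level("Maintenance resources management", [
        Level("Spare part management", [KPI("Stock-out rate", "soft")]),
    ]),
])

def count_kpis(node, kind=None):
    """Recursively count KPIs of a given kind in the hierarchy."""
    if isinstance(node, KPI):
        return 1 if kind in (None, node.kind) else 0
    return sum(count_kpis(c, kind) for c in node.children)

print(count_kpis(framework, "technical"), count_kpis(framework, "soft"))  # → 2 2
```

Representing the framework this way lets technical and soft KPIs be tagged, counted and rolled up per level, which is the kind of structure an eMaintenance implementation needs.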

The second paper proposes a new approach to system availability assessment: a parametric Bayesian approach using Markov Chain Monte Carlo (MCMC), which takes advantage of both analytical and simulation methods. In this approach, Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being "averaged", to better reflect reality and compensate for the limitations of simulation data sample size. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined with a Bayesian Weibull model and a Bayesian lognormal model, respectively. The results show that the proposed approach can integrate analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems).

The third paper proposes a Bayesian approach to system availability assessment. In this novel approach, 1) MTTF and MTTR are treated as distributions instead of being “averaged” to better reflect reality and compensate for the limitations of simulation data sample size, 2) MCMC simulations are applied to take advantage of the analytical and simulation methods, and 3) a threshold is established for Time to Failure (TTF) data and Time to Repair (TTR) data, and new datasets with right-censored data are created to reveal the connections between technical and soft KPIs. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined in a Bayesian Weibull model and a Bayesian lognormal model respectively. By comparing the results with and without considering the threshold for censoring data, we show the threshold can be treated as a monitoring line for continuous improvement in the mining company.


Chapter 2 Theoretical Framework

This chapter presents the theoretical framework of this research through a literature review. The literature cited includes conference proceedings, journals, international standards, and other indexed publications.

2.1 eMaintenance and maintenance decision-making

Maintenance is defined as a combination of all technical, administrative, and managerial actions during the life cycle of an item intended to retain it in, or restore it to, a state in which it can perform the required function (CEN, 2007). Maintenance is not confined to technical actions alone but includes other activities such as management, support planning, preparation, execution, assessment, and improvement (IEC, 2004).

The emergence of Information Technology (IT) has changed the way business is conducted. As in other fields, maintenance has benefited, with eMaintenance emerging in the early 2000s. There are varying definitions of eMaintenance. Tsang (2002) defines it as a maintenance strategy where tasks are managed electronically using real-time item data obtained through digital technologies, such as mobile devices, remote sensing, condition monitoring, knowledge engineering, telecommunications and Internet technologies. He explains that eMaintenance should be considered a model that enhances the efficiency of maintenance activities by applying Information and Communications Technology (ICT) to provide information. Koc and Lee (2001) and Parida and Kumar (2004) define eMaintenance as a predictive maintenance system that provides monitoring and predictive prognostic functions, while Muller, Marquez, and Iung (2008) define it as a support to execute a proactive maintenance decision-making process. Karim (2008) sees eMaintenance as supporting eOperations through remote diagnostics and asset management and through simulation-based optimization and decision-making in a specific organizational eBusiness scenario. Another view of eMaintenance is the integration of all necessary ICT-based tools to optimize costs and improve productivity through the use of Web services (Bangemann et al., 2004, 2006). In this technological approach to eMaintenance, Web service technology is used to facilitate the integration of information sources containing maintenance-relevant content. Kajko-Mattsson, Karim and Mirijamdotter (2011) define eMaintenance as maintenance managed and performed via computing and/or a multidisciplinary domain, based on maintenance and ICT, ensuring that eMaintenance services are aligned with the needs and business objectives of both customers and suppliers during the whole product lifecycle.

eMaintenance facilitates the bi-directional flow of data and information into the decision-making and planning process at all levels (Ucar & Qiu, 2005). The emergence of


eMaintenance has reduced the problem of ineffective information logistics caused by the vast amount of information associated with the maintenance of complex technical industrial systems. Thanks to the convergence of information technology and maintenance, it is now much easier to access hidden information in vast amounts of data stored for other purposes, at different places, in different formats, and generated throughout the entire life cycle of the system (Karim, 2008).

eMaintenance has many other benefits, including energy efficiency, sustainability, safety, quality, and reduced costs (Jantunen, Emmanouilidis, Arnaiz, & Gilabert, 2011). It also offers advanced diagnostics and improved productivity (Kour, Aljumaili, Karim, & Tretten, 2019).

eMaintenance offers enhanced maintenance decision-making by answering the following questions:

Are we doing things right?
Are we doing the right things?
How do we decide the right things?

To answer these questions, maintenance performance measurements (see Section 2.3) are needed.

 

[Figure 2.1 shows Maintenance Analytics as four quadrants: 1. Maintenance Descriptive Analytics (What has happened?); 2. Maintenance Diagnostic Analytics (Why has something happened?); 3. Maintenance Predictive Analytics (What will happen in the future?); 4. Maintenance Prescriptive Analytics (What needs to be done?).]

Figure 2.1 The constitution phases of Maintenance Analytics (Karim et al., 2016)

To support maintenance decision-making smoothly and efficiently, Maintenance Analytics (MA) has been proposed based on four interconnected time-lined phases (see Figure 2.1), which aim to facilitate maintenance actions through enhanced understanding of data and information. The MA phases include: 1) Maintenance Descriptive Analytics; 2) Maintenance Diagnostic Analytics; 3) Maintenance Predictive Analytics; and 4) Maintenance Prescriptive Analytics. eMaintenance implies a wide range


of tools, technologies, and methodologies aimed at maintenance decision-making, including analytics. Hence, eMaintenance can be considered a concept through which MA can be materialised (Karim et al., 2016). By applying MA, KPIs can be monitored in a novel way with high efficiency through KPI ontology and taxonomy in an eMaintenance environment. Therefore, the following information is necessary to create maintenance KPIs:

Content: What KPIs do we need? Why are they needed?
Data source: Where can we find the necessary information?
Timeline: When do we need to measure?
General formula: How do we calculate the KPIs?

While a great deal of work has dealt with maintenance KPIs, few studies have discussed the detailed requirements (content, data sources, timeline, and general formulas) for implementing maintenance KPIs through eMaintenance.
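The four requirements (content, data source, timeline, general formula) can be captured in a machine-readable KPI specification. The following sketch is an assumption about how such a record might look; all field values, including the IP21 source reference, are illustrative:

```python
# A machine-readable KPI specification covering the four requirements named
# in the text: content, data source, timeline and general formula. Field
# values (including the IP21 source reference) are illustrative assumptions.

kpi_spec = {
    "name": "Availability",
    "content": "Share of scheduled time the equipment is able to operate",
    "data_source": "Plant Performance/IP21 downtime records",  # assumed source
    "timeline": "calendar month",
    "formula": lambda uptime_h, downtime_h: uptime_h / (uptime_h + downtime_h),
}

# Usage: evaluate the general formula on one (hypothetical) month of data.
value = kpi_spec["formula"](uptime_h=680.0, downtime_h=40.0)
print(f"{kpi_spec['name']} per {kpi_spec['timeline']}: {value:.3f}")  # → Availability per calendar month: 0.944
```

Keeping all four requirements in one record is what makes a KPI implementable through eMaintenance rather than merely named.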

2.2 Performance measurement

Performance measurement is critical to the success of organizations (Bourne, Melnyk, & Bititci, 2018). Those using a balanced or integrated performance measurement system perform better than those that do not (Lingle & Schiemann, 1996) because performance measures provide an important link between strategies and action and thus support the implementation and execution of improvement initiatives (Muchiri, Pintelon, Gelders, & Martin, 2011).

Performance measurement requires the formulation of Key Performance Indicators (KPIs), a set of measures that focus on those aspects of organizational performance that are most critical for current and future success (Parmenter, 2007). KPIs demonstrate how effectively a company is achieving key business objectives. They evaluate the company’s success in reaching targets and the degree to which areas within the company (e.g., maintenance) achieve their goals.

Many authors have written about performance measurement, including Kaplan and Norton (1992), Neely (1999), Bourne, Mills, Wilcox, Neely, and Platts (2000), Campbell and Reyes-Picknell (2006), Coetzee (1997), Weber and Thomas (2005), Dwight (1995; 1999b), and Tsang (2000).

It is not enough to develop KPIs. The KPIs must be maintained using four fundamental processes: design, plan and build, implement and operate, and refresh. These are shown in Figure 2.2 and described at greater length below.


Figure 2.2 Neely’s fundamental process of performance measurement

Design: This is concerned with understanding what should be measured and defining how it should be measured, i.e. the metric. To achieve the desired ends and encourage the appropriate behaviour, individual measures require precise and careful design. The first step is to create a framework that takes into consideration the company's goals and objectives. This framework shows what will be developed and what will be measured.

Plan and Build: This includes gaining access to the required data, building the measurement system, configuring data manipulation and distribution and overcoming people’s political and cultural concerns about performance measurement. This part helps develop the general formulas for the KPIs suggested in the framework.

Implement and Operate: This involves actually managing the measures using the measurement data to understand what is going on in the organization and applying that insight to drive improvements in performance. The most difficult part of performance measurement is managing the data. When data are acted upon, there will be value in measuring. In this stage, the actual development of the KPIs can begin.

Refresh: This is concerned with the measurement system itself, making sure it is refreshed and refined continuously, and the measures remain relevant to the needs of the company. A performance measurement system is a living entity which must evolve and be nurtured over time. This part is very important and should be considered to keep the KPIs relevant.


Using Neely’s KPI definition guide, 36 questions must be answered to define each KPI, with the questions divided into ten overarching categories. An indicator is considered to be defined when the following categories of questions are answered:

Measurement: 1. What should the measure be called? 2. Does the title explain what the measure is? 3. Is it a title that everyone will understand? 4. Is it clear why the measure is important?

Purpose: 5. Why is the measure being introduced? 6. What is the aim/intention of the measure? 7. What behaviours should the measure encourage?

Relationships: 8. Which other measures does this one closely relate to? 9. What specific strategies or initiatives does it support?

Metric/Formula: 10. How can this dimension of performance be measured? 11. Can the formula be defined in mathematical terms? 12. Is the metric/formula clear? 13. Does the metric/formula explain exactly what data are required? 14. What behaviour is the metric/formula intended to induce? 15. Is there any other behaviour that the metric/formula should induce? 16. Is there any dysfunctional behaviour that might be induced? 17. Is the scale being used appropriately? 18. How accurate will the data generated be? 19. Are the data accurate enough? 20. If an average is used how much data will be lost? 21. Is the loss of “granularity” acceptable? 22. Would it be better to measure the spread of performance?

Target level(s): 23. What level of performance is desirable? 24. How long will it take to reach this level of performance? 25. Are interim milestone targets required? 26. How do these target levels of performance compare with competitors? 27. How good is the competition currently? 28. How fast is the competition improving?

Frequency: 29. How often should this measure be made? 30. How often should this measure be reported? 31. Is this frequency sufficient to track the effect of actions taken to improve?

Source of data: 32. Where will the data tracking this measure come from?

Who measures: 33. Who, by name, function or external agency, is actually responsible for collecting, collating and analysing these data?

Who acts on the data (owner): 34. Who, by name or function, is actually responsible for initiating actions and ensuring performance along this dimension improves?

What they do: 35. How exactly will the measure owner use the data? 36. What actions will they take to ensure performance along this dimension improves?
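Per this guide, a KPI is "defined" only when every one of the ten categories has been answered; that completeness check can be sketched as follows (the data structure and the toy answers are invented for illustration):

```python
# Sketch: a KPI counts as "defined" per Neely's guide only when every
# category of questions has been answered. Category names follow the text;
# the answer data are toy values.

CATEGORIES = [
    "Measurement", "Purpose", "Relationships", "Metric/Formula",
    "Target level(s)", "Frequency", "Source of data",
    "Who measures", "Who acts on the data (owner)", "What they do",
]

def is_defined(answers):
    """True if every category has at least one non-empty answer."""
    return all(answers.get(c, "").strip() != "" for c in CATEGORIES)

draft = {c: "answered" for c in CATEGORIES[:-1]}   # one category missing
print(is_defined(draft))                           # → False
draft["What they do"] = "Owner reviews monthly and triggers actions"
print(is_defined(draft))                           # → True
```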

Although Neely’s KPI definition guide is detailed, it has drawbacks. For one thing, the method is very time-consuming.

2.3 Maintenance performance measurement

The influence of maintenance on profitability is too high to ignore (Kumar & Ellingson, 2000). With reduced natural resource reserves, e.g. iron ore, oil and gas, and the unstable prices of these resources on the global market, the process industries working with these resources, such as mining companies, must optimise the maintenance process (Kumar & Ellingson, 2000). Because maintenance performs a service function for production, its merits or shortcomings are not always immediately apparent (Muchiri et al., 2011), but it must be measured for companies to remain profitable. This requires the development and use of a suitable set of KPIs.

Some authors have looked specifically at maintenance performance measurement (MPM), including Parida and Chattopadhyay (2007), Kumar, Galar, Parida, Stenström, and Berges (2013), and Stenström (2014). These authors proposed measuring the performance of maintenance by focusing on the maintenance process or on the maintenance results (Kumar et al., 2013).

Dwight (1999a) suggested a “value-based performance measurement”, a system audit approach to measuring the maintenance system’s contribution to organizational success. His approach takes into account the impact of maintenance activities on the future value of the organization, with an emphasis on variations in the lag between actions and outcomes.

Tsang (1998) proposed a strategic approach to managing maintenance performance using a balanced scorecard (Kaplan and Norton, 1992; Kaplan and Norton, 1996). However, the success of the balanced scorecard approach depends on how individual companies use it.

Löfsten (2000) advocated the use of aggregated measures like the maintenance productivity index, which measures the ratio of maintenance output to maintenance input. But Muchiri et al. (2011) say Löfsten's approach gives a very limited


view of maintenance performance, since it is difficult to quantify the different types of maintenance inputs.

Parida and Chattopadhyay (2007) proposed a multi-criteria hierarchical framework for MPM; the framework includes multi-criteria indicators for each level of management, i.e. the strategic, tactical and operational levels. These multi-criteria indicators are categorized as equipment-/process-related (e.g. capacity utilization, OEE, availability, etc.), cost-related (e.g. maintenance cost per unit of production cost), maintenance-task-related (e.g. the ratio between planned and total maintenance tasks), customer and employee satisfaction-related, and health, safety and the environment-related, with indicators proposed for each level of management in each category.

Al-Najjar (2007) designed a model to describe and quantify the impact of maintenance on a business’s key competitive objectives related to production, quality and cost. The model can be used to assess the cost effectiveness of maintenance investment and provide strategic decision support for different improvement plans.

Muchiri et al. (2011) proposed an MPM system based on the maintenance process and maintenance results. These authors sought to align maintenance objectives with manufacturing and corporate objectives and provide a link between maintenance objectives, maintenance process/efforts and maintenance results. Based on this conceptual framework, they identified performance indicators of the maintenance process and maintenance results for each category. Their conceptual framework provides a generic approach to developing maintenance performance measures with room for customization for individual company needs.

The above proposals are based on both new and existing techniques; some are quantitative and others are qualitative. At this point, there is no integrated approach to measuring the performance of all components of maintenance. In addition, few studies consider the implementation of a KPI framework through eMaintenance; few discuss data sources or databases, timelines, or general formulas for specified KPIs.

2.4 KPI assessment

To develop novel approaches to assessing both technical and soft KPIs, this study selected system availability as an example for illustration. This section focuses on system availability assessment and quantitative approaches that can be used to assess and predict soft KPIs.

2.4.1 System availability assessment

Availability represents the proportion of a system's uptime out of the total time in service and is one of the most critical aspects of performance evaluation. Availability is commonly quantified using Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR). However, these "mean" values are normally "averaged"; thus, some useful information


(e.g., trends, system complexity) may be neglected, and some problems may even be hidden.
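As a numeric illustration, the familiar steady-state formula A = MTTF / (MTTF + MTTR) yields a single point estimate, while the per-cycle values show the spread that averaging conceals. The observed times below are invented:

```python
# The usual point estimate A = MTTF / (MTTF + MTTR) versus the per-cycle
# spread that averaging conceals. The observed times below are invented.
import statistics

ttf_hours = [120.0, 310.0, 95.0, 400.0, 75.0]   # times to failure
ttr_hours = [6.0, 14.0, 4.0, 20.0, 6.0]         # matching times to repair

mttf = statistics.mean(ttf_hours)                # 200.0
mttr = statistics.mean(ttr_hours)                # 10.0
a_mean = mttf / (mttf + mttr)                    # single "averaged" number

# Per-cycle availability exposes variation (and possible trends) that the
# single point estimate hides.
a_cycles = [u / (u + d) for u, d in zip(ttf_hours, ttr_hours)]
print(round(a_mean, 3), round(min(a_cycles), 3), round(max(a_cycles), 3))  # → 0.952 0.926 0.96
```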

Assessment of system availability has been studied from the design stage to the operational stage in various system configurations (e.g., in series, parallel, k-out-of-n, stand-by, multi-state, or mixed architectures). Approaches to assessing system availability mainly use either analytic or simulation techniques.

In general, analytic techniques represent the system using direct mathematical solutions from applied probability theory to make statements about various performance measures, such as the steady-state availability or the interval availability (Dekker & Groenendijk, 1995; Ocnasu, 2007). Researchers tend to use Markov models to assess dynamic availability or semi-Markov models using Laplace transforms to determine average performance measures (Dekker & Groenendijk, 1995; Faghih-Roohi, et al., 2014). However, such approaches have been criticised as too restrictive to tackle practical problems; they assume constant failure and repair rates, which is rarely the case in the real world (Raje, et al., 2000; Marquez, et al., 2005). Furthermore, the time-dependent availability obtained under a Markovian assumption is not valid for non-Markovian processes (Raje, et al., 2000).

Simulation techniques estimate availability by simulating the actual process and random behaviour of the system. The advantage is that non-Markov failures and repair processes can be modelled easily (Raje, et al., 2000). Researchers are currently working on developing Monte Carlo techniques to model the behaviour of complex systems under realistic time-dependent operational conditions (Marquez, et al., 2005; Marquez & Iung, 2007; Yasseri & Bahai, 2018) or to model multi-state systems with operational dependencies (Zio, et al., 2007). Although simulation is more flexible, it is computationally expensive.
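A minimal Monte Carlo sketch of this idea, assuming a single repairable unit with Weibull times to failure (a non-constant failure rate) and lognormal repair times; all parameter values are invented:

```python
# Monte Carlo sketch of availability for one repairable unit with Weibull
# times to failure (non-constant failure rate) and lognormal repair times,
# i.e. a non-Markovian case. All parameter values are invented.
import random

random.seed(1)

def simulate_availability(horizon_h, shape, scale, mu, sigma):
    """Simulate alternating up/down cycles; return the uptime fraction."""
    t, up = 0.0, 0.0
    while t < horizon_h:
        ttf = random.weibullvariate(scale, shape)   # scale first, then shape
        up += min(ttf, horizon_h - t)
        t += ttf
        if t >= horizon_h:
            break
        t += random.lognormvariate(mu, sigma)       # repair duration
    return up / horizon_h

runs = [simulate_availability(10_000.0, shape=1.8, scale=200.0, mu=2.0, sigma=0.5)
        for _ in range(500)]
print(round(sum(runs) / len(runs), 3))  # estimated long-run availability
```

Repeating the simulation many times yields a distribution of interval availabilities rather than a single number, at the computational cost the text notes.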

Traditionally, Bayesian approaches have been used to assess system availability, as they can handle complicated system state changes and reduce the need for computationally expensive simulation data; however, their development and application have been stalled by strict assumptions on prior forms and by computational difficulties, and research has been more concerned with the selection of the prior or the computation of the posterior than with reality (Brender, 1968; Kuo, 1985; Sharma & Bhutani, 1993; Khan & Islam, 2012). The recent proliferation of Markov Chain Monte Carlo (MCMC) simulation techniques has led to the use of Bayesian inference in a wide variety of fields. Because MCMC handles high-dimensional numerical integration (Lin, 2014), the selection of prior information and the description of reliability/maintainability can be more flexible and more realistic.
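As a toy illustration of MCMC-based Bayesian estimation, the following random-walk Metropolis sketch samples the posterior of an exponential failure rate under a flat prior; it is not the Weibull/lognormal model used in the appended papers, and the data and tuning values are invented:

```python
# Random-walk Metropolis sketch: sample the posterior of an exponential
# failure rate (flat prior) without evaluating the normalising integral.
# Data and tuning values are invented; the appended papers use richer
# Weibull/lognormal models.
import math
import random

random.seed(7)
data = [110.0, 85.0, 160.0, 40.0, 95.0]   # observed lifetimes (hours)

def log_post(theta):
    """Log posterior up to a constant: exponential likelihood, flat prior."""
    if theta <= 0.0:
        return -math.inf
    return len(data) * math.log(theta) - theta * sum(data)

theta, samples = 0.01, []
for _ in range(20_000):
    prop = theta + random.gauss(0.0, 0.002)          # propose a nearby value
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept the proposal
    samples.append(theta)

burned = samples[5_000:]                             # drop burn-in
print(round(sum(burned) / len(burned), 4))           # posterior mean of the rate
```

The chain only needs the posterior up to a constant, which is exactly why MCMC sidesteps the computational difficulties mentioned above.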

2.4.2 Some quantitative approaches

This section introduces some quantitative approaches used in the research. Bayesian survival analysis with MCMC is proposed as a novel approach to assessing system availability; time series analysis, Croston's method, and the bootstrap method are proposed as methods to assess and predict soft KPIs.

2.4.2.1 Bayesian survival analysis with MCMC

Bayesian theory comes from "An essay towards solving a problem in the doctrine of chances" by Bayes (1958). In this paper, Bayes proposed that, based on an observed data set D, any unknown parameter θ can be viewed as a random variable. The probability distribution π(θ) used to describe θ must represent the prior information that exists before sampling; it is called the prior distribution. Given the sample likelihood function L(θ|D) and the prior distribution π(θ), the posterior distribution for θ is

π(θ|D) = L(θ|D)π(θ) / ∫ L(θ|D)π(θ) dθ    (2.4.1)

Important discussions of Bayesian theory include Box and Tiao (1992), Press (1991) and Gelman et al. (2004).

Survival analysis is a method used to study time-to-event data. In survival analysis, the survival function S(t) is in fact the reliability function R(t), which can be defined as

R(t) = S(t) = P(T > t) = 1 − P(T ≤ t) = 1 − F(t)    (2.4.2)

where R(0) = 1 and R(∞) = 0. Here, F(t) is the distribution function of T. The relationship between the hazard function h(t) and the reliability function R(t) is

h(t) = lim_{Δt→0} P(t ≤ T < t + Δt | T ≥ t) / Δt = −(d/dt) log R(t)    (2.4.3)

In practice, lifetime data are usually incomplete, and only a portion of the individual lifetimes of assets are known. Right-censored data are often called Type I censoring in the literature; the corresponding likelihood construction problem has been extensively studied. Suppose there are n individuals whose lifetimes and censoring times are independent. The i-th individual has lifetime T_i and censoring time L_i. The T_i are assumed to have probability density function f(t) and reliability function R(t). The exact lifetime T_i of an individual is observed only if T_i ≤ L_i. Lifetime data involving right censoring can then be conveniently represented by n pairs of random variables (t_i, v_i), where t_i = min(T_i, L_i), and v_i = 1 if T_i ≤ L_i and v_i = 0 if T_i > L_i.


v_i indicates whether the lifetime T_i is censored or not. The likelihood function is deduced as

L(t) = ∏_{i=1}^{n} f(t_i)^{v_i} R(t_i)^{1−v_i}    (2.4.4)

Related references include Lawless (1982), Hougaard (2000), Cox and Oakes (1984) and Therneau and Grambsch (2000).
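As an illustration, the likelihood in Eq. (2.4.4) can be evaluated in log form for a Weibull baseline; the shape/scale parameterisation and the toy data below are assumptions made for this sketch, not values from the thesis:

```python
import math

def weibull_censored_loglik(data, shape, scale):
    """Log of Eq. (2.4.4): sum of v_i * log f(t_i) + (1 - v_i) * log R(t_i).

    `data` holds (t_i, v_i) pairs with t_i = min(T_i, L_i) and
    v_i = 1 for an observed failure, v_i = 0 for a right-censored time.
    """
    ll = 0.0
    for t, v in data:
        z = (t / scale) ** shape          # (t/scale)^shape, so log R(t) = -z
        if v == 1:                        # observed failure: contribute log f(t)
            ll += math.log(shape / scale) + (shape - 1.0) * math.log(t / scale) - z
        else:                             # censored: the unit survived past t
            ll += -z
    return ll

# Toy data: three observed failures and one unit censored at t = 9.0
data = [(2.0, 1), (5.5, 1), (7.1, 1), (9.0, 0)]
print(weibull_censored_loglik(data, shape=1.5, scale=6.0))
```

A purely censored observation contributes only log R(t); for instance, a single unit censored at the scale value with shape 1 contributes exactly −1.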

The recent proliferation of Markov Chain Monte Carlo (MCMC) approaches has led to the use of the Bayesian inference in a wide variety of fields. MCMC is essentially Monte Carlo integration using Markov chains (Lin, 2014; Lin, 2016). Monte Carlo integration draws samples from the required distribution and then forms sample averages to approximate expectations. MCMC draws out these samples by running a cleverly constructed Markov chain for a long time. There are many ways of constructing these chains.

The Gibbs sampler is one of the best-known MCMC sampling algorithms in the Bayesian computational literature. It adopts a "divide and conquer" strategy: when one parameter is updated, the other parameters are assumed to be fixed and known. Let θ = (θ_1, …, θ_k) be a k-dimensional vector of parameters, and let f(θ_j | θ_1, …, θ_{j−1}, θ_{j+1}, …, θ_k) denote the full conditional distribution of the j-th parameter. The basic scheme of the Gibbs sampler for sampling from p(θ) is given as follows:

Step 1. Choose an arbitrary starting point θ^(0) = (θ_1^(0), …, θ_k^(0));

Step 2. Generate θ_1^(1) from the conditional distribution f(θ_1 | θ_2^(0), …, θ_k^(0)), and generate θ_2^(1) from the conditional distribution f(θ_2 | θ_1^(1), θ_3^(0), …, θ_k^(0));

Step 3. Generate θ_j^(1) from f(θ_j | θ_1^(1), …, θ_{j−1}^(1), θ_{j+1}^(0), …, θ_k^(0));

Step 4. Generate θ_k^(1) from f(θ_k | θ_1^(1), …, θ_{k−1}^(1)); the one-step transition from θ^(0) to θ^(1) = (θ_1^(1), …, θ_k^(1)) is then complete, where θ^(1) is one realization of the Markov chain;

Step 5. Go to Step 2.

After t iterations, θ^(t) = (θ_1^(t), …, θ_k^(t)) can be obtained, as can each of its components. Starting from different θ^(0), as t → ∞, the marginal distribution of θ^(t) can be viewed as the stationary distribution based on the theory of the ergodic average. The chain is then seen as having converged, and the sampled points are treated as observations of the target distribution.
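The scheme above can be sketched on a toy target; a standard bivariate normal with correlation ρ = 0.8 is assumed here purely for illustration, because both full conditionals are then univariate normal:

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=1000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The full conditionals are univariate normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    """
    rng = random.Random(seed)
    sd = (1.0 - rho ** 2) ** 0.5
    x, y = 0.0, 0.0              # Step 1: arbitrary starting point
    samples = []
    for t in range(n_iter):
        # Steps 2-4: update each component from its full conditional
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if t >= burn_in:         # keep draws only after the burn-in
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
n = len(samples)
mean_x = sum(s[0] for s in samples) / n
mean_xy = sum(s[0] * s[1] for s in samples) / n   # estimates E[xy] = rho
print(round(mean_x, 2), round(mean_xy, 2))        # roughly 0.0 and 0.8
```

After the burn-in, the ergodic averages of the retained draws approximate expectations under the target distribution, which is exactly how the Gibbs output is used in this research.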

Bayesian survival analysis has been developed for small samples to make the most of prior information in applications, especially when dealing with incomplete (truncated or censored) data. It has received much recent attention with advances in computational and modelling techniques. As regression models come to consider different environmental factors and the simulation of the parameters' posterior distributions becomes easier, the theory of Bayesian survival analysis will become better developed.


2.4.2.2 Time series analysis for continuous demand forecasting

Continuous demand forecasting can be modelled using time series analysis. A time series is a set of observations recorded at specific times; it is useful for serially correlated data. Time series forecasting is used in statistics, finance, econometrics, weather forecasting, earthquake prediction, etc. A continuous time series is obtained when observations are made continuously over a certain time interval. One popular continuous forecasting method is the Autoregressive Moving Average (ARMA) model, also referred to as the Box-Jenkins model after the two authors central to its development (Box & Jenkins, 1968).

An ARMA(p, q) model comprises two parts:

1. An AR(p) process:

X_t = c + Σ_{i=1}^{p} φ_i X_{t−i} + ε_t    (2.4.5)

where c is a constant, the φ_i are parameters of the model and ε_t is random noise.

2. An MA(q) process:

X_t = μ + Σ_{i=1}^{q} θ_i ε_{t−i} + ε_t    (2.4.6)

where μ is a constant, and the θ_i are parameters.

These two combine to give the ARMA(p, q) model:

X_t = c + ε_t + Σ_{i=1}^{p} φ_i X_{t−i} + Σ_{i=1}^{q} θ_i ε_{t−i}    (2.4.7)

Thus, the ARMA(p, q) model allows points in a time series to be modelled as dependent on the previous p points (the autoregressive part) and on the previous q residuals (the moving-average part).

To fit an ARMA(p, q) model, the time series data must generally be stationary; i.e., they must have a constant mean, constant variance and constant covariance, irrespective of time. This is not always the case for time series data.

An extended version of the ARMA(p, q) method, the ARIMA(p, d, q) method, exists for such data. ARIMA stands for Autoregressive Integrated Moving Average. The d stands for differencing, a transformation applied to time-series data to make them stationary. Differencing is mathematically expressed as y′_t = y_t − y_{t−1}; it removes the changes in the level of a time series, thus eliminating trend and seasonality and consequently stabilizing the mean of the time series.
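The differencing transformation can be sketched as follows; applying it d times implements the "I" part of ARIMA(p, d, q):

```python
def difference(series, d=1):
    """Apply d-th order differencing: y'_t = y_t - y_{t-1}, repeated d times."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

# A series with a linear trend is non-stationary in the mean;
# one differencing pass removes the trend entirely.
trend = [2 * t + 3 for t in range(6)]   # 3, 5, 7, 9, 11, 13
print(difference(trend))                # [2, 2, 2, 2, 2]

# A quadratic trend needs d = 2 before the mean becomes constant.
print(difference([t * t for t in range(5)], d=2))   # [2, 2, 2]
```

Each pass shortens the series by one observation, which is why d is kept as small as the data allow in practice.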

2.4.2.3 Croston's method for intermittent demand forecasting

Intermittent demand time series refer to items that are requested infrequently, resulting in sporadic demand with periods of zero demand (Kourentzes, 2014). A popular approach to forecasting such demand is Croston's method and its variants. Croston's method is an ad hoc method with no properly formulated underlying stochastic model and, as such, is inconsistent with the properties of intermittent demand data (Shenstone & Hyndman, 2005). Yet forecasts and prediction intervals based on the models underlying it are very useful when predicting intermittent demand (Shenstone & Hyndman, 2005).

Croston's method was proposed by Croston in 1972. The method estimates demand probability using the time-interval series and the demand series separately, making it more intuitive and accurate. It is calculated as follows:

Let Z_t be the estimate of the mean non-zero demand size at time t, V_t the estimate of the mean interval between non-zero demands, X_t the actual demand observed at time t, q the current number of consecutive zero-demand periods, Y_t the estimate of mean demand per period considering zero demands, and α the smoothing parameter. Then:

If X_t > 0:

Z_t = αX_t + (1 − α)Z_{t−1}
V_t = αq + (1 − α)V_{t−1}
Y_t = Z_t / V_t    (2.4.8)

Otherwise, if X_t = 0:

Z_t = Z_{t−1}
V_t = V_{t−1}
Y_t = Y_{t−1}    (2.4.9)
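The update rules in Eqs. (2.4.8) and (2.4.9) can be sketched as below; initialising Z and V from the first non-zero demand is an illustrative choice for this sketch rather than part of Croston's original specification:

```python
def croston(demand, alpha=0.1):
    """Croston's method: smooth the non-zero demand sizes (Z) and the
    intervals between them (V) separately; the per-period forecast is Z / V.
    """
    first = next((i for i, x in enumerate(demand) if x > 0), None)
    if first is None:
        return 0.0                # no demand observed at all
    Z = float(demand[first])      # smoothed non-zero demand size
    V = float(first + 1)          # smoothed inter-demand interval (init choice)
    q = 1                         # periods since the last non-zero demand
    for x in demand[first + 1:]:
        if x > 0:                 # Eq. (2.4.8): update Z and V, reset q
            Z = alpha * x + (1.0 - alpha) * Z
            V = alpha * q + (1.0 - alpha) * V
            q = 1
        else:                     # Eq. (2.4.9): carry the estimates forward
            q += 1
    return Z / V                  # estimated mean demand per period

# Intermittent series: occasional demands separated by zero-demand periods
series = [0, 0, 4, 0, 0, 0, 6, 0, 5, 0, 0, 7]
print(round(croston(series, alpha=0.2), 3))   # 1.69
```

Note that Z and V are updated only in periods with non-zero demand, which is precisely what distinguishes the method from ordinary exponential smoothing on the raw series.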

Even though Croston's method has been demonstrated to give good, useful and robust forecasts in both empirical experiments and practical use, it is biased, it lacks independent smoothing parameters for demand size and interval size, it assumes demand size and demand interval are independent, and it offers no way to deal with product obsolescence.

As a result of Croston's limitations, variants such as the Syntetos-Boylan Approximation (SBA) method, the Shale-Boylan-Johnston (SBJ) method and the Teunter, Syntetos and Babai (TSB) method have been proposed. Syntetos and Boylan (2001) claimed to have removed the bias associated with the original Croston method, thus improving its accuracy. Shale, Boylan and Johnston (2006) considered a Poisson process for the arrival of orders. Teunter, Syntetos and Babai (2011) updated the probability of demand continuously; their method is useful for products nearing the end of their life cycle.

2.4.2.4 Bootstrap method for slow moving demand forecasting

Slow moving demand time series can be forecast with a bootstrapping technique proposed for this purpose by Willemain, Smart and Schwartz (2004). The method performs random sampling with replacement on previous observations of non-zero demand to forecast demand over a lead time, i.e., the interval between placing a replenishment order and its arrival. This prediction method is particularly useful when the sample is relatively small and difficult to predict accurately on the basis of an assumed distribution of the data.

Bootstrapping generates tens of thousands of demand scenarios over the lead-time period based on the originally small sample and predicts the distribution and average demand of the original time series from the distribution of the newly generated data. The prediction results include both the confidence level and the average demand within the period, making it a better method for risk prediction.

The advantage of bootstrapping is that it does not assume a theoretical distribution for the samples. Instead, it performs simulation calculations on empirical data to obtain a large sample that mimics the original small-sample distribution. The size of the generated sample is determined by the specific project and can be adjusted according to the complexity of the algorithm and the efficiency of execution.
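A minimal sketch of this idea follows; it is simplified in that it omits the Markov-chain modelling of zero/non-zero periods and the jittering step of the full Willemain, Smart and Schwartz procedure, and the demand history is invented for illustration:

```python
import random

def bootstrap_lead_time_demand(history, lead_time, n_boot=10000, seed=7):
    """Resample past per-period demands with replacement, sum them over the
    lead time, and summarise the simulated lead-time demand distribution.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_boot):
        # One simulated lead time: draw `lead_time` periods from history
        totals.append(sum(rng.choice(history) for _ in range(lead_time)))
    totals.sort()
    mean = sum(totals) / n_boot
    p95 = totals[int(0.95 * n_boot)]   # 95th percentile of lead-time demand
    return mean, p95

# Slow moving demand history: mostly zeros with occasional small demands
history = [0, 0, 1, 0, 2, 0, 0, 0, 1, 0, 0, 3]
mean, p95 = bootstrap_lead_time_demand(history, lead_time=4)
print(round(mean, 2), p95)
```

The sorted totals give both the average lead-time demand and any percentile of interest, which is why the method yields a confidence statement rather than only a point forecast.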

2.5 Summary of research framework

Theories on maintenance performance measurement have some gaps. First, no KPI framework considers both technical and soft KPIs (see RQ1). Second, few studies have focused on implementing KPI measurement through eMaintenance, and they do not discuss data sources or databases, timelines, or general formulas for the specified KPIs (see RQ2).

Theories on performance measurement provide the basis for developing a KPI framework in this research. Neely's fundamental process of performance measurement supports continuous improvement of the developed KPIs, from design, to plan and build, to implementation and operation, and on to refreshment. Although Neely's KPI guide is detailed, it has drawbacks; notably, the method is very time-consuming (see RQ4).


There are also gaps in KPI assessment (see RQ3). The current analytical (e.g., Markov/semi-Markov) or simulation approaches (e.g., Monte Carlo simulation-based) cannot handle complicated state changes or are computationally expensive. There is a need to develop novel approaches (see RQ3.1). In addition, few researchers have revealed the connections between technical and soft KPIs (see RQ3.2). For those soft KPIs for which the distribution of data collected from eMaintenance systems (e.g., work orders) is not easily determined, we could apply time series analysis if the data are “fast moving”, the Croston method if the data are “intermittent” or the bootstrap method if the data are “slow moving” (see RQ3.3).

 

Figure 2.2 Theoretical framework in this research

 

The connections between the research publications and the RQs in the theoretical framework can be found in Figure 2.2.

 

 


Chapter 3 Research Methodology

This chapter presents the research methodology, including research design, data collection, literature review, and data analysis.

3.1 Research design

This section presents the design of this research. As shown in Figure 3.1, the research can be divided into three stages.

The motivation for the research originated in the project "Key Performance Indicators (KPI) for control and management of the maintenance process through eMaintenance (in Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)", initiated and financially supported by LKAB. More specifically, LKAB is cooperating with LTU's eMaintenance Lab to develop an integrated KPI framework to control and monitor its maintenance process and thereby achieve its overall organizational goals. The new KPI framework is expected to comprise two parts: technical KPIs (linked to machines) and business KPIs (linked to workflow); the latter are also called soft KPIs, as they are based on the business strategies of the mining company. LKAB and the eMaintenance Lab at LTU jointly conducted an exploratory study to lay the foundations for further study.

The first stage of the project included a literature review and interviews with infrastructure managers and maintenance engineers at LKAB, researchers working on KPI development in EU projects, etc. The interviews, combined with the literature review, revealed the research gaps in LKAB's current KPI framework development, implementation, assessment, and optimization and allowed the formulation of a problem statement. This, in turn, guided the formulation of the research purpose, objectives and four research questions: RQ1, RQ2, RQ3 and RQ4. RQ3 includes three sub-questions. The second stage of the project, exploratory research, examined technical and "soft" KPIs and the connections between them.

In the third stage, the work drew on both descriptive and explanatory research to construct an integrated KPI framework for maintenance management in an eMaintenance environment. The framework includes asset operation management, maintenance process management and maintenance resources management. In all, 134 technical and soft KPIs are proposed to measure maintenance performance and streamline maintenance processes: asset operation management has 23 technical KPIs; maintenance process management has 85 soft KPIs; maintenance resources management has 26 soft KPIs. The novel framework was applied and validated in case studies in the mining company. The case studies indicate that the integrated KPI framework will allow the overall business goals to be reached and the system to be optimized continuously.

Generally speaking, the first stage revealed the research gaps, the second stage analysed them, and the third stage resolved the research problems and filled the research gaps.

 

Figure 3.1 Design of the research

[Figure 3.1 depicts the three-stage research design. The research background (project ideas motivated by LKAB and the eMaintenance Lab, the literature review, and the interviews) leads to the research objectives and questions: RQ1: What is the KPI framework? RQ2: How can it be implemented? RQ3: How can it be assessed? RQ4: How can it be improved continuously? Exploratory research and then descriptive and explanatory research answer these questions and fill the research gaps, producing the integrated KPI framework (asset operation management, maintenance process management, maintenance resources management), its implementation through eMaintenance, new assessment approaches, and optimization approaches, validated in case studies under continuous improvement toward the overall business goal.]


3.2 Data collection

This section explains data collection from interviews, documents received from the mining company, and other data sources.

3.2.1 Interviews

Three groups of people were interviewed: infrastructure managers and maintenance engineers at LKAB, and researchers working on KPI development in other industries. Interviews were conducted throughout the research.

At the start of the research, the interviews took advantage of the experience of maintenance personnel to identify the research gaps and formulate the research purpose, objectives and questions.

The purpose of the later interviews was to consult experts on the research approach, discussion, conclusions and further research work.

3.2.2 Documents from the mining company

Internal LKAB documents were consulted to understand the company’s overall business goals, current KPI structure, maintenance process, KPI development plan, etc. These documents formed the foundation of the research. Those without confidentiality problems are mentioned in Chapter 1, Chapter 5 and appended publications.

3.2.3 Data sources: operation and maintenance data

Historical data were collected from LKAB. Status monitoring data were gathered from Plantperformance/IP21. Historical maintenance data were collected from Movex, and economic data were collected from the total report for operation and maintenance (TDU).

Time to Failure (TTF) data and Time to Repair (TTR) data for paper B and paper C were collected for five balling drums from January 2013 to December 2018.

3.3 Literature review

The literature review drew on scientific publication databases such as Scopus, Web of Science and Google Scholar. Various types of references were reviewed, including conference and journal papers, monographs, theses, standards and technical reports. Secondary references were reviewed in some cases. A summary of the results of these literature reviews is given in Chapter 2, "Theoretical framework", and applied in Chapter 5, "Results and discussions". More details can be found in the appended papers.


3.4 Data analysis

After data collection, the next step was to analyse the data to produce information, knowledge and insights. During this step, data were categorized, cleaned, transformed, inspected and modelled.

3.4.1 Data analysis in Paper B

Paper B describes a case study illustrating system availability assessment using a parametric Bayesian approach. The main steps of data analysis in Paper B follow the procedure shown in Table 3.1.

Table 3.1 Steps in the system availability assessment of Paper B

Step 1. Configuration definition. Purpose: determine the system configuration and dependencies to calculate system availability. Output in this case: a five-balling-drum system, parallel and independent.

Step 2. Data collection. Purpose: collect reliability and maintenance data (and information). Output: 1774 failure and repair records for the five balling drums, collected from 2013 to 2018.

Step 3. Data preparation. Purpose: clean the data and remove outliers as needed. Output: null values removed and abnormal data checked.

Step 4. Preliminary analysis. Purpose: perform pre-studies of the TTF and TTR data to decide the baseline distributions. Output: TTF fitted a Weibull distribution; TTR fitted a lognormal distribution.

Step 5. Parametric Bayesian model building. Purpose: define the prior distributions and develop the analytic models. Output: a Bayesian Weibull model for TTF with gamma priors and a Bayesian lognormal model for TTR with gamma and normal priors.

Step 6. MCMC simulation. Purpose: define the burn-in and implement the MCMC simulation; check convergence diagnostics and the Monte Carlo error to confirm the effectiveness of the results. Output: a burn-in of 1,000 samples with an additional 10,000 Gibbs samples for each Markov chain.

Step 7. Results and analysis. Purpose: results, calculation and discussion. Output: results for the parameters of interest in the system availability assessment.

Paper B is motivated by a balling drum system in the mining industry. The case study mine has five balling drums. All five receive their feed for production in the same manner, and each balling drum is expected to produce the same amount of pellets at its maximum. According to their working mechanism and an i.i.d. test, they are considered independent; if one breaks down, it does not affect the rest, except that total production will be reduced. One assumption is made here: the system fails only if all subsystems fail; therefore, it is treated as a parallel system.

There are 1782 records. In the first step, the null values are removed, and the data are reduced to 1774 records.

The next step reveals there are different causes behind the TTF and TTR of the individual balling drums. It is noticed that, for the TTR data, if values up to 150 are considered normal (denoted as a threshold; see Figure 3.2), then those exceeding 150 should be treated as abnormal and investigated using Root Cause Analysis (RCA).

When we check the work order types of these abnormal data, we discover most are caused by "preventive maintenance" and may reflect a lack of maintenance resources. To simplify the study, we assume all maintenance resources are sufficient for "preventive maintenance"; thus, although the abnormal data might be caused by a shortage of spare parts or skilled personnel, this possibility is not examined in the paper.

Figure 3.2 Example of TTR data for balling drum 1

To determine the baseline distribution of Time to Failure (TTF) and Time to Repair (TTR), we conduct a preliminary study of failure data and repair data using traditional analysis. In this preliminary study, several distributions are considered: exponential distribution, Weibull distribution, normal distribution, log-logistic distribution, lognormal distribution, and extreme value distribution. Table 3.2 lists the results.

Based on the results, the Weibull distribution and lognormal distribution are selected for the TTF and TTR for balling drums 1 to 5; these are applied to the parametric Bayesian models in the study. The main procedure of Bayesian analysis with MCMC follows Figure 3.3 (Lin, 2014).


Table 3.2 Preliminary study of failure data and repair data

Balling drum   TTF fitness (1st, 2nd, 3rd)             TTR fitness (1st, 2nd, 3rd)
1              Weibull, Log-logistic, Lognormal        Lognormal, Weibull, Logistic
2              Weibull, Log-logistic, Lognormal        Lognormal, Weibull, Logistic
3              Weibull, Log-logistic, Lognormal        Lognormal, Weibull, Logistic
4              Weibull, Log-logistic, Lognormal        Lognormal, Weibull, Logistic
5              Weibull, Log-logistic, Lognormal        Lognormal, Weibull, Logistic

 

Figure 3.3 A procedure for Bayesian reliability inference via MCMC

According to the results of paper B, the distribution for TTF and TTR can be achieved separately for balling drums 1 to 5. The traditional method of assessing availability is

A = MTTF / (MTTF + MTTR)


However, the proposed approach extends the method to

A = E[f(TTF)] / (E[f(TTF)] + E[f(TTR)]) = E[f(t | α, γ)] / (E[f(t | α, γ)] + E[f(t | μ, σ)]).

The above equation shows the flexibility of assessing availability according to reality. In particular, the parametric Bayesian models using MCMC make the calculation of the posteriors more feasible.

3.4.2 Data analysis in Paper C

The main difference between the procedure in paper B and paper C is the latter’s use of a threshold to censor the data.

In paper C, as in paper B, the proposed Bayesian approach to system availability has seven steps divided into three stages (see Table 3.3). In Stage I, we perform pre-analysis; in Stage II, we create the analytic models (Bayesian) and simulation models (MCMC); in Stage III, we assess system availability.

The seven steps follow a "PDCA" cycle: Stage I can be treated as the Plan stage, Stage II as the Do and Check stages, and Stage III as the Action stage. The outputs from Stage III can become inputs to Stage I for the next calculation period, so the results can be continuously improved.

Table 3.3 General procedure

Stage I

Step 1. Configuration determination: determine dependencies among units and the system configuration.

Step 2. Data collection: collect prior information and event data, including reliability and maintenance data.

Step 3. Data preparation: clean the data and remove outliers as needed; set up a threshold for censored data.

Step 4. Preliminary analysis: determine the distribution of the prior information, TTF, and TTR for the Bayesian analytics in Step 5.

Stage II

Step 5. Bayesian analytic modelling: according to Steps 3 and 4, determine the likelihood function and the Bayesian analytic models.

Step 6. MCMC simulation: define the burn-in and implement the MCMC simulation; perform convergence diagnostics and check the Monte Carlo error to confirm the effectiveness of the results. If not passed, go back to Steps 4 and 5; if passed, go to Step 7.

Stage III

Step 7. Assessment: according to the simulation results for the Bayesian analytic models and the system configuration, determine the distributions of TTF and TTR and assess system availability. The assessment can feed the prior information collection in Step 2 for the next calculation period.


In paper C, we look for the normal and abnormal values of the TTF and TTR of the individual balling drums. If TTR values up to 150 in Figure 3.2 are considered normal, for example, then those exceeding 150 are abnormal, and 150 is denoted as a threshold, as shown in Figure 3.2. The work orders show most of these abnormal shutdowns are caused by "preventive maintenance" and may simply reflect a lack of maintenance resources. To simplify the study, we assume that not all maintenance resources are sufficient for "preventive maintenance"; thus, the abnormal data may reflect a shortage of spare parts or skilled personnel.

To establish a more reasonable TTR threshold than 150, we perform a Pareto analysis for all balling drums. The results appear in Figure 3.4. According to the figure, if the threshold is set up according to the "80-20" rule, the data can be censored at six hours; this covers almost 80% of the data. Therefore, we create a new dataset with the TTR censored at six hours.

   

Figure 3.4 Pareto analysis for TTR of the five balling drums
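The threshold selection described above amounts to reading off the TTR value at the chosen coverage level of the ordered data. A minimal sketch, with invented repair times in hours, is:

```python
def pareto_threshold(ttr_values, coverage=0.8):
    """Return the TTR value below which `coverage` (e.g. 80%) of the
    repairs fall, i.e. the censoring threshold under the '80-20' rule."""
    ordered = sorted(ttr_values)
    idx = min(int(coverage * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Illustrative repair times (hours): mostly short, with one long outlier
ttr = [1, 2, 2, 3, 3, 4, 5, 5, 6, 30]
print(pareto_threshold(ttr, coverage=0.8))   # 6
```

Repairs above the returned value are then treated as right-censored rather than discarded, so their information still enters the likelihood.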

In addition, we make the following assumptions:

1. Abnormal TTR values exceeding six hours could be improved by implementing maintenance improvements, including RCA, maintenance resource improvements, etc. The goal is to reduce the TTR values exceeding six hours. However, we do not know how large this reduction can be. Therefore, those values are considered right-censored at six;

2. The preventive maintenance plan is not changed. Thus, if one TTR is treated as censored, then in the corresponding maintenance interval, the Time between Failures (TBF), which equals TTF plus TTR, will not change significantly, and the TTF could be longer than in the collected data. However, we do not know how much longer the TTF could be. Therefore, the TTF data can also be treated as right-censored. The difference from the censored TTR data is that the corresponding TTF data are treated as right-censored at the original value instead of at a new value (see Figure 3.5).

 

Figure 3.5 Data censored under the assumptions

We use Figure 3.5 to illustrate assumption 2. Let t1 denote a failure time, t2 the completion of the corresponding repair, and t3 the next failure. TBF equals the time between t1 and t3. TTR = t2 − t1 might be larger than six, but it is right-censored at six; the original TTR is then denoted as six with a right-censored indicator. Since TBF = t3 − t1 will not change, the corresponding TTF′ = t3 − t2 will be longer than the TTF in the collected data. However, according to assumption 2, we do not know how much longer; therefore, TTF′ is denoted as right-censored data with an original value equal to t3 − t2.

After this step, the censored TTF and TTR data represent a total of 20% of all data.

To determine the baseline distribution of TTR and TTF, we conduct a preliminary study of failure data and repair data using traditional analysis. We consider the following distributions: exponential distribution, Weibull distribution, normal distribution, log-logistic distribution, lognormal distribution, and extreme value distribution. Table 3.4 lists the results, including the goodness-of-fit using Anderson-Darling (AD) statistics.

Table 3.4 Preliminary studies of failure data and repair data

Balling drum   TTF fitness: 1st (AD), 2nd (AD)          TTR fitness: 1st (AD), 2nd (AD)
1              Weibull (1.976), Lognormal (11.276)      Lognormal (10.068), Weibull (14.607)
2              Weibull (1.796), Lognormal (8.274)       Lognormal (11.144), Weibull (14.302)
3              Weibull (2.115), Lognormal (10.499)      Lognormal (8.698), Weibull (14.332)
4              Weibull (1.196), Lognormal (6.366)       Lognormal (9.245), Weibull (13.106)
5              Weibull (2.148), Lognormal (14.416)      Lognormal (7.533), Weibull (11.933)

Based on the results, we select the Weibull distribution for the TTF and the lognormal distribution for the TTR and apply these to their respective parametric Bayesian models with censored data, as explained in paper C. The main procedure of Bayesian analysis with MCMC also follows Figure 3.3 (Lin, 2014).


Chapter 3 Research Methodology


Paper C proposes a Bayesian Weibull model for TTF and a Bayesian lognormal model for TTR that consider right-censored data, and it explains how to use an MCMC computational scheme to obtain the posterior distributions with or without right-censored data.

According to the results of paper C, the distributions for TTF and TTR can be obtained separately for balling drums 1 to 5. The proposed approach extends the method of assessing availability to

A = E[f(TTF)] / ( E[f(TTF)] + E[f(TTR)] )
  = E[f(t | α, γ)] / ( E[f(t | α, γ)] + E[f(t | μ, σ)] ),

where f(t | α, γ) is the Weibull model selected for the TTF and f(t | μ, σ) is the lognormal model selected for the TTR.

The above equation shows the flexibility of assessing availability according to reality. Similar to what is shown in Paper B, the parametric Bayesian models using MCMC make calculating the posteriors more feasible.

As discussed above, system availability can be computed via the TTF and TTR, but we cannot obtain a closed-form distribution of system availability. Therefore, in paper C we use an empirical distribution instead of an analytical one. We generate 10,000 samples from the distributions of TTF and TTR and calculate the associated availability. We use the Kaplan-Meier estimate as the empirical c.d.f.
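The sampling step can be sketched as follows; the Weibull and lognormal parameters are illustrative placeholders, not posterior estimates from the case study. With no censored availability samples, the Kaplan-Meier estimate reduces to the ordinary empirical c.d.f. used here:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
alpha, gamma = 1.5, 200.0   # Weibull shape/scale for TTF (illustrative)
mu, sigma = 1.2, 0.8        # lognormal parameters for TTR (illustrative)

ttf = gamma * rng.weibull(alpha, N)   # stand-in draws from the TTF model
ttr = rng.lognormal(mu, sigma, N)     # stand-in draws from the TTR model
avail = ttf / (ttf + ttr)             # availability sample, A = TTF / (TTF + TTR)

# Empirical c.d.f. of availability; with no censored availability samples
# this coincides with the Kaplan-Meier estimate.
a_sorted = np.sort(avail)
ecdf = np.arange(1, N + 1) / N
median_avail = float(a_sorted[np.searchsorted(ecdf, 0.5)])
```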

3.5 Reliability and validity of the research

Research must be both valid and reliable. Validity refers to studying the right things, while reliability refers to conducting a study in the right way. Validity allows the researcher to measure what was designed to be measured (Karim, 2008), while reliability ensures consistency and repeatability of research procedures, such that the same findings and conclusions are achieved if the same procedure is followed by another researcher (Yin, 2014).

This research achieved validity by using multiple data sources (interviews, workshops, observations and documents) and establishing a chain of evidence. In addition to this use of documentation, Paper A achieved validity by drawing on a case study in the mining industry.

Papers B and C compared the Monte Carlo errors (MC errors) with the standard deviation (SD). Note that an MC error less than 5% of the SD is considered acceptable and valid, and both papers achieved this; see Table 5.3.1, Table 5.3.2, Table 5.3.4, and Table 5.3.5. Other diagnostics were performed to ascertain the validity of the results, including checking the convergence of the Markov chains. For instance, as Table 5.3.1 shows, the convergence of the Markov chains (i.e., three chains) of the 𝛼 and 𝛾 parameters in the Bayesian Weibull model of the TTF of the first balling drum could be monitored using the trend of the time series history of the data (see Figure 3.6) and the dynamic trace of the three chains (see Figure 3.7). The convergence of the chains could also be checked


using Gelman-Rubin-Brooks (GRB) statistics (see Figure 3.8). Details of the methods are discussed elsewhere by Lin (2014; 2016).
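The two numerical checks above, the MC-error-below-5%-of-SD rule and chain convergence, can be sketched as follows. The chains are synthetic stand-ins for MCMC output, and the statistic computed here is the basic Gelman-Rubin R-hat, not the full Brooks-Gelman diagnostic:

```python
import numpy as np

def gelman_rubin(chains):
    """R-hat for an (m, n) array of m chains; values near 1 indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    v_hat = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(v_hat / W)

def mc_error(chain, n_batches=50):
    """Batch-means Monte Carlo error of a single chain."""
    batches = np.array_split(np.asarray(chain, dtype=float), n_batches)
    means = np.array([b.mean() for b in batches])
    return means.std(ddof=1) / np.sqrt(n_batches)

# three well-mixed synthetic chains for an alpha-like parameter
rng = np.random.default_rng(2)
chains = rng.normal(1.8, 0.1, size=(3, 4000))
r_hat = gelman_rubin(chains)                                  # ~1 for converged chains
ok = mc_error(chains[0]) < 0.05 * chains[0].std(ddof=1)       # the 5%-of-SD rule
```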

Reliability is achieved by showing the steps used to analyse the data (see Table 3.1, Table 3.3, and Table 5.3.12) and by showing the research findings in the tables and figures (Chapter 5). However, some of the data used in this research are confidential and classified for reasons of organizational security, thus limiting accessibility and repeatability.

Figure 3.6 History of three chains (𝛼 and 𝛾)

Figure 3.7 Dynamic trace of three chains (𝛼 and 𝛾)


Figure 3.8 GRB diagnostic of three chains (𝛼 and 𝛾)

 

3.6 Inductive, deductive and abductive reasoning

Deductive reasoning, also called deductive logic, is the process of reasoning from one or more general statements on what is known to reach a logically certain conclusion. Inductive reasoning, also called induction or bottom-up logic, constructs or evaluates general propositions derived from specific examples. Both have shortcomings. A weakness of induction is that a general rule is developed from a limited number of observations; a weakness of deduction is that it establishes a rule, instead of explaining it (Peter, 2005).

Abductive reasoning, also called abduction, is used in many case studies. With this approach, a single case is set within an overarching hypothetical pattern. The interpretation is corroborated with new observations. Consequently, abduction may be considered a combination of induction and deduction. During the research process, the empirical application is developed, and the theory adjusted (Peter, 2005).

This research is founded on a common interest among industry and academia in exploring problems that are important in practice but described in an unsatisfactory manner in the literature. Hence, the research could have a deductive or an inductive approach. While the project from which the research originates is based on industrial interest, the literature must be studied to attain a deeper understanding. Therefore, an approach similar to abduction is more appropriate.

The iterative abductive approach of this research combines theory and practice; thus, it contributes to the literature both theoretically and empirically.


Chapter 4

Summary of the appended publications

This chapter summarizes the appended papers, giving their title, purpose and abstract. The links between the research questions (RQs) and the appended papers appear in Chapter 1.

4.1 Paper A

Title: Development and implementation of a KPI framework for maintenance management in a mining company

Purpose: The purpose of this study is to propose a KPI framework for the mining company and propose its implementation in an eMaintenance environment.

This paper answers RQ1 (What is a KPI framework for maintenance management?) and RQ2 (How can the developed KPI framework be implemented through eMaintenance?).

Abstract: Performance measurement is critical if organizations want to thrive. The motivation for this research originated in the project “Key Performance Indicators (KPI) for control and management of maintenance process through eMaintenance”, initiated and financed by a mining company in Sweden. The main purpose is to develop an integrated KPI framework for the studied mining company’s maintenance and to implement it through eMaintenance. The proposed KPI framework has 134 KPIs divided into technical and soft KPIs as follows: asset operation management has 23 technical KPIs, maintenance process management has 85 soft KPIs, and maintenance resources management has 26 soft KPIs. Its implementation is discussed, and timelines, definitions and general formulas are given for each specified KPI. Results from this study will be applied in the studied company and will guide the implementation of these KPIs through eMaintenance.

4.2 Paper B

Title: System availability assessment using a parametric Bayesian approach - A case study of balling drums

Purpose: The purpose of this study is to propose a new approach to system availability assessment: a parametric Bayesian approach with MCMC, focused on the operational stage and using both analytical and simulation methods. MTTF and MTTR are treated as distributions instead of being “averaged” by point estimation, and this is


closer to reality. The study also addresses the limitations of simulation data sample size by using MCMC techniques.

This paper answers RQ3 (How can the KPIs be assessed using novel approaches?) and, more specifically, RQ3.1 (How can technical KPIs be assessed using a novel approach?).

Abstract: Assessment of system availability usually uses either an analytical (e.g., Markov/semi-Markov) or a simulation approach (e.g., Monte Carlo simulation-based). However, the former cannot handle complicated state changes, and the latter is computationally expensive. Traditional Bayesian approaches may solve these problems; however, because of their computational difficulties, they are not widely applied. The recent proliferation of Markov Chain Monte Carlo (MCMC) approaches has led to the use of the Bayesian inference in a wide variety of fields. This study proposes a new approach to system availability assessment: a parametric Bayesian approach using MCMC, an approach that takes advantage of both analytical and simulation methods. In this approach, Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being “averaged” to better reflect reality and compensate for the limitations of simulation data sample size. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined in a Bayesian Weibull model and a Bayesian lognormal model respectively. The results show that the proposed approach can integrate analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems).

4.3 Paper C

Title: A novel Bayesian approach to system availability assessment using a threshold to censor data - A case study of balling drums in a mining company

Purpose: The purpose of this study is to propose a novel system availability assessment approach for the operational stage. The approach will:

- integrate analytical and simulation methods for system availability assessment, with the potential to be applied to other technical problems in asset management (e.g., other industries, other systems);

- reveal the connections between technical and “soft” KPIs;

- establish a threshold to censor data; the threshold can become a monitoring line for continuous improvement in the mining company.

This paper answers RQ3 (How can the KPIs be assessed using novel approaches?) and, more specifically, RQ3.2 (How can technical and soft KPIs be assessed and improved together using a novel approach?).

Abstract: Assessment of system availability has been studied from the design stage to the operational stage in various system configurations using either analytic or


simulation techniques. However, the former cannot handle complicated state changes and the latter is computationally expensive. This study proposes a Bayesian approach to system availability. In this approach: 1) Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being “averaged” to better reflect reality and compensate for the limitations of simulation data sample size; 2) Markov Chain Monte Carlo (MCMC) simulations are applied to take advantage of both analytical and simulation methods; 3) a threshold is established for Time to Failure (TTF) data and Time to Repair (TTR) data, and new datasets with right-censored data are created to reveal the connections between technical and soft KPIs. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined in a Bayesian Weibull model and a Bayesian lognormal model respectively. Comparing the results with and without considering the threshold for censoring data, we show the threshold can be treated as a monitoring line for continuous improvement in the mining company.


Chapter 5

Results and Discussion

This chapter discusses the research findings for each research question (RQ).

5.1 Results and discussion related to RQ1

RQ1: What is a KPI framework for maintenance management?

The first research question is answered by developing a new framework for maintenance management in Paper A.

In all, there are 134 KPIs in this framework: asset operation management has 23 technical KPIs, maintenance process management has 85 soft KPIs, and maintenance resources management has 26 soft KPIs.

5.1.1 KPI framework

The proposed KPI framework makes use of four hierarchical levels. The first level, the asset management system, is the highest level in the framework and encapsulates the second, third and fourth levels.

The second level consists of three broad categories: asset operation management, maintenance process management and maintenance resources management. Asset operation management is used to track the technical aspects of the maintenance process while maintenance process management and maintenance resources management are used to track the soft aspects of the maintenance process.

The third level is a further breakdown of the second-level categories. That is, asset operation management is broken down into five categories: overall asset, availability, reliability, maintainability and safety. Maintenance process management is broken down into five categories: maintenance management, maintenance planning, maintenance preparation, maintenance execution and maintenance assessment while maintenance resources management is broken down into three categories: spare parts management, outsourcing management and human resources management (See Figure 5.1).

In level four, the KPIs are grouped into common measures.

Asset operation management KPIs measure maintenance performance relative to the equipment condition. This level takes into consideration standards such as RAMS and BS EN 15341. Maintenance process management KPIs measure the efficiency and effectiveness of the consistent application of maintenance and maintenance support for both in-depth planning and execution of the maintenance process. This level takes into consideration the IEV standard and BS EN 15341. Its five classifications are tailored around the maintenance process implementation of the IEV standard while the actual


KPIs are created by consulting BS EN 15341, internal documents from LKAB and LKAB supervisors. Maintenance resources management KPIs measure spare part management, internal maintenance personnel management and external maintenance personnel management. This level considers the IEV standard and the BS EN 15341 standard.

Details of the proposed KPI framework can be found in sections 5.1.2, 5.1.3 and 5.1.4, as well as in the figures in the Appendix.

[Figure: the four-level hierarchy of the framework. Level 1: Asset Management. Level 2: Asset Operation Management, Maintenance Process Management, Maintenance Resources Management. Level 3 under Asset Operation Management: Overall Asset, Availability, Reliability, Maintainability, Safety; under Maintenance Process Management: Maintenance Management, Maintenance Planning, Maintenance Preparation, Maintenance Execution, Maintenance Assessment; under Maintenance Resources Management: Spare Parts Management, Outsourcing Management, Human Resources Management. Level 4 groups: Shutdown Statistics, Failure Related, Operational Availability, Mean Reliability Measures, Mean Maintainability Measures, Occupational Safety, Maintenance Strategy, Quantity Related, Time Related, Resource Related, Cost Related, Work Order Creation, Work Order Feedback, Work Order Approval, Quality Effectiveness, Inventory Management, Contractor Statistics, Skills Management, Workload Management, Training Management, Competence Development.]

Figure 5.1 KPI framework


5.1.2 Development of asset operation management KPIs

Asset, as used here, refers to physical parts, components, devices, subsystems, functional units, equipment or systems that can be individually described and accounted for. Asset operation management, as used in this framework, refers to the technical KPIs used to measure the overall asset, availability, reliability, maintainability and safety.

The technical KPIs are introduced in this part of the framework (see Table 5.1.1). The purpose of asset operation management is to provide KPIs for the overall asset, availability, reliability, maintainability and safety. These indicators will provide maintenance managers with insight into routine maintenance and help them make cost-effective decisions on the operation, maintenance, upgrading and disposal of equipment. The asset operation management KPIs will help to put in place practices to improve availability, reliability, maintainability and safety.

Table 5.1.1: Asset operation management KPIs in Levels 3 and 4

Overall Asset / Shutdown Statistics
- Number of Shutdowns: the total number of times the asset is out of service. Purpose: helps to understand the number of times the equipment, production line or process unit is out of service during the query period.
- Total Shutdown Time: the total number of hours the assets are out of service. Purpose: helps to estimate the total loss of the equipment in terms of time during the query period.
- Average Shutdown Time: the ratio of total shutdown time to number of shutdowns. Purpose: helps to understand the mean time of each shutdown, especially for the failed asset.

Overall Asset / Failure Related
- Downtime Ratio (Frequency): the ratio of the number of times the equipment, production line or process unit is not producing (because it is broken, under repair or idle) to the total production time. Purpose: helps to understand the proportion of failures in the total number of stops.
- Downtime Ratio (Time): the ratio of the number of hours the equipment, production line or process unit is not producing (because it is broken down, under repair or idle) to the total number of work hours. Purpose: helps to understand the proportion of failures in the total number of stops in terms of time.
- Failure Mode Reporting Rate: the amount of corrective maintenance work whose failure mode is known. Purpose: helps to understand the proportion of corrective maintenance work orders with failure mode information.
- Reason for Failure Registration Rate: the amount of corrective maintenance work with descriptions. Purpose: helps to understand the proportion of work orders entered during corrective maintenance work with information on causes of failure.

Availability / Operational Availability
- Availability: the asset’s ability to perform as and when required, under given conditions, assuming that the necessary external resources are provided. Purpose: helps to understand the availability of a production line or equipment.

Reliability / Mean Reliability Measures
- Mean Time Between Failure: the average time between failures of repairable assets and components. Purpose: helps to understand the average time between unexpected breakdowns or production stoppages of an asset.
- Mean Time To Failure: the average time to failure for non-repairable assets. Purpose: helps to understand the average time that a system is not failed, or is available.
- Mean Up Time: the mean time from the system (subsystem) repair to the next system (subsystem) failure. Purpose: helps to understand the average time during which a system is in operation.

Reliability / Failure Related
- Emergency Failure Ratio: the proportion of emergency failures in the work orders. Purpose: helps to understand the proportion of emergency failures out of all failures that have occurred.
- Emergency Failed Equipment Ratio: the proportion of failed assets in emergency failure work orders. Purpose: helps to understand the proportion of failed assets in emergency failures.
- Corrective Maintenance Failure Rate: the total number of maintenance actions on failed assets. Purpose: helps to understand the frequency of corrective maintenance activities.
- Repeat Failure: the total number of maintenance actions on failures that occur more than once. Purpose: helps to understand the proportion of failure modes that occur more than once in the total failures.

Maintainability / Mean Maintainability Measures
- Mean Downtime: the mean time that an equipment, production line or process unit is non-operational for reasons other than repair, such as maintenance, including the time from failure to restoration of an asset or component. Purpose: helps to understand the average total downtime required to restore an asset to its full operational capabilities.
- Mean Time Between Maintenance: the average length of operating time between one maintenance action and another for a component. Purpose: helps to understand the average time that a maintenance action needs to fix the failed component or the lowest replaceable unit.
- Mean Time To Maintain: the average time to maintenance. Purpose: helps to understand the average maintenance duration of equipment.
- Mean Time To Repair: the average time that a repairable or non-repairable asset and/or component takes to recover from failure. Purpose: helps to understand the average time required to troubleshoot and repair failed equipment and return it to normal operating conditions.
- False Alarm Rate: the proportion of unwanted alarms given in error for an equipment, production line or process unit. Purpose: helps to understand the number of false positives that occurred for an asset.

Safety / Occupational Safety
- Number of Safety Incidents: the total number of safety incidents that have occurred during maintenance activities. Purpose: helps to understand the number of safety incidents.
- Injury Ratio: the ratio of maintenance personnel injuries to total work hours. Purpose: helps to understand the number of injuries that maintenance personnel sustained on the job.
- Injury Ratio per Failure: the ratio of failures causing injuries to the total number of failures. Purpose: helps to understand the number of injuries that maintenance personnel sustained compared to the total number of failures.
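Several of the Level 4 indicators above reduce to simple aggregations over an event log. The following is a minimal sketch with a hypothetical shutdown log and an assumed 720-hour query period; the field names and the exact ratio definitions are illustrative, not the company's formulas:

```python
# Hypothetical shutdown log for one production line during the query period.
shutdowns = [
    {"hours": 4.0, "failure": True},    # breakdown
    {"hours": 1.5, "failure": False},   # planned stop
    {"hours": 6.5, "failure": True},    # breakdown
]
total_work_hours = 720.0                # assumed 30-day query period

number_of_shutdowns = len(shutdowns)
total_shutdown_time = sum(s["hours"] for s in shutdowns)
average_shutdown_time = total_shutdown_time / number_of_shutdowns

# Failure-related ratios (definitions follow the Purpose column loosely).
downtime_ratio_frequency = sum(s["failure"] for s in shutdowns) / number_of_shutdowns
downtime_ratio_time = sum(s["hours"] for s in shutdowns if s["failure"]) / total_work_hours
operational_availability = 1 - total_shutdown_time / total_work_hours
```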

5.1.3 Development of maintenance process management KPIs

Maintenance process management is the process of facilitating all aspects of day-to-day maintenance management activities, including job planning, scheduling, allocation, issuing work orders, execution, and task follow-up. Maintenance process management, as used in this framework, uses “soft” KPIs to measure the effectiveness of the maintenance process.

The purpose of this section of the framework is to provide KPIs for in-depth management of the maintenance process, from maintenance strategy to maintenance planning, maintenance preparation, maintenance execution and assessment of maintenance effectiveness. These indicators will provide maintenance managers with insight into routine maintenance and help them make decisions on deploying maintenance personnel and which parts of the maintenance process to outsource. They will be able to address maintenance needs, thereby improving the maintenance process (see Table 5.1.2).

Table 5.1.2 Maintenance process management KPIs in Levels 3 and 4

LevelName Context Purpose

3 4

Mai

nten

ance

Man

agem

ent

Mai

nten

ance

Str

ateg

y

Critical Equipment Ratio

This is the amount of equipment important to performance, capacity, and throughput and vital to operating all equipment in the company’s plant.

Helps to understand the proportion of critical equipment in the plant or processing unit.

Preventive Maintenance

Rate

This is the proportion of maintenance work carried out at predetermined intervals or according to prescribed criteria, intended to reduce the probability of failure or degradation of asset.

Helps to understand the proportion of equipment with a proactive maintenance strategy in the plant or processing unit.

Predictive Maintenance (PdM) Rate

This is the proportion of condition-based maintenance carried out following a forecast derived from repeated analysis or known characteristics and evaluation of the significant parameters of degrading asset.

Helps to understand the proportion of equipment with a predictive maintenance policy in the plant or processing unit.

Page 65: KPI framework for maintenanceltu.diva-portal.org/smash/get/diva2:1315950/FULLTEXT02.pdfKPI framework for maintenance management through eMaintenance Development, implementation, assessment,

Chapter 5 Results and Discussion  

50  

Preventive Maintenance Rate (Critical Equipment)

This is the proportion of maintenance carried out at predetermined intervals or according to prescribed criteria, intended to reduce the probability of failure or degradation of the asset.

Helps to understand the proportion of critical equipment with a proactive maintenance strategy in the plant or processing unit.

Predictive Maintenance Rate (Critical Equipment)

This is the proportion of condition-based maintenance carried out following a forecast derived from repeated analysis or known characteristics and evaluation of the significant parameters of the degrading asset.

Helps to understand the proportion of critical equipment with a predictive maintenance policy in the plant or processing unit.

Run to Failure (RTF) Ratio for

Critical Equipment

This is the ratio of failure management policy for critical equipment without any attempt to anticipate or prevent failure to all policies for critical equipment.

Helps to understand the proportion of critical equipment that does not have any precautionary or predictive maintenance policy in the plant or processing unit.

Planned Maintenance vs

Unplanned Maintenance

This is the ratio of planned maintenance to unplanned maintenance.

Helps to understand the relationship between planned maintenance and unplanned maintenance.

Mai

nten

ance

Pla

nnin

g

Qua

ntit

y R

elat

ed

Number of Planned Work Orders Created

This is the total number of work orders that have been scheduled.

Helps to understand the amount of scheduled maintenance/maintenance work.

Tim

e R

elat

ed

Average Planned Execution Time

This is the mean execution time of all planned work orders.

Helps to understand the average planned execution time of planned maintenance/maintenance work.

Res

ourc

e R

elat

ed

Total Number of Planned Internal

Labour Hours

This is the sum of labour hours attributed to planned maintenance work done by internal maintenance personnel.

Helps to understand the planned man-hours required for planned internal maintenance.

Average Planned Internal Labour

Hours

This is the mean hours for planned internal labour.

Helps to understand the mean man-hours required for planned internal maintenance.

Total Number of Planned External

Labour Hours

This is the sum of labour hours attributed to planned maintenance work by external maintenance personnel.

Helps to understand the planned labour hours required for maintenance work by external maintenance personnel.

Average Planned External Labour

Hours

This is the mean labour hours for planned external labour.

Helps to understand the average time required for planned maintenance by external maintenance personnel.

Planned Number of Materials Used

This is the sum of all materials scheduled to be used for maintenance and/or maintenance work.

Helps to understand the number of spare parts used in the planned maintenance.

Page 66: KPI framework for maintenanceltu.diva-portal.org/smash/get/diva2:1315950/FULLTEXT02.pdfKPI framework for maintenance management through eMaintenance Development, implementation, assessment,

KPI framework for maintenance management through eMaintenance  

51  

Average Planned Number of

Materials Used

This is the mean number of materials to be used for scheduled maintenance and/or maintenance work.

Helps to understand the average number of spare parts used for planned maintenance.

Cost

Rel

ated

Total Cost of Planned Human

Resources

This is the total cost of manpower used for scheduled maintenance and/or maintenance work.

Helps to understand the manpower cost of planned maintenance.

Average Planned External Human Resource Costs

This is the mean external manpower cost for scheduled maintenance and/or maintenance work.

Helps to understand the average planned manpower cost of external labour for planned maintenance.

Total Cost of Planned

Materials

This is the total cost of materials needed for scheduled maintenance and/or maintenance work.

Helps to understand the cost of materials for planned maintenance.

Planned Average Material Cost

This is the mean cost of materials for scheduled maintenance and/or maintenance work.

Helps to understand the mean planned cost of materials for each scheduled repair or maintenance activity.

Labour Cost Ratio

This is the ratio of manpower cost to the total cost of planned maintenance.

Helps to understand the ratio of manpower cost to total planned cost in planned maintenance.

Planned Material Cost Ratio

This is the ratio of planned material cost to the planned total cost of maintenance.

Helps to understand the proportion of the total costs of planned material allocated to planned maintenance.

Mai

nten

ance

Pre

para

tion

Wor

k O

rder

Cre

atio

n

Planned Start / End Time

Registration Rate

This is the ratio of work orders whose planned start/ end time is known at the time of creation to the total work orders created.

Helps to understand the amount of work orders whose planned start and end time are provided during their creation.

Planned Spare Parts Registration Rate

This is the ratio of work orders whose spare parts requirement are known at the time of the work order creation to the total work orders created.

Helps to know the planned spare parts registration rate of work orders.

Planned Man-Hour Registration Rate

This is the number of work orders with labour hours needed recorded during work order creation out of all the work orders created.

Helps to understand the proportion of work orders with the required labour registered during work order creation.

Planned Downtime Registration Rate

This is the ratio of work orders whose planned downtime is recorded at the time of creation to the total work orders created.

Helps to understand the percentage of work orders with planned downtime entered during work order creation.

Standard Operating Plan Registration Rate

This is the ratio of the number of work orders with an SOP to the total work orders.

Helps to understand the proportion of work orders with standard operating procedure plans.

Planned Work Type Registration Rate

This is the proportion of work orders with required skills registered during their creation.

Helps to understand the proportion of work orders with known skills required in the work category.

Job Priority Registration Rate

This is the number of work orders with job priorities assigned during the work order creation, out of all the work orders created.

Helps to understand the proportion of work orders assigned work priorities during work order creation.

Chapter 5 Results and Discussion

Work Order Feedback

Actual Spare Parts Use Registration Rate

This is the amount of spare parts used for maintenance work.

Helps to understand the actual use of spare parts for maintenance jobs.

Actual Man-Hour Registration Rate

This is the proportion of labour used for maintenance work.

Helps to understand the amount of labour used for maintenance tasks.

Actual Downtime Registration Rate

This is the number of work orders causing actual downtime.

Helps to understand the proportion of work orders that lead to downtime.

Work Order Registration Back-Log

This is the difference between work order registration date and the actual registration date of the work order.

Helps to understand the time interval between the completion of the work order and the completion of registration in the system.

Work Order Approval

Total Number of Work Orders

This is the sum of proposed work orders that have been registered.

Helps to understand the total number of work orders reported.

Total Number of Approved Work Orders

This is the sum of proposed work orders that have been approved.

Helps to understand the total number of work orders approved in a single pass.

Total Number of Unapproved Work Orders

This is the sum of proposed work orders that have not been approved.

Helps to understand the total number of work orders not approved in a single pass.

Work Order Approval Ratio

This is the ratio of proposed work orders to planned work orders.

Helps to understand the proportion of reported work orders against the total planned work orders.

One-time Approved Work Order Ratio

This is the ratio of work order proposals that were approved on the first pass to actual work orders.

Helps to understand the rate of one-time approvals for work orders submitted.

Average Time Lag for Reporting and Approving Work Orders

This is the mean time difference between the proposal of work orders and their approval.

Helps to understand the average time between submission of a work order and the approval of the issuance of the work order.

Maintenance Execution / Quantity Related

Number of Planned Work Orders Completed

This is the total number of preventive maintenance work orders that have been resolved.

Helps to understand the planned maintenance work done.

Number of Unplanned Work Orders Completed

This is the total number of unplanned corrective work orders that have been resolved.

Helps to understand the amount of unplanned maintenance work completed.

Number of Work Orders Completed Per Shift

This is the total number of work orders completed per shift.

Helps to understand the number of work orders completed in a shift.

Work Order Resolution Rate

This is the ratio of the number of work orders performed as scheduled to the total number of scheduled work orders.

Helps to understand the ratio of the number of work orders completed as scheduled.

Time Related

Average Work Order Time

This is the mean execution time for completed work orders.

Helps to understand the average execution time of completed maintenance work.

Average Waiting Time for Personnel

This is the mean waiting time for maintenance personnel needed to resolve a maintenance request.

Helps to understand the average logistical waiting time for maintenance staff for completed maintenance work.

Average Waiting Time for Spare Parts

This is the mean waiting time for spare parts used for completed maintenance work.

Helps to understand the waiting time for spare parts for maintenance work.

Personnel Waiting Time Ratio

This is the proportion of time it takes to get maintenance personnel to resolve a maintenance task.

Helps to understand the staff waiting time for maintenance work completed.

Spare Parts Waiting Time Ratio

This is the proportional waiting time for spare parts used for maintenance work.

Helps to understand the spare parts waiting time for completed maintenance work.

Average Maintenance Outage Time

This is the period of time that the asset fails to provide or perform its primary function during maintenance work.

Helps to understand the average execution time of the maintenance work.

Average Waiting Time of Personnel during Shutdown

This is the mean waiting time for maintenance personnel during shutdown.

Helps to understand the average logistic waiting time for maintenance personnel for maintenance work during shutdown.

Average Waiting Time for Spare Parts during Shutdown

This is the mean waiting time for spare parts during shutdown.

Helps to understand the average waiting time for spare parts used for completing maintenance work at shutdown.

Average Waiting Time of Personnel during Shutdown Ratio

This is the ratio of the mean waiting time for maintenance personnel to the mean maintenance outage time during shutdown.

Helps to understand the ratio of waiting time for personnel who have completed the maintenance work at shutdown to the total repair time.

Average Waiting Time for Spare Parts during Shutdown Ratio

This is the ratio of the mean waiting time for spare parts to the mean maintenance outage time.

Helps to understand the ratio of spare parts waiting time for the repair/maintenance work to total maintenance outage time during the query period.

Estimated Time vs. Actual Time

This is the difference between actual maintenance time and planned maintenance time.

Helps to understand the time variances in work orders.

Resource Related

Total Number of Internal Labour Hours

This is the sum of hours used by in-house maintenance personnel for maintenance work.

Helps to understand the total number of hours used by in-house maintenance personnel for maintenance work performed.

Average Internal Labour Hours Used

This is the mean number of hours used by in-house maintenance personnel for maintenance work.

Helps to understand the average labour hours used for each completed internal maintenance work.

Total Number of External Labour Hours

This is the sum of hours used by maintenance contractors for maintenance work.

Helps to understand the labour hours used for external maintenance work.

Average External Labour Hours Used

This is the mean hours used by external maintenance personnel for maintenance work.

Helps to understand the average labour hours for each completed external maintenance action.

Number of Materials Used

This is the total number of spare parts used for maintenance work.

Helps to understand the actual number of spare parts used for maintenance work.


Average Materials Used

This is the mean number of spare parts used for maintenance work.

Helps to understand the average number of spare parts used for each completed maintenance action.

Cost Related

Total Cost of External Human Resources Used

This is the total cost of using maintenance contractors for maintenance work.

Helps to understand the cost of external labour for completed maintenance work.

Average External Human Resources Costs

This is the mean cost of external maintenance contractors for maintenance work.

Helps to understand the average external labour costs for maintenance work completed.

Total Cost of Materials Used

This is the total cost of materials used for maintenance.

Helps to understand the cost of materials used for maintenance work completed.

Average Cost of Materials Used

This is the mean cost of materials used for maintenance work.

Helps to understand the average cost of materials for completed maintenance work.

External Labour Costs Ratio

This is the ratio of the total external maintenance contractor cost to the total maintenance cost.

Helps to understand the ratio of manpower cost to the total cost of maintenance work completed.

Actual Materials Cost Ratio

This is the ratio of costs for materials to the total maintenance cost.

Helps to understand the cost of the materials used to complete the maintenance work.

Maintenance Cost per Asset

This is the total cost incurred for maintaining an asset.

Helps to understand the cost incurred for maintenance work.

Maintenance Assessment / Quality

Number of Completed Work Orders Approved

This is the total number of completed work orders that have been approved after resolution.

Helps to understand the total number of reported approvals for completed work orders.

Work Order Approval Ratio

This is the ratio of completed work orders that need to be approved after resolution to total completed work orders.

Helps to understand the proportion of the work orders that need to be submitted for approval.

One-Time Pass Internal Completion Rate

This is the ratio of work orders that are resolved the very first time they occur by internal maintenance personnel to total completed work orders.

Helps to understand the number of one-time work orders by internal maintenance personnel that do not need to be reworked.

One-Time Pass External Completion Rate

This is the ratio of work orders that are resolved the very first time they occur by external maintenance personnel to total completed work orders.

Helps to understand the number of one-time work orders by external maintenance personnel that do not need to be reworked.

Planning Compliance

This is a measure of adherence to maintenance plans.

Helps to understand the amount of planned maintenance work that is started on the same date as planned.

Effectiveness

Internal Work Completion Rate

This is the ratio of successful work completed by internal maintenance personnel to total completed work.

Helps to understand the proportion of work orders completed by internal maintenance personnel.

Outsourced Work Completion Rate

This is the ratio of successful work completion by external maintenance personnel to total completed work.

Helps to understand the proportion of work orders completed by external maintenance personnel.

Internal Work Delay Rate

This is the ratio of delayed maintenance work by internal maintenance personnel to all internal maintenance work.

Helps to understand completion delays in internal maintenance work.

Internal Work Average Delay Period

This is the mean period of delayed work by internal maintenance personnel.

Helps to understand the average delay period of the work orders scheduled to be completed by internal maintenance personnel.

External Work Delay Rate

This is the ratio of delayed maintenance work by external maintenance personnel to all external work.

Helps to understand the delayed completion of external work.

External Work Average Delay Period

This is the mean period of delayed work by external maintenance personnel.

Helps to understand the average delay period of the work orders scheduled to be completed by external maintenance personnel.

Internal Average Execution Time Deviation Ratio

This is the difference in time between planned and actual maintenance jobs done by internal maintenance personnel.

Helps to understand the difference between the average execution time of the internal maintenance work and the planned time.

External Average Execution Time Deviation Ratio

This is the difference in time between planned and actual maintenance jobs done by external maintenance personnel

Helps to understand the difference between the average execution time and the planned time for the external maintenance work completed.

Internal Man-Hour Difference Ratio

This is the difference in time between planned and actual labour hours used by internal maintenance personnel.

Helps to understand the deviations from the planned labour hours for internal maintenance work.

Internal Average Man-Hour Difference Ratio

This is the mean difference in time between planned and actual labour hours used by internal maintenance personnel.

Helps to understand the average deviation from the planned average for each completed internal maintenance action.

External Man-Hour Difference Ratio

This is the difference in time between planned and actual labour hours of external maintenance personnel.

Helps to understand the deviation between actual and planned labour hours for external maintenance work.

External Average Man-Hour Difference Ratio

This is the mean difference in time between planned and actual labour hours of external maintenance personnel.

Helps to understand the average deviation from the planned average for each external maintenance action.

Material Difference Ratio

This is the difference between planned spare parts and actual spare parts used for maintenance work.

Helps to understand the difference between the actual number of spare parts used for maintenance work and the number of spare parts assigned in the plan.

Average Material Difference Ratio

This is the mean difference between planned spare parts and actual spare parts used for maintenance work.

Helps to understand the difference between the average number of used spare parts and the planned average for each completed maintenance action.


5.1.4 Development of maintenance resource management KPIs

Maintenance resource management, as used in this framework, refers to metrics for tracking maintenance spare parts, outsourced maintenance personnel, internal maintenance personnel, and the training and training quality of maintenance personnel. Maintenance resource management uses soft KPIs to measure the asset resources used for maintenance.

This part of the framework introduces KPIs for tracking spare parts consumption, external maintenance personnel, and internal maintenance personnel. It also includes KPIs that track the workload and training of maintenance personnel. These KPIs form part of the business KPIs, which the mining company refers to as soft KPIs based on its business strategies. The purpose of maintenance resource management is to provide KPIs that can measure spare parts usage, the maintenance personnel available for maintenance work, the workload of maintenance personnel, and the efficiency and effectiveness of maintenance training. These indicators give maintenance managers insight into what skill sets are needed, when to employ more maintenance personnel, and what maintenance training is required to support maintenance in an efficient way (see Table 5.1.3).
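As an illustration only, the soft resource KPIs described above reduce to simple counts and ratios over personnel records. A minimal Python sketch, assuming a hypothetical record field ("skill") rather than the actual eMaintenance data model:

```python
# Minimal sketch (assumed data model, not the thesis's actual schema):
# computing two personnel ratios from Table 5.1.3.

personnel = [
    {"name": "A", "skill": "operator"},
    {"name": "B", "skill": "engineer"},
    {"name": "C", "skill": "multi-skilled"},
    {"name": "D", "skill": "multi-skilled"},
]

# Multi-Skilled Maintenance Personnel Ratio:
# multi-skilled maintenance personnel / total maintenance personnel
multi_skilled = sum(1 for p in personnel if p["skill"] == "multi-skilled")
multi_skilled_ratio = multi_skilled / len(personnel)

# Maintenance Engineer Ratio follows the same pattern.
engineers = sum(1 for p in personnel if p["skill"] == "engineer")
engineer_ratio = engineers / len(personnel)

print(multi_skilled_ratio, engineer_ratio)  # 0.5 0.25
```

The same count-and-divide pattern applies to the workload and training KPIs in the table below.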

Table 5.1.3 Maintenance resources management KPIs in Level 3 and Level 4

Level 3 | Level 4 | Name | Context | Purpose

Spare Parts Management / Inventory Management

Average Spare Part Quantity

This is the mean number of spare parts in stock.

Helps to understand the average number of spare parts between opening and closing stock.

Spare Part Capital Utilization

This is the mean cost of spare parts utilization.

Helps to understand the average inventory value of the spare parts used compared to the original purchase cost of the equipment. 

Spare Parts Capital Replacement Rate

This is the average cost of spare part replacement.

Helps to understand the average inventory cost of replacing spare parts.

Spare Part Consumption per Thousand SEK Output

This is the average cost of spare parts for maintenance work per every 1000 SEK output.

Helps to understand the average cost of spare parts for maintenance for every thousand SEK spent on overall maintenance.

Spare Part Turnover Rate

This is the number of spare parts bought to replace failed parts in a quarter or a year.

Helps to understand spare parts turnover rate.

Spare Part Turnover Period

This is the ratio of average inventory value to cost of spare parts within the year.

Helps to understand the spare parts turnover period.

Slow Moving Inventory Ratio

This is the proportion of stock that has not shipped in a certain amount of time, e.g. 90 days or 180 days, and includes stock with a low turnover rate relative to the quantity on hand.

Helps to understand periods of no consumption of some types of spare parts from the total spare parts inventory.


Outsourcing Management / Contractor Statistics

Number of Outsourced Equipment Breakdowns

This is the total amount of outsourced equipment that is out of service.

Helps to understand the total amount of equipment handled by outsourced maintenance personnel that is not working.

Number of Outsourced Maintenance Personnel

This is the total number of outsourced maintenance personnel.

Helps to understand the total number of external maintenance personnel.

External Maintenance Cost Ratio

This is the ratio of cost of outsourced maintenance personnel to the overall maintenance cost.

Helps to understand the cost of external maintenance personnel.

Human Resources Management / Skills Management

Total Number of Maintenance Operators

This is the number of maintenance operators used for maintenance tasks.

Helps to understand the total number of registered maintenance operators assigned to tasks.

Total Number of Maintenance Engineers

This is the number of maintenance engineers used for maintenance tasks.

Helps to understand the total number of registered maintenance engineers assigned to tasks.

Number of Multi-Skilled Maintenance Personnel

This is the number of multi-skilled maintenance personnel used for maintenance tasks.

Helps to understand the total number of registered skilled maintenance personnel assigned to tasks.

Maintenance Operator Ratio

This is the ratio of maintenance operators to total maintenance personnel.

Helps to understand the percentage of maintenance personnel who are operators.

Maintenance Engineer Ratio

This is the ratio of maintenance engineers to total maintenance personnel.

Helps to understand the percentage of maintenance personnel who are engineers.

Multi-Skilled Maintenance Personnel Ratio

This is the ratio of multi-skilled maintenance personnel to total maintenance personnel.

Helps to understand the percentage of maintenance personnel who are multi-skilled.

Workload Management

Average Number of Work Orders Created per Person

This is the number of work orders created by each maintenance worker.

Helps to understand the average number of work orders created by each maintenance worker.

Average Number of Work Orders Executed per Person

This is the number of work orders completed per maintenance worker.

Helps to understand the average number of work orders completed by each maintenance worker.

Average Daily Workload per Person

This is the number of hours for each maintenance worker in a day.

Helps to understand the daily average number of work hours for the implementation of work orders for each maintenance person.

Training

Average Annual Training Hours per Maintenance Operator

This is the yearly mean training hours per maintenance operator.

Helps to understand the average annual training hours for maintenance operators.

Average Annual Training Hours per Maintenance Engineer

This is the yearly mean training hours per maintenance engineer.

Helps to understand the average annual training hours for maintenance engineers.

Average Annual Training Hours per Multi-Skilled Maintenance Engineer

This is the yearly mean training hours per multi-skilled maintenance engineer.

Helps to understand the average annual training hours for multi-skilled maintenance engineers.


Competence Development

Number of New Senior Maintenance Engineers

This is the number of maintenance operators who have become maintenance engineers.

Helps to understand the total number of maintenance operators who have risen to the rank of maintenance engineers.

Ratio of New Senior Maintenance Engineers

This is the ratio of the number of maintenance operators who have become maintenance engineers to the total number of maintenance engineers.

Helps to understand the proportion of maintenance operators who have risen up to the rank of maintenance engineers.

Number of New Multi-Skilled Maintenance Engineers

This is the number of maintenance engineers who have become multi-skilled maintenance engineers.

Helps to understand the total number of maintenance engineers who have risen to the rank of multi-skilled maintenance engineers.

Ratio of New Multi-Skilled Maintenance Engineers

This is the ratio of the number of maintenance engineers who have become multi-skilled maintenance engineers to the total number of multi-skilled maintenance engineers.

Helps to understand the proportion of maintenance engineers who have risen to the rank of multi-skilled maintenance engineers.

5.2 Results and discussion related to RQ2

RQ2: How can the developed KPI framework be implemented through eMaintenance?

The second research question is answered by proposing data sources, time definitions, and a general formula for all the KPIs developed in the new framework in Paper A. The results of this part further support the development of the KPI ontology and taxonomy of the proposed KPI framework for its implementation in an eMaintenance environment. Results are shown in Table 5.2.1, Table 5.2.2 and Table 5.2.3.
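The implementation pattern shared by the tables below can be sketched in a few lines: each KPI restricts its source records to a query window via a date field (the Timeline column) and then applies a Sum or Count formula. A hedged Python sketch, with record fields that are illustrative assumptions rather than the actual eMaintenance schema:

```python
from datetime import date

# Hedged sketch (assumed fields, not the actual schema) of the general
# pattern in Tables 5.2.1-5.2.3: filter by a date window, then apply a
# Count/Sum formula. Shown here for the Failure Mode Reporting Rate.

def in_window(d, start, end):
    """Timeline condition: registration date within (query start, query end)."""
    return start <= d <= end

work_orders = [
    {"registered": date(2019, 1, 10), "type": "corrective", "failure_mode": "bearing"},
    {"registered": date(2019, 1, 20), "type": "corrective", "failure_mode": None},
    {"registered": date(2019, 2, 5),  "type": "preventive", "failure_mode": None},
]

start, end = date(2019, 1, 1), date(2019, 1, 31)
window = [w for w in work_orders if in_window(w["registered"], start, end)]

# Failure Mode Reporting Rate: corrective work orders with a failure mode
# recorded, divided by all corrective work orders in the window.
corrective = [w for w in window if w["type"] == "corrective"]
with_mode = [w for w in corrective if w["failure_mode"] is not None]
rate = len(with_mode) / len(corrective)
print(rate)  # 0.5
```

Every other formula in the tables substitutes a different filter and numerator/denominator into this same window-then-formula structure.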

5.2.1 KPI implementation for asset operation management

This section presents the results of KPI implementation for asset operation management; it provides data sources, time definitions and a general formula for the related KPIs. These are shown in Table 5.2.1.

Table 5.2.1 Implementation in an eMaintenance environment for asset operation management

Level 3 | Level 4 | Name | Timeline | General Formula

Overall Asset / Shutdown Statistics

Number of Shutdowns

Stop date/ Registration date ⊆ (query start date, query termination date)

Sum(Number of Registered Stops)

Total Shutdown Time

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Registered Stop Time)

Average Shutdown Time

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Registered Stop Time) / Count(Number of Registered Stops)


Failure Related

Downtime Ratio/Frequency

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Number of Registered Stops where 'fault line') / Sum(Number of Registered Stops)

Downtime Ratio/Time

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Registered Stop Time where 'fault line') / Sum(Registered Stop Time)

Failure Mode Reporting Rate

Work order registration /creation date ⊆ (query start date, query termination date) Item: Work order type; System/section; Work for supplier group; Work supplier attribute

Count(Number of Work Orders where type is 'corrective' and failure mode is not NULL) / Count(Number of Work Orders where type is 'corrective')

Reason for Failure Registration Rate

work order registration date ⊆ (query start date, query termination date)

Count(Number of Work Orders where type is 'corrective' and failure reason is not NULL) / Count(Number of Work Orders where type is 'corrective')

Availability / Operational Availability

Availability

Registration date / stop record date ⊆ (query start date, query termination date)

Total Operating Time / (Total Operating Time + Downtime Due to Maintenance)

Reliability / Mean Reliability Measures

Mean Time Between Failure

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Operating Time where item is repairable) / Count(Number of Failures)

Mean Time To Failure

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Operating Time where item is not repairable) / Count(Number of Failures)

Mean Up Time

Registration date / start record date ⊆ (query start date, query termination date)

Sum(Uptime in Hours) / Number of Uptime Events

Failure Related

Emergency Failure Ratio

Work order registration/creation date ⊆ (query start date, query end date)

Count(Number of Work Orders where 'emergency repair work order') / Count(Number of Work Orders)

Emergency Failed Equipment Ratio

Work order registration/creation date ⊆ (query start date, query end date)

Count(Number of Equipment where 'emergency repair work order') / Count(Number of Work Orders)

Corrective Maintenance Failure Rate

Work order registration/creation date ⊆ (query start date, query end date)

Count(Number of Work Orders where 'corrective') / Count(Number of Work Orders)

Repeat Failure

Work order registration/creation date ⊆ (query start date, query end date)

Count(Number of Work Orders where failure mode count > 1) / Count(Number of Work Orders where 'corrective')


Maintainability / Mean Maintainability Measures

Mean Downtime

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(Downtime in Hours) / Number of Downtime Events

Mean Time Between Maintenance

Registration date / start record date ⊆ (query start date, query termination date)

Sum(Uptime) / Number of Maintenance Actions

Mean Time To Maintain

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(“n” Individual Unit Times to Maintenance) / Count(“n” Units)

Mean Time To Repair

Registration date / stop record date ⊆ (query start date, query termination date)

Sum(“n” Individual Unit Times to Restore) / Count(“n” Units)

False Alarm Rate

Registration date ⊆ (query start date, query termination date)

Count(Number of False Alarms) / Total Number of Alarms

Safety / Occupational Safety

Number of Safety Incidents

Registration date ⊆ (query start date, query termination date)

Count(Number of Safety Incidents)

Injury Rate

Registration date ⊆ (query start date, query termination date)

Count(Number of Safety Incidents where Injury between 'date 1' and 'date 2') / Sum(Working Hours)

Injury Rate per Failure

Registration date ⊆ (query start date, query termination date)

Count(Number of Failures Causing Injury) / Total Number of Failures × 100
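To make the availability and maintainability formulas above concrete, a small worked example; the stop records and hour values are invented for illustration, not drawn from the case study:

```python
# Illustrative sketch of the Availability, MTBF and Mean Downtime formulas
# from Table 5.2.1. All values below are assumed example data.

stops = [  # downtime events within the query window, in hours
    {"duration_h": 2.0},
    {"duration_h": 4.0},
]
total_operating_time_h = 714.0  # uptime over the window (assumed)

downtime = sum(s["duration_h"] for s in stops)

# Availability = Total Operating Time /
#                (Total Operating Time + Downtime Due to Maintenance)
availability = total_operating_time_h / (total_operating_time_h + downtime)

# Mean Time Between Failure = Sum(Operating Time) / Count(Number of Failures)
mtbf = total_operating_time_h / len(stops)

# Mean Downtime = Sum(Downtime in Hours) / Number of Downtime Events
mean_downtime = downtime / len(stops)

print(f"A={availability:.3f}, MTBF={mtbf:.1f} h, MDT={mean_downtime:.1f} h")
```

With 6 hours of downtime against 714 operating hours, availability is 714/720 ≈ 0.992, MTBF is 357 hours and mean downtime is 3 hours.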

5.2.2 KPI implementation for maintenance process management

This section presents the results of KPI implementation for maintenance process management; it presents data sources, time definitions and a general formula for the related KPIs. These are shown in Table 5.2.2.
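As a hedged illustration of how two of the maintenance-strategy formulas below might be evaluated; the equipment records and flag names are assumptions for this sketch, not the actual data sources:

```python
# Hedged sketch (assumed fields) of two maintenance-strategy KPIs
# from Table 5.2.2.

equipment = [
    {"id": 1, "critical": True,  "has_pm_work_order": True},
    {"id": 2, "critical": True,  "has_pm_work_order": False},
    {"id": 3, "critical": False, "has_pm_work_order": True},
    {"id": 4, "critical": False, "has_pm_work_order": False},
]

# Critical Equipment Ratio =
# Count(Equipment where 'Critical') / Count(Equipment)
critical = [e for e in equipment if e["critical"]]
critical_ratio = len(critical) / len(equipment)

# Preventive Maintenance Rate (Critical Equipment) =
# Count(critical equipment with a PM-type work order) / Count(critical equipment)
pm_critical_rate = sum(1 for e in critical if e["has_pm_work_order"]) / len(critical)

print(critical_ratio, pm_critical_rate)  # 0.5 0.5
```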

Table 5.2.2 Implementation in an eMaintenance environment for Maintenance Process Management

Level 3 | Level 4 | Name | Timeline | General Formula

Maintenance Management / Maintenance Strategy

Critical Equipment Ratio

Registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where 'Critical') / Count(Number of Equipment)

Preventive Maintenance Rate

Work order registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where 'PM-type work order') / Count(Number of Equipment)

Predictive Maintenance Rate (PdM)

Registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where 'status monitoring point') / Count(Number of Equipment)


Preventive Maintenance Rate (Critical Equipment)

Work order registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where 'PM-type work order' and 'critical') / Count(Number of Equipment where 'Critical')

Predictive Maintenance Rate (Critical Equipment)

Registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where 'critical' and 'status monitoring point') / Count(Number of Equipment where 'critical')

Run to Failure (RTF) Ratio for Critical Equipment

Registration date ⊆ (query start date, query termination date)

Count(Number of Equipment where critical and no 'status monitoring point' and no PM work order) / Count(Number of Equipment where 'critical')

Planned Maintenance vs Unplanned Maintenance

Work order registration date ⊆ (query start date, query termination date)

Sum(Work Orders where WO_Type is 'plan') / Sum(Work Orders where WO_Type is 'unplanned')

Maintenance Planning

Quantity Related

Number of Planned Work Orders Created

Work order creation date ⊆ (query start date, query end date)

Sum(Work Orders where WO_Type is 'plan')

Time Related

Average Planned Execution Time

Work order creation date ⊆ (query start date, query end date)

Sum(Execution Time where WO_Type is 'plan') / Sum(Work Orders where WO_Type is 'plan')

Resource Related

Total Number of Planned Internal Labour Hours
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Hours where work category 'Internal' and WO_Type 'plan')

Average Planned Internal Labour Hours
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Hours where work category 'Internal' and WO_Type 'plan') / Count(Work Orders where WO_Type 'plan')

Total Number of Planned External Labour Hours
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Hours where work category 'External' and WO_Type 'plan')

Chapter 5 Results and Discussion

Average Planned External Labour Hours
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Hours where work category 'External' and WO_Type 'plan') / Count(Work Orders where WO_Type 'plan')

Planned Number of Materials Used
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Spare Part Number where WO_Type 'plan')

Average Planned Number of Materials Used
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Spare Part Number where WO_Type 'plan') / Count(Work Orders where WO_Type 'plan')

Cost Related

Total Cost of Planned Human Resources
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Rate × Planned Labour Number where WO_Type 'plan' and work category 'external')

Average Planned External Human Resource Costs
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Labour Rate × Planned Labour Time where WO_Type 'plan' and work category 'external') / Count(Work Orders where WO_Type 'plan' and work category 'external')

Total Cost of Planned Materials
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Spare Parts Quantity × Spare Parts Price where WO_Type 'plan')

Planned Average Material Cost
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Number of Spare Parts × Spare Parts Price where WO_Type 'plan') / Sum(Number of Work Orders where WO_Type 'plan')

Labour Cost Ratio
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Total Cost of Planned Human Resources / (Total Cost of Planned Human Resources + Total Cost of Planned Materials)

Planned Material Cost Ratio
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Total Cost of Planned Materials / (Total Cost of Planned Human Resources + Total Cost of Planned Materials)


Maintenance Preparation: Work Order Creation

Planned Start/End Time Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'planned start/end time' is logged) / Count(Completed and Current Work Orders)

Planned Spare Parts Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'spare parts plan' is logged) / Count(Completed and Current Work Orders)

Planned Man-Hour Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'planned labour hours' is logged) / Count(Completed and Current Work Orders)

Planned Downtime Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'planned downtime' is logged) / Count(Completed and Current Work Orders)

Standard Operating Plan Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'standard work plan' is given) / Count(Completed and Current Work Orders)

Planned Work Type Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'work category plan' is logged) / Count(Completed and Current Work Orders)

Job Priority Registration Rate
Timeline: Work order registration/creation date ⊆ (query start date, query end date)
Formula: Count(Completed and Current Work Orders where 'job priority' is given) / Count(Completed and Current Work Orders)

Work Order Feedback

Actual Spare Parts Use Registration Rate
Timeline: Work order registration ⊆ (query start date, query end date)
Formula: Count(Completed Work Orders where 'real use of spare parts' is logged) / Count(Completed Work Orders where 'spare parts plan' is logged)

Actual Man-Hour Registration Rate
Timeline: Work order registration ⊆ (query start date, query end date)
Formula: Count(Completed Work Orders where 'actual man-hours' is logged) / Count(Completed Work Orders)

Actual Downtime Registration Rate
Timeline: Work order registration ⊆ (query start date, query end date)
Formula: Count(Completed Work Orders where 'actual downtime' is logged) / Count(Completed Work Orders where 'planned downtime' is logged)

Work Order Registration Back-Log
Timeline: Work order registration ⊆ (query start date, query end date)
Formula: Work Order Registration Completion Date and Time − Actual Date and Time
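As a concrete illustration of the registration-rate pattern used throughout this block, the minimal sketch below computes the Actual Downtime Registration Rate from a few hypothetical completed work orders; the record layout and values are illustrative assumptions, not the case company's CMMS schema.

```python
# Sketch of a registration-rate KPI (Work Order Feedback): share of
# completed work orders with planned downtime whose 'actual downtime'
# field was actually filled in. Records are illustrative.
completed = [
    {"planned_downtime": 2.0, "actual_downtime": 1.5},
    {"planned_downtime": 4.0, "actual_downtime": None},
    {"planned_downtime": 1.0, "actual_downtime": 1.0},
]

with_plan = [w for w in completed if w["planned_downtime"] is not None]
with_actual = [w for w in with_plan if w["actual_downtime"] is not None]

# Actual Downtime Registration Rate
actual_downtime_rate = len(with_actual) / len(with_plan)  # 2 of 3 logged
```

The same filter-and-count shape applies to each of the other registration rates; only the fields being checked change.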


Work Order Approval

Total Number of Work Orders
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Work Orders where WO_Type 'need to report')

Total Number of Approved Work Orders
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Work Orders where WO_Type 'need to report' and no rejection record is logged)

Total Number of Unapproved Work Orders
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Work Orders where WO_Type 'need to report' and a rejection record is logged)

Work Order Approval Ratio
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Work Orders to be Approved / Number of Planned Work Orders Created

One-Time Approved Work Order Ratio
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Total Number of Approved Work Orders / Total Number of Work Orders

Average Time Lag for Reporting and Approving Work Orders
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Work Order Approval Date − Work Order Report Date where WO_Type 'report required') / Total Number of Approved Work Orders

Maintenance Execution: Quantity Related

Number of Planned Work Orders Completed
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Count(Registered Work Orders where WO_Type is 'plan')

Number of Unplanned Work Orders Completed
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Count(Registered Work Orders where WO_Type is 'unplanned')

Number of Work Orders Completed per Shift
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: (Number of Work Orders Performed as Scheduled / Total Number of Scheduled Work Orders) × 100

Work Order Resolution Rate
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: (Number of Work Orders Performed as Scheduled / Total Number of Scheduled Work Orders) × 100

Time Related

Average Work Order Time
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Execution Time of Completed Non-Stop Work Orders) / Count(Completed Non-Stop Work Orders)

Average Waiting Time for Personnel
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Staff-in-Place Time of Completed Non-Stop Work Orders) / Count(Completed Non-Stop Work Orders)


Average Waiting Time for Spare Parts
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Spare-in-Place Time of Completed Non-Stop Work Orders with a Spare Parts Plan) / Count(Completed Non-Stop Work Orders with a Spare Parts Plan)

Personnel Waiting Time Ratio
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Average Waiting Time for Personnel / Average Work Order Time

Spare Parts Waiting Time Ratio
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Average Waiting Time for Spare Parts / Average Work Order Time

Average Maintenance Outage Time
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Execution Time where a completed start/stop of the main work order exists) / Count(Completed Start/Stop of Main Work Orders)

Average Waiting Time of Personnel during Shutdown
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Staff-in-Place Time where a completed start of the main work order exists) / Count(Completed Start of Main Work Orders)

Average Waiting Time for Spare Parts during Shutdown
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Spare-in-Place Time of Completed Stop-Line Work Orders with a Spare Parts Plan) / Count(Completed Stop-Line Work Orders with a Spare Parts Plan)

Average Waiting Time of Personnel during Shutdown Ratio
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Average Waiting Time of Personnel during Shutdown / Average Maintenance Outage Time

Average Waiting Time for Spare Parts during Shutdown Ratio
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Average Waiting Time for Spare Parts during Shutdown / Average Maintenance Outage Time

Estimated Time vs. Actual Time
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Average Planned Execution Time / Average Work Order Time

Resource Related

Total Number of Internal Labour Hours
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Registered Labour Hours)

Average Internal Labour Hours Used
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Registered Labour Hours) / Count(Completed Work Orders)

Total Number of External Labour Hours
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Registered Labour Hours)


Average External Labour Hours Used
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Registered Labour Hours) / Count(Completed Work Orders)

Number of Materials Used
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Number of Registered Spare Parts)

Average Material Used
Timeline: Work order completion date ⊆ (query start date, query end date)
Formula: Sum(Number of Registered Spare Parts) / Count(Completed Work Orders)

Cost Related

Total Cost of External Human Resources Used
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum(Hourly Rate × Registered External Labour Hours)

Average External Human Resource Costs
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Total Cost of External Human Resources Used / Count(Completed Work Orders where work category 'external')

Total Cost of Materials Used
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum(Spare Parts Quantity × Spare Parts Price)

Average Cost of Materials Used
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Total Cost of Materials Used / Count(Completed Work Orders)

External Labour Costs Ratio
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Total Cost of External Human Resources Used / (Total Cost of External Human Resources Used + Total Cost of Materials Used)

Actual Materials Cost Ratio
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Total Cost of Materials Used / (Total Cost of External Human Resources Used + Total Cost of Materials Used)

Maintenance Cost per Asset
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum over each distinct asset of (Spare Parts Quantity × Spare Parts Price)


Maintenance Assessment: Quality

Number of Completed Work Orders Approved
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Count(Work Orders where Status 'complete' and WO_Type 'approved')

Work Order Approval Ratio
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Count(Work Orders where WO 'require approval') / Count(Work Orders)

One-Time Pass Internal Completion Rate
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Count(Work Orders where WO 'completed for approval' and 'approved once') / Count(Work Orders where work group 'internal')

One-Time Pass External Completion Rate
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Count(Work Orders where WO 'completed for approval' and 'approved once') / Count(Work Orders where work group 'external')

Planning Compliance
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: (Planned Man-Hours Completed / Total Weekly Planned Man-Hours) × 100

Effectiveness

Internal Work Completion Rate
Timeline: Planned work order completion date ⊆ (query start date, query termination date)
Formula: Sum(Completed Work Orders) / Count(Work Orders where work group 'internal')

Outsourced Work Completion Rate
Timeline: Planned work order completion date ⊆ (query start date, query termination date)
Formula: Sum(Completed Work Orders) / Count(Work Orders where work group 'external')

Internal Work Delay Rate
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum(Completed Work Orders where registration date > planned completion date) / Count(Completed Work Orders)

Internal Work Average Delay Period
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum(Registration Date − Planned Completion Date where registration date > planned completion date) / Count(Completed Work Orders where registration date > planned completion date)

External Work Delay Rate
Timeline: Work order registration date ⊆ (query start date, query termination date)
Formula: Sum(Completed Work Orders where registration date > planned completion date) / Count(Completed Work Orders)

External Work Average Delay Period
Timeline: Ticket registration date ⊆ (inquiry start date, inquiry termination date)
Formula: Sum(Registration Date − Planned Completion Date where registration date > planned completion date) / Count(Completed Work Orders where registration date > planned completion date)


Internal Average Execution Time Deviation Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Internal Work Average Actual Execution Time − Internal Work Average Planned Execution Time) / Internal Work Average Planned Execution Time

External Average Execution Time Deviation Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (External Work Average Actual Execution Time − External Work Average Planned Execution Time) / External Work Average Planned Execution Time

Internal Man-Hour Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Sum Registered Labour Time − Sum Planned Labour Time) / Sum Planned Labour Time

Internal Average Man-Hour Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Internal Average Labour Time − Internal Average Planned Labour Time) / Internal Average Planned Labour Time

External Man-Hour Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Sum Registered Labour Time − Sum Planned Labour Time) / Sum Planned Labour Time

External Average Man-Hour Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (External Average Labour Time − External Average Planned Labour Time) / External Average Planned Labour Time

Materials Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Sum Registered Spare Parts − Sum Scheduled Spare Parts) / Sum Scheduled Spare Parts

Average Materials Difference Ratio
Timeline: Work order creation date ⊆ (date of inquiry, date of inquiry termination)
Formula: (Average Registered Spare Parts − Average Scheduled Spare Parts) / Average Scheduled Spare Parts
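To make the table concrete, the minimal sketch below computes two of the KPIs above, the planned-vs-unplanned ratio and a man-hour difference ratio, from a handful of hypothetical work-order records. The field names, dates, and hour values are illustrative assumptions, not the case company's CMMS schema.

```python
from datetime import date

# Hypothetical work-order records (illustrative, not real CMMS data).
work_orders = [
    {"wo_type": "plan", "registered": date(2019, 1, 10),
     "planned_hours": 4.0, "registered_hours": 5.0},
    {"wo_type": "plan", "registered": date(2019, 1, 20),
     "planned_hours": 2.0, "registered_hours": 2.0},
    {"wo_type": "unplanned", "registered": date(2019, 2, 1),
     "planned_hours": 0.0, "registered_hours": 3.0},
]

# Timeline filter: registration date within the query window.
start, end = date(2019, 1, 1), date(2019, 12, 31)
in_window = [w for w in work_orders if start <= w["registered"] <= end]

planned = [w for w in in_window if w["wo_type"] == "plan"]
unplanned = [w for w in in_window if w["wo_type"] == "unplanned"]

# Planned Maintenance vs Unplanned Maintenance (count ratio)
plan_vs_unplanned = len(planned) / len(unplanned)  # 2.0

# Man-Hour Difference Ratio: (registered - planned) / planned
registered_h = sum(w["registered_hours"] for w in planned)
planned_h = sum(w["planned_hours"] for w in planned)
man_hour_diff_ratio = (registered_h - planned_h) / planned_h  # ~0.167
```

Every KPI in the table follows this same shape: a timeline filter on a date field, a Where condition, and a Sum, Count, or ratio over the filtered records.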

5.2.3 KPI implementation for maintenance resource management

This section presents the results of KPI implementation for maintenance resource management; it provides data sources, time definitions, and general formulas for the related KPIs. These are shown in Table 5.2.3.


Table 5.2.3 Implementation in an eMaintenance environment for maintenance resource management

Level  Name  Timeline  General Formula

Spare Parts Management: Inventory Management

Average Spare Part Quantity
Timeline: Spare parts stock date ⊆ (query start date, query end date)
Formula: (Opening Stock + Ending Stock) / 2

Spare Part Capital Utilization
Timeline: Equipment ledger date ⊆ (query start date, query end date)
Formula: Spare Parts Inventory Funds / Sum(Equipment Purchase Price)

Spare Parts Capital Replacement Rate
Timeline: Equipment ledger date ⊆ (query start date, query end date)
Formula: Spare Parts Inventory Funds / Sum(Equipment Replacement Cost)

Spare Part Consumption per Thousand SEK Output
Timeline: Work order registration date ⊆ (query start date, query end date)
Formula: Sum(Total Cost of Spare Parts Consumption) / Total Output Value per 1000 SEK Output

Spare Part Turnover Rate
Timeline: Work order registration date ⊆ (query start date, query end date)
Formula: Sum(Total Cost of Spare Parts Consumption) / Average Spare Parts Inventory Funds

Spare Part Turnover Period
Timeline: Work order registration date ⊆ (query start date, query end date)
Formula: (Average Spare Parts Inventory Funds / Sum(Total Cost of Spare Parts Consumption)) × 365

Slow-Moving Inventory Ratio
Timeline: Equipment ledger date ⊆ (query start date, query end date)
Formula: Number of Spare Parts Not Used / Total Number of Spare Parts

Outsourcing Management: Contractor Statistics

Number of Outsourced Equipment Breakdowns
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Registered Stops where maintenance personnel are 'outsourced')

Number of Outsourced Maintenance Personnel
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'outsourced')

External Maintenance Cost Ratio
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: (Count(External Personnel) × Rate of Work Done / Total Maintenance Cost) × 100

Human Resources Management: Skills Management

Total Number of Maintenance Operators
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'operator')

Total Number of Maintenance Engineers
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'engineer')

Number of Multi-Skilled Maintenance Personnel
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'multi-skill')

Maintenance Operator Rate
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: (Count(Employees who are 'maintenance personnel' and 'operator') / Count(Employees)) × 100

Maintenance Engineer Rate
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: (Count(Employees who are 'maintenance personnel' and 'engineer') / Count(Employees)) × 100

Multi-Skilled Maintenance Personnel Rate
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: (Count(Employees who are maintenance personnel and 'multi-skill') / Count(Employees)) × 100

Work Load Management

Average Number of Work Orders Created per Person
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Number of Work Orders) / Count(Employees responsible for work order creation)

Average Number of Work Orders Executed per Person
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Number of Work Orders) / Count(Employees responsible for work order execution)

Average Daily Workload per Person
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: Sum(Registered Hours) / (Count(Employees responsible for work order execution) × Number of Days during Inquiry)

Training Management

Average Annual Training Hours per Maintenance Operator
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: (Sum(Registered Hours) / Count(Employees who are 'maintenance personnel' and 'operator')) × 365

Average Annual Training Hours per Maintenance Engineer
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: (Sum(Registered Hours) / Count(Employees who are maintenance personnel and 'engineer')) × 365

Average Annual Training Hours per Multi-Skilled Maintenance Engineer
Timeline: Work order creation date ⊆ (query start date, query end date)
Formula: (Sum(Registered Hours) / Count(Employees who are maintenance personnel and 'multi-skilled')) × 365

Competence Development

Number of New Senior Maintenance Engineers
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'operator' with new role 'maintenance engineer')

Percentage of New Senior Maintenance Engineers
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'operator' with new role 'maintenance engineer') × 100

Number of New Multi-Skilled Maintenance Engineers
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'maintenance engineer' with new role 'multi-skilled maintenance engineer')

Percentage of New Multi-Skilled Maintenance Engineers
Timeline: Work supplier list date ⊆ (query start date, query end date)
Formula: Count(Employees who are maintenance personnel and 'maintenance engineer' with new role 'multi-skilled maintenance engineer') × 100

Note that the results from this study can be applied by the studied company. The results will guide their implementation of the KPIs in an eMaintenance environment.
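As a worked illustration of the inventory KPIs in Table 5.2.3, the sketch below evaluates the turnover rate and turnover period from period-level stock figures; the monetary values are illustrative assumptions, not data from the case company.

```python
# Illustrative period figures (SEK); not data from the case company.
opening_stock_value = 120_000.0
ending_stock_value = 80_000.0
consumption_value = 250_000.0   # spare parts consumed during the period

# Average Spare Part Quantity: (opening stock + ending stock) / 2
avg_inventory = (opening_stock_value + ending_stock_value) / 2

# Spare Part Turnover Rate: consumption / average inventory funds
turnover_rate = consumption_value / avg_inventory  # 2.5

# Spare Part Turnover Period: average inventory / consumption * 365 days
turnover_period_days = avg_inventory / consumption_value * 365  # ~146 days
```

A higher turnover rate (shorter turnover period) indicates that spare-parts capital is cycled through the inventory more quickly.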

5.3 Results and discussion related to RQ3

RQ3: How can the KPIs be assessed using novel approaches?

The third research question is answered by developing approaches to assess the technical KPIs in Paper B and Paper C and to assess the soft KPIs in Section 5.3.3 of this thesis.

In Paper B and Paper C, system availability is selected as a KPI to illustrate the proposed novel approaches.

5.3.1 Assessment of technical KPIs

This study proposes a new approach to system availability assessment: a parametric Bayesian approach using MCMC, an approach that takes advantage of both analytical and simulation methods. By using this approach, Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being “averaged”, which better reflects reality and compensates for the limitations of simulation data sample size. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined in a Bayesian Weibull model and a Bayesian lognormal model respectively. The results show that the proposed approach can integrate the analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems).

5.3.1.1 Bayesian Weibull model for TTF

Suppose the Time to Failure (TTF) data t = (t1, t2, ⋯, tn) for n individuals are i.i.d., and each corresponds to a 2-parameter Weibull distribution W(α, γ), where α > 0 and γ > 0. Then the p.d.f. is f(t|α, γ) = αγt^(α−1) exp(−γt^α), while the c.d.f. is F(t|α, γ) = 1 − exp(−γt^α). The reliability function is R(t|α, γ) = exp(−γt^α).

Denote the observed data set as D0 = (n, t). Therefore, the likelihood function for α and γ is

L(α, γ|D0) = ∏(i=1 to n) f(ti|α, γ) = ∏(i=1 to n) αγti^(α−1) exp(−γti^α)    (5.3.1)


In this study, we assume α follows a gamma distribution (Kuo, 1985), denoted by G(a0, b0), as its prior distribution, written as π(α|a0, b0); we assume γ follows a gamma distribution denoted by G(c0, d0) as its prior distribution, written as π(γ|c0, d0). This means

π(α|a0, b0) ∝ α^(a0−1) exp(−b0 α)    (5.3.2)

π(γ|c0, d0) ∝ γ^(c0−1) exp(−d0 γ)    (5.3.3)

Therefore, the joint posterior distribution can be obtained according to equations (5.3.1) to (5.3.3) as

π(α, γ|D0) ∝ L(α, γ|D0) π(α|a0, b0) π(γ|c0, d0),    (5.3.4)

and the parameters' full conditional distributions for Gibbs sampling can be written as

π(α|γ, D0) ∝ L(α, γ|D0) α^(a0−1) exp(−b0 α)    (5.3.5)

π(γ|α, D0) ∝ L(α, γ|D0) γ^(c0−1) exp(−d0 γ)    (5.3.6)
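The full conditionals (5.3.5) and (5.3.6) are known only up to a normalizing constant, so in practice they are sampled with Metropolis steps inside the Gibbs scan. The sketch below does this in pure Python on synthetic TTF data with vague G(0.0001, 0.0001) priors; the synthetic data, proposal scale, and chain length are illustrative assumptions, not the thesis's actual implementation.

```python
import math
import random

random.seed(0)

# Synthetic TTF data drawn from W(alpha, gamma) with the parameterization
# F(t) = 1 - exp(-gamma * t**alpha); true values are illustrative.
alpha_true, gamma_true = 1.5, 0.1
data = [(-math.log(random.random()) / gamma_true) ** (1.0 / alpha_true)
        for _ in range(200)]

a0 = b0 = c0 = d0 = 1e-4  # vague Gamma hyperparameters, as in the case study

def log_posterior(alpha, gamma):
    """Log of (5.3.4): Weibull likelihood (5.3.1) plus Gamma log-priors."""
    if alpha <= 0.0 or gamma <= 0.0:
        return -math.inf
    n = len(data)
    log_lik = (n * (math.log(alpha) + math.log(gamma))
               + (alpha - 1.0) * sum(math.log(t) for t in data)
               - gamma * sum(t ** alpha for t in data))
    log_prior = ((a0 - 1.0) * math.log(alpha) - b0 * alpha
                 + (c0 - 1.0) * math.log(gamma) - d0 * gamma)
    return log_lik + log_prior

# Metropolis-within-Gibbs scan over (5.3.5)-(5.3.6): each parameter is
# updated in turn with a log-normal random walk; log(prop/current) is the
# proposal's Jacobian correction.
alpha, gamma = 1.0, 1.0
samples = []
for it in range(3000):
    for which in ("alpha", "gamma"):
        if which == "alpha":
            prop_a, prop_g = alpha * math.exp(random.gauss(0.0, 0.1)), gamma
            jac = math.log(prop_a / alpha)
        else:
            prop_a, prop_g = alpha, gamma * math.exp(random.gauss(0.0, 0.1))
            jac = math.log(prop_g / gamma)
        if math.log(random.random()) < (log_posterior(prop_a, prop_g)
                                        - log_posterior(alpha, gamma) + jac):
            alpha, gamma = prop_a, prop_g
    if it >= 1000:  # discard burn-in, as in the case study
        samples.append((alpha, gamma))

post_alpha = sum(a for a, _ in samples) / len(samples)
post_gamma = sum(g for _, g in samples) / len(samples)
```

With enough data the posterior means concentrate near the values that generated the data, which is the same mechanism the thesis relies on when fitting the balling-drum TTF records.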

5.3.1.2 Bayesian lognormal model for TTR

Suppose the Time to Repair (TTR) data t = (t1, t2, ⋯, tn) for n individuals are i.i.d., and each ln(ti) corresponds to a normal distribution N(μ, σ²). We thereby obtain ti's lognormal distribution with parameters μ and σ². Then, the p.d.f. and c.d.f. are given by equation (5.3.7) and equation (5.3.8):

f(t|μ, σ) = 1 / (√(2π) σ t) · exp(−(ln t − μ)² / (2σ²))    (5.3.7)

F(t|μ, σ) = Φ((ln t − μ) / σ)    (5.3.8)

Denote the observed data set as D0 = (n, t). Therefore, according to equation (5.3.7), the likelihood function for μ and σ becomes

L(μ, σ|D0) = ∏(i=1 to n) f(ti|μ, σ)    (5.3.9)
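As a quick sanity check on equations (5.3.7) and (5.3.8), the sketch below implements both in pure Python (Φ via math.erf) and verifies that the numerical derivative of the c.d.f. recovers the p.d.f.; the parameter values used are arbitrary illustrations.

```python
import math

# Lognormal p.d.f. (5.3.7) and c.d.f. (5.3.8); Phi evaluated via math.erf.
def lognormal_pdf(t, mu, sigma):
    return math.exp(-(math.log(t) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        math.sqrt(2.0 * math.pi) * sigma * t)

def lognormal_cdf(t, mu, sigma):
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Consistency check: dF/dt recovered by central differences matches f.
mu, sigma, t, h = -0.18, 0.23, 1.0, 1e-5
deriv = (lognormal_cdf(t + h, mu, sigma)
         - lognormal_cdf(t - h, mu, sigma)) / (2.0 * h)
```

The same p.d.f. is what enters the likelihood (5.3.9) term by term.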


In this study, we assume μ follows a normal distribution denoted by N(e0, f0) as its prior distribution, written as π(μ|e0, f0); we assume σ follows a gamma distribution denoted by G(g0, h0) as its prior distribution, written as π(σ|g0, h0). This means

π(μ|e0, f0) ∝ f0^(1/2) exp(−(f0/2)(μ − e0)²)    (5.3.10)

π(σ|g0, h0) ∝ σ^(g0−1) exp(−h0 σ)    (5.3.11)

Therefore, the joint posterior distribution can be obtained according to equations (5.3.9) to (5.3.11) as

π(μ, σ|D0) ∝ L(μ, σ|D0) π(μ|e0, f0) π(σ|g0, h0)    (5.3.12)

Then, the parameters' full conditional distributions for Gibbs sampling can be written as

π(μ|σ, D0) ∝ L(μ, σ|D0) f0^(1/2) exp(−(f0/2)(μ − e0)²)    (5.3.13)

π(σ|μ, D0) ∝ L(μ, σ|D0) σ^(g0−1) exp(−h0 σ)    (5.3.14)

5.3.1.3 Results of the case study

In this case study, three Markov chains are constructed for each MCMC simulation. A burn-in of 1,000 samples is used, with an additional 10,000 Gibbs samples for each Markov chain. Vague prior distributions are adopted as follows:

For the Bayesian Weibull model using TTF data:

α ~ G(0.0001, 0.0001), γ ~ G(0.0001, 0.0001);

For the Bayesian lognormal model using TTR data:

μ ~ N(0, 0.0001), σ ~ G(0.0001, 0.0001).

Using the convergence diagnostics (i.e. checking dynamic traces in Markov chains, determining time series and Gelman-Rubin-Brooks (GRB) statistics, and comparing Monte Carlo error (MC error) with standard deviation (SD)) (Lin, 2014), we consider the following posterior distribution summaries for our models (see Table 5.3.1 and Table 5.3.2), including the parameters’ posterior distribution mean, SD, MC error, and 95% highest posterior distribution density (HPD) interval.


Table 5.3.1 Posterior statistics in Bayesian Weibull model for TTF

Balling drum  Parameter  Mean  SD  MC error  95% HPD interval

1  α  0.5409  0.0231  4.288E-4  (0.4964, 0.5867)
1  γ  0.0928  0.0120  2.235E-4  (0.0712, 0.1178)
2  α  0.5747  0.0288  6.289E-4  (0.5195, 0.6324)
2  γ  0.0642  0.0109  2.334E-4  (0.0451, 0.0876)
3  α  0.5975  0.0251  5.004E-4  (0.5974, 0.6481)
3  γ  0.0712  0.0098  1.942E-4  (0.0707, 0.0922)
4  α  0.5745  0.0245  4.885E-4  (0.5272, 0.6236)
4  γ  0.0750  0.0104  2.028E-4  (0.0564, 0.0970)
5  α  0.5560  0.0216  4.135E-4  (0.5558, 0.5988)
5  γ  0.0958  0.0112  2.158E-4  (0.0952, 0.1196)

Table 5.3.2 Posterior statistics in Bayesian lognormal model for TTR

Balling drum  Parameter  Mean  SD  MC error  95% HPD interval

1  μ  -0.1842  0.1107  6.730E-4  (-0.4015, 0.0342)
1  σ  0.2270  0.0169  9.565E-5  (0.1951, 0.2615)
2  μ  -0.0075  0.1424  8.504E-4  (-0.2845, 0.2697)
2  σ  0.1861  0.0161  9.140E-5  (0.1556, 0.2193)
3  μ  -0.4574  0.1134  6.540E-4  (-0.4578, -0.2354)
3  σ  0.2196  0.0164  9.621E-5  (0.2191, 0.2533)
4  μ  -0.3540  0.1145  7.052E-4  (-0.5787, -0.1297)
4  σ  0.2184  0.0166  9.845E-5  (0.1871, 0.2523)
5  μ  -0.3484  0.1023  6.265E-4  (-0.3486, -0.1488)
5  σ  0.2195  0.0148  8.614E-5  (0.2189, 0.2495)

Using the results from Table 5.3.1 and Table 5.3.2, we calculate the availability of individual balling drums in Table 5.3.3, where MTTF = E[f(t|α, γ)] and MTTR = E[f(t|μ, σ)].

Table 5.3.3 Statistics of individual availability

Balling drum  MTTF mean (95% HPD interval)  MTTR mean (95% HPD interval)  Availability mean (95% HPD interval)

1  145.0 (118.1, 178.0)  7.779 (5.284, 11.58)  0.9487 (0.9229, 0.9665)
2  196.4 (157.7, 256.0)  15.48 (8.927, 26.60)  0.9265 (0.8766, 0.9582)
3  128.7 (127.9, 155.0)  6.381 (6.194, 9.622)  0.9525 (0.9538, 0.9693)
4  148.5 (122.5, 180.3)  7.178 (4.755, 10.86)  0.9536 (0.9291, 0.9702)
5  115.8 (115.1, 139.0)  7.083 (6.926, 10.22)  0.9420 (0.9433, 0.9610)

According to the assumption that it is treated as a parallel system, the system availability of the five balling drums is

A_system = 1 − ∏(i=1 to 5) (1 − A_i) ≈ 0.99
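As a point check on Table 5.3.3 and the parallel-system formula, the sketch below plugs the posterior means of (α, γ) from Table 5.3.1 into the closed-form Weibull mean, MTTF = γ^(−1/α) Γ(1 + 1/α), and combines it with the MTTR means reported in Table 5.3.3. Because the thesis averages over the full posterior rather than plugging in point estimates, small differences from the tabulated values are expected.

```python
import math

# Posterior means from Table 5.3.1 (alpha, gamma per balling drum) and
# MTTR means from Table 5.3.3.
weibull = [(0.5409, 0.0928), (0.5747, 0.0642), (0.5975, 0.0712),
           (0.5745, 0.0750), (0.5560, 0.0958)]
mttr = [7.779, 15.48, 6.381, 7.178, 7.083]

# For f(t|alpha, gamma) = alpha*gamma*t^(alpha-1)*exp(-gamma*t^alpha),
# the mean is MTTF = gamma^(-1/alpha) * Gamma(1 + 1/alpha).
mttf = [g ** (-1.0 / a) * math.gamma(1.0 + 1.0 / a) for a, g in weibull]

# Per-drum availability A_i = MTTF_i / (MTTF_i + MTTR_i)
avail = [f / (f + r) for f, r in zip(mttf, mttr)]

# Five drums in parallel: A_system = 1 - prod(1 - A_i)
unavail = 1.0
for a in avail:
    unavail *= (1.0 - a)
a_system = 1.0 - unavail
```

The point-estimate availabilities land close to the Table 5.3.3 means, and the parallel combination is well above 0.99, consistent with the system-level figure reported above.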


5.3.1.4 Discussion of the case study

Compared to the traditional method of assessing availability, the proposed approach extends the method to equation (5.3.15), where

A = E[f(TTF)] / (E[f(TTF)] + E[f(TTR)]) = E[f(t|α, γ)] / (E[f(t|α, γ)] + E[f(t|μ, σ)])    (5.3.15)

Equation (5.3.15) shows the flexibility of assessing availability according to reality. For one thing, the parametric Bayesian models using MCMC make the calculation of posteriors more feasible. More importantly, however, parametric Bayesian models can be applied to predict TTF, TTR, and system availability in the future.

In this study, since the five balling drums are relatively new, the gamma distributions and normal distributions are selected as vague priors due to lack of prior information. This could be improved with more historical data/experience.

The system configurations could be extended to other more complex architectures (series, k-out-of-n, stand-by, multi-state, or mixed).

The data analysis reveals that for the TTF data, the shape parameter of the Weibull distribution is less than 1; that is, the TTFs show a decreasing trend (as in the early stage of the bathtub curve), which does not match the usual experience with mechanical equipment. However, the TTF data include not only corrective maintenance but also preventive maintenance, and in this case study a high percentage of the TTF work orders are for preventive maintenance. The decreasing trend also indicates that a possible way to improve TTF is to improve the preventive maintenance plan.

The three stages (eight steps) presented in section 3.4.1 can be set within a PDCA cycle. Steps 2 to 4 can be treated as the Plan stage, Steps 5 and 6 as the Do and Check stages respectively, and Step 7 as the Act stage. The outputs from Step 7 become inputs to Step 2 for the next calculation period. Thus, the eight steps follow the PDCA cycle, and the results can be continuously improved.

5.3.2 Assessment and connections of technical and soft KPIs

This study proposes a Bayesian approach to evaluate system availability. In this approach: 1) Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being “averaged”, to better describe real scenarios and overcome the limitations of the data sample size; 2) Markov Chain Monte Carlo (MCMC) simulations are applied to combine the advantages of analytical and simulation methods; 3) a threshold is set for the Time to Failure (TTF) data and Time to Repair (TTR) data, and new datasets with right-censored data are created to reveal the connections between technical and soft KPIs. To demonstrate the approach, a case study of a balling drum system in a mining company is considered. In this system, MTTF and MTTR are determined by a Bayesian Weibull model and a Bayesian lognormal model respectively.


Chapter 5 Results and Discussion  


The results show that the proposed approach can integrate the analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems). By comparing the results with and without considering the threshold for censoring data, we show the threshold can be used as a monitoring line for continuous improvement in the investigated mining company.

5.3.2.1 Likelihood construction for right-censored data

In practice, lifetime data are usually incomplete, and only a portion of the individual lifetimes of assets are known. Right-censored data are often called Type I censoring in the literature; the corresponding likelihood construction problem has been extensively studied. The right-censored data of this study are illustrated in Figure 3.4 and Figure 3.5.

Suppose there are n individuals whose lifetimes and censoring times are independent. The i-th individual has lifetime T_i and censoring time L_i. The T_i's are assumed to have probability density function f(t) and reliability function R(t). The exact lifetime T_i of an individual is observed only if T_i ≤ L_i. Lifetime data involving right censoring can then be conveniently represented by the n pairs of random variables (t_i, v_i), where t_i = min(T_i, L_i), and v_i = 1 if T_i ≤ L_i and v_i = 0 if T_i > L_i. That is, v_i indicates whether the lifetime T_i is censored or not. The likelihood function is deduced as

L(t) = ∏_{i=1}^{n} f(t_i)^{v_i} R(t_i)^{1−v_i}   (5.3.16)
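Equation (5.3.16) translates directly into a log-likelihood: each exact failure contributes log f(t_i), and each right-censored observation contributes log R(t_i). A minimal sketch for the Weibull case used in the next subsection (the parameter and data values here are purely illustrative):

```python
import math

def censored_weibull_loglik(alpha, gamma, times, events):
    """Log of equation (5.3.16) for Weibull data:
    sum of v_i*log f(t_i) + (1 - v_i)*log R(t_i), with
    f(t) = alpha*gamma*t**(alpha-1)*exp(-gamma*t**alpha) and
    R(t) = exp(-gamma*t**alpha)."""
    ll = 0.0
    for t, v in zip(times, events):
        log_R = -gamma * t ** alpha                  # log R(t)
        if v == 1:   # exact failure time observed
            ll += math.log(alpha * gamma) + (alpha - 1.0) * math.log(t) + log_R
        else:        # right-censored at t
            ll += log_R
    return ll

# Illustrative data: three observed failures and one right-censored time
ll = censored_weibull_loglik(1.2, 0.05, [3.0, 10.0, 24.0, 30.0], [1, 1, 1, 0])
print(round(ll, 2))
```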

5.3.2.2 Bayesian modelling for TTF with right-censored data

Suppose the Time to Failure (TTF) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., each following a 2-parameter Weibull distribution W(α, γ), where α > 0 and γ > 0. Then the p.d.f. is f(t|α, γ) = αγ t^(α−1) exp(−γ t^α), the c.d.f. is F(t|α, γ) = 1 − exp(−γ t^α), and the reliability function is R(t|α, γ) = exp(−γ t^α).

Let v = (v_1, v_2, …, v_n) indicate whether each lifetime is right-censored or not, and let the observed dataset be denoted by D_0 = (n, t, v). Following equation (5.3.16), the likelihood function for α and γ is

L(α, γ|D_0) = α^(∑ v_i) exp{∑ [v_i ln γ + v_i(α − 1) ln t_i − γ t_i^α]}   (5.3.17)

In this study, we take α and γ to be independent. Furthermore, we assume a gamma prior G(a_0, b_0) for α, written as π(α|a_0, b_0), and a gamma prior G(c_0, d_0) for γ, written as π(γ|c_0, d_0). This means


π(α|a_0, b_0) ∝ α^(a_0−1) exp(−b_0 α)   (5.3.18)

π(γ|c_0, d_0) ∝ γ^(c_0−1) exp(−d_0 γ)   (5.3.19)

Therefore, the joint posterior distribution can be obtained according to equations (5.3.17) to (5.3.19) as

π(α, γ|D_0) ∝ L(α, γ|D_0) π(α|a_0, b_0) π(γ|c_0, d_0)   (5.3.20)

The parameters’ full conditional distributions for Gibbs sampling can be written as

π(α_j|α_{−j}, γ, D_0) ∝ L(α, γ|D_0) α^(a_0−1) exp(−b_0 α)   (5.3.21)

π(γ_j|α, γ_{−j}, D_0) ∝ L(α, γ|D_0) γ^(c_0−1) exp(−d_0 γ)   (5.3.22)
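Because (5.3.21) and (5.3.22) are not standard distributions, they are sampled with MCMC in practice. The following is a minimal Metropolis-within-Gibbs sketch on synthetic right-censored Weibull data under vague gamma priors; it illustrates the idea only and is not the implementation used in the thesis, so all settings (true parameters, step sizes, chain length) are illustrative:

```python
import math, random

random.seed(1)

def loglik(alpha, gamma, times, events):
    # Log of equation (5.3.17): right-censored Weibull likelihood
    ll = 0.0
    for t, v in zip(times, events):
        ll += v * (math.log(alpha * gamma) + (alpha - 1.0) * math.log(t))
        ll -= gamma * t ** alpha
    return ll

def logpost(alpha, gamma, times, events, a0=1e-4, b0=1e-4, c0=1e-4, d0=1e-4):
    # Vague gamma priors on alpha and gamma, cf. (5.3.18)-(5.3.19)
    if alpha <= 0.0 or gamma <= 0.0:
        return -math.inf
    lp = (a0 - 1.0) * math.log(alpha) - b0 * alpha
    lp += (c0 - 1.0) * math.log(gamma) - d0 * gamma
    return lp + loglik(alpha, gamma, times, events)

# Synthetic data from W(alpha=1.5, gamma=0.2), right-censored at 6.0
a_true, g_true, cens = 1.5, 0.2, 6.0
raw = [(-math.log(random.random()) / g_true) ** (1.0 / a_true) for _ in range(200)]
times = [min(t, cens) for t in raw]
events = [1 if t < cens else 0 for t in raw]

alpha, gamma, draws = 1.0, 1.0, []
for it in range(1500):
    # Random-walk Metropolis update of each parameter in turn
    # (recomputing the current log-posterior each step keeps the sketch short)
    for step_a, step_g in ((0.1, 0.0), (0.0, 0.05)):
        prop_a = alpha + random.gauss(0.0, 1.0) * step_a
        prop_g = gamma + random.gauss(0.0, 1.0) * step_g
        log_ratio = (logpost(prop_a, prop_g, times, events)
                     - logpost(alpha, gamma, times, events))
        if log_ratio > math.log(random.random()):
            alpha, gamma = prop_a, prop_g
    if it >= 500:  # discard burn-in
        draws.append((alpha, gamma))

alpha_hat = sum(a for a, _ in draws) / len(draws)
gamma_hat = sum(g for _, g in draws) / len(draws)
print(round(alpha_hat, 2), round(gamma_hat, 2))
```

With enough data and a vague prior, the posterior means land near the generating values, which is a quick way to validate such a sampler before applying it to real work-order data.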

5.3.2.3 Bayesian modelling for TTR with right-censored data

Suppose the Time to Repair (TTR) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., and each ln(t_i) follows a normal distribution N(μ, σ²), so that t_i follows a lognormal distribution with parameters μ and σ, denoted LN(μ, σ²). Then the p.d.f. and c.d.f. are given by equation (5.3.23) and equation (5.3.24), respectively:

f(t|μ, σ) = (1/(√(2π) σ t)) exp{−(1/(2σ²)) (ln t − μ)²}   (5.3.23)

F(t|μ, σ) = Φ((ln t − μ)/σ)   (5.3.24)

Considering the censoring indicators v = (v_1, v_2, …, v_n) and the observed dataset D_0 = (n, t, v), the likelihood function for μ and σ becomes

L(μ, σ|D_0) = (2πσ²)^(−∑v_i/2) exp{−(1/(2σ²)) ∑ v_i (ln t_i − μ)²} ∏ t_i^(−v_i) [1 − Φ((ln t_i − μ)/σ)]^(1−v_i)   (5.3.25)
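The normal c.d.f. Φ appearing in (5.3.25) is available in closed form via the error function, so the censored lognormal log-likelihood can be evaluated without any special libraries; a minimal sketch (the parameter and data values are illustrative only):

```python
import math

def Phi(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def censored_lognormal_loglik(mu, sigma, times, events):
    """Log of equation (5.3.25): exact repairs contribute the lognormal
    log-density; right-censored ones contribute log(1 - Phi((ln t - mu)/sigma))."""
    ll = 0.0
    for t, v in zip(times, events):
        z = (math.log(t) - mu) / sigma
        if v == 1:
            ll += -math.log(t * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
        else:
            ll += math.log(1.0 - Phi(z))
    return ll

# Illustrative TTR data: three observed repairs, one right-censored at 6
ll = censored_lognormal_loglik(-0.4, 0.36, [0.5, 0.8, 1.2, 6.0], [1, 1, 1, 0])
print(round(ll, 2))
```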


In this study, we assume a normal prior N(e_0, f_0) for μ, written as π(μ|e_0, f_0), and a gamma prior G(g_0, h_0) for σ, written as π(σ|g_0, h_0). This means

π(μ|e_0, f_0) ∝ f_0^(1/2) exp{−(f_0/2)(μ − e_0)²}   (5.3.26)

π(σ|g_0, h_0) ∝ σ^(g_0−1) exp(−h_0 σ)   (5.3.27)

Therefore, the joint posterior distribution can be obtained according to equations (5.3.25) to (5.3.27) as

π(μ, σ|D_0) ∝ L(μ, σ|D_0) π(μ|e_0, f_0) π(σ|g_0, h_0)   (5.3.28)

The parameters’ full conditional distributions for Gibbs sampling can be written as

π(μ_j|μ_{−j}, σ, D_0) ∝ L(μ, σ|D_0) f_0^(1/2) exp{−(f_0/2)(μ − e_0)²}   (5.3.29)

π(σ_j|μ, σ_{−j}, D_0) ∝ L(μ, σ|D_0) σ^(g_0−1) exp(−h_0 σ)   (5.3.30)

5.3.2.4 Results of the case study

In this case study of five balling drums, a Markov chain is constructed for each MCMC simulation. A burn-in of 1,000 samples is used, with an additional 10,000 Gibbs samples for each Markov chain. Vague prior distributions are adopted as follows:

For the Bayesian Weibull model using TTF data:

𝛼~𝐺 0.0001, 0.0001 , 𝛾~𝐺 0.0001, 0.0001 ;

For the Bayesian lognormal model using TTR data:

𝜇~𝑁 0, 0.0001 , 𝜎~𝐺 0.0001, 0.0001 .


Table 5.3.4 Posterior statistics in Bayesian Weibull model with censored TTF data

Balling drum  Parameter  Mean    SD      MC error  95% HPD interval
1             α          0.5399  0.0235  4.34E-4   (0.4954, 0.5870)
              γ          0.0934  0.0122  2.26E-4   (0.0710, 0.1186)
2             α          0.5721  0.0289  6.25E-4   (0.5159, 0.6295)
              γ          0.0651  0.0110  2.39E-4   (0.0459, 0.0890)
3             α          0.5781  0.0251  5.08E-4   (0.5299, 0.6281)
              γ          0.0742  0.0104  2.09E-4   (0.0555, 0.0961)
4             α          0.5713  0.0252  5.14E-4   (0.5228, 0.6210)
              γ          0.0763  0.0109  2.22E-4   (0.0569, 0.0992)
5             α          0.5601  0.0219  3.95E-4   (0.5176, 0.6038)
              γ          0.0940  0.0111  1.99E-4   (0.0735, 0.1175)

Using convergence diagnostics (i.e., checking the dynamic traces of the Markov chains, examining time series and Gelman-Rubin-Brooks (GRB) statistics, and comparing the MC error with the standard deviation (SD)) (Lin, 2014), we consider the posterior distribution statistics shown in Table 5.3.4 and Table 5.3.5, including each parameter's posterior mean, SD, Monte Carlo error (MC error), and 95% highest posterior density (HPD) interval.

Table 5.3.5 Posterior statistics in Bayesian lognormal model with censored TTR data

Balling drum  Parameter  Mean     SD      MC error  95% HPD interval
1             μ          -0.4501  0.0882  4.98E-4   (-0.6250, -0.2776)
              σ           0.3585  0.0267  1.50E-4   (0.3078, 0.4125)
2             μ          -0.3825  0.1082  6.24E-4   (-0.5959, -0.1719)
              σ           0.3277  0.0285  1.56E-4   (0.2742, 0.3853)
3             μ          -0.4510  0.0839  5.10E-4   (-0.6176, -0.2871)
              σ           0.4041  0.0305  1.80E-4   (0.3463, 0.4660)
4             μ          -0.6124  0.0907  5.29E-4   (-0.7924, -0.4351)
              σ           0.3516  0.0266  1.49E-4   (0.3010, 0.4057)
5             μ          -0.6023  0.0812  4.72E-4   (-0.7633, -0.4432)
              σ           0.3524  0.0238  1.39E-4   (0.3072, 0.4007)

Using the results from Table 5.3.4 and Table 5.3.5 for balling drums 1 to 5, we derive the distributions of TTF and TTR as shown in Table 5.3.6.

Table 5.3.6 Statistics of individual balling drums with censored data

Balling drum  TTF: W(α, γ)        TTR: LN(μ, σ)         Availability: 1/(1 + LN(μ, σ)/W(α, γ))
1             W(0.5399, 0.0934)   LN(−0.4501, 0.3585)
2             W(0.5721, 0.0651)   LN(−0.3825, 0.3277)
3             W(0.5781, 0.0742)   LN(−0.4510, 0.4041)
4             W(0.5713, 0.0763)   LN(−0.6124, 0.3516)
5             W(0.5601, 0.0940)   LN(−0.6023, 0.3524)


Using the results in Table 5.3.6, we create the p.d.f. and c.d.f. charts of the TTF and TTR data in Figure 5.3.1 and Figure 5.3.2.

(a) p.d.f. of TTF (b) p.d.f. of TTR

Figure 5.3.1 p.d.f. of TTF and TTR

(a) c.d.f. of TTF (b) c.d.f. of TTR

Figure 5.3.2 c.d.f. of TTF and TTR

As discussed above, system availability can be computed via the TTF and TTR, but a closed-form distribution of system availability cannot be obtained. Therefore, we use an empirical distribution instead of an analytical one. We generate 10,000 samples from the distributions of TTF and TTR and calculate the associated availability. Figure 5.3.3 presents the histogram of the availability of the five balling drums. We use the Kaplan-Meier estimate as the empirical c.d.f.; Figure 5.3.4 shows the empirical distribution of the availability of the five balling drums.
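The simulation step can be sketched as follows: draw TTF from a Weibull distribution, TTR from a lognormal distribution, form A = TTF/(TTF + TTR), and summarize the 10,000 availability samples empirically. The parameter values below are illustrative placeholders, not the fitted posterior means:

```python
import math, random

random.seed(42)

def sample_weibull(alpha, gamma):
    # Inverse-c.d.f. sampling for F(t) = 1 - exp(-gamma * t**alpha)
    u = random.random()
    return (-math.log(1.0 - u) / gamma) ** (1.0 / alpha)

def sample_lognormal(mu, sigma):
    return math.exp(random.gauss(mu, sigma))

# Illustrative parameters for one drum (not the fitted values)
alpha, gamma, mu, sigma = 0.54, 0.09, 0.9, 0.35

avail = []
for _ in range(10_000):
    ttf = sample_weibull(alpha, gamma)
    ttr = sample_lognormal(mu, sigma)
    avail.append(ttf / (ttf + ttr))

avail.sort()
mean_A = sum(avail) / len(avail)
lo, hi = avail[250], avail[9750]  # empirical 95% interval from the sorted sample
print(round(mean_A, 3), round(lo, 3), round(hi, 3))
```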


Figure 5.3.3 Histogram plot of availability

Figure 5.3.4 Empirical 𝑐. 𝑑. 𝑓. of availability

Table 5.3.7 Statistics of individual balling drums with censored data

Balling drum  MTTF: Mean (95% HPD)   MTTR: Mean (95% HPD)   Availability: Mean (95% HPD)
1             145.0 (118.4, 178.2)   2.616 (2.000, 3.437)   0.9821 (0.9753, 0.9873)
2             197.0 (157.6, 247.5)   3.223 (2.301, 4.540)   0.9837 (0.9759, 0.9893)
3             146.0 (120.7, 177.0)   2.239 (1.741, 2.864)   0.9848 (0.9795, 0.9890)
4             149.0 (122.5, 181.8)   2.289 (1.736, 3.041)   0.9847 (0.9788, 0.9891)
5             115.0 (96.40, 137.5)   2.296 (1.796, 2.958)   0.9803 (0.9736, 0.9855)


We calculate the availability of the individual balling drums in Table 5.3.7, where MTTF = E[t|α, γ] and MTTR = E[t|μ, σ]. Then, under the assumption that the five balling drums are configured as a parallel system, their system availability is

A_system = 1 − ∏_{i=1}^{5} (1 − A_i) > 0.99

5.3.2.5 A comparison study

For comparative purposes, Table 5.3.8 and Table 5.3.9 show the statistics of the individual balling drums with no censored data; here, all TTF and TTR data collected in Stage I are treated as reasonable, requiring no improvement.

Table 5.3.8 Statistics of individual balling drums with no censored data

Balling drum  TTF: W(α, γ)        TTR: LN(μ, σ)         Availability: 1/(1 + LN(μ, σ)/W(α, γ))
1             W(0.5409, 0.0928)   LN(−0.1842, 0.2270)
2             W(0.5747, 0.0642)   LN(−0.0075, 0.1861)
3             W(0.5975, 0.0712)   LN(−0.4574, 0.2196)
4             W(0.5745, 0.0750)   LN(−0.3540, 0.2184)
5             W(0.5660, 0.0958)   LN(−0.3484, 0.2195)

Table 5.3.9 Statistics of individual balling drums with no censored data

Balling drum  MTTF: Mean (95% HPD)   MTTR: Mean (95% HPD)   Availability: Mean (95% HPD)
1             145.0 (118.1, 178.0)   7.779 (5.284, 11.58)   0.9487 (0.9229, 0.9665)
2             196.4 (157.7, 256.0)   15.48 (8.927, 26.60)   0.9265 (0.8766, 0.9582)
3             128.7 (127.9, 155.0)   6.381 (6.194, 9.622)   0.9525 (0.9538, 0.9693)
4             148.5 (122.5, 180.3)   7.178 (4.755, 10.86)   0.9536 (0.9291, 0.9702)
5             115.8 (115.1, 139.0)   7.083 (6.926, 10.22)   0.9420 (0.9433, 0.9610)

For convenience, the results are also listed in Table 5.3.10.

Table 5.3.10 Comparison of statistics with and without censored data

Balling drum  Mean of MTTF (no censoring / censored / %)  Mean of MTTR (no censoring / censored / %)  Mean of availability (no censoring / censored / %)
1             145.0 / 145.0 / 0                           7.779 / 2.616 / 66.37                       0.9487 / 0.9821 / 3.52
2             196.4 / 197.0 / 0.30                        15.48 / 3.223 / 79.18                       0.9265 / 0.9837 / 6.17
3             128.7 / 146.0 / 13.4                        6.381 / 2.239 / 64.91                       0.9525 / 0.9848 / 3.39
4             148.5 / 149.0 / 0.33                        7.178 / 2.289 / 68.11                       0.9536 / 0.9847 / 3.26
5             115.8 / 115.0 / 0                           7.083 / 2.296 / 67.58                       0.9420 / 0.9803 / 4.07


In Table 5.3.10, “%” denotes the percentage of improvement after considering the censored data. For instance, for balling drum 1, after considering the censored data, the mean of the MTTF does not change, the MTTR improves by 66.37%, and the availability improves by 3.52%.

According to the results from Table 5.3.10, if 20% of the abnormal TTR data could be improved (for instance, by applying RCA activities, or more specifically, by improving maintenance resource management, including maintenance skills, spare parts, etc.), the TTR could be improved by 66.37%, 79.18%, 64.91%, 68.11%, and 67.58% for drums 1 to 5, respectively. Meanwhile, the availability would be improved by 3.52%, 6.17%, 3.39%, 3.26%, and 4.07% for drums 1 to 5, respectively.

The improvement of the TTF is not as impressive. We right-censor the TTRs under the assumption that they can be improved (censored at six), but the corresponding TTFs can only be marked as censored rather than censored at some specified value, under the assumption that the maintenance interval will not change much. This implies that if the maintenance interval (for instance, for preventive maintenance) could be improved, the TTFs could be improved (censored at a larger value), thereby improving the availability.

5.3.2.6 Connection between technical and soft KPIs

In the studied company, Key Performance Indicators (KPIs) are divided into two groups: technical KPIs and soft KPIs. The former are related to the performance of equipment, whilst the latter focus on maintenance management.

In this case, the abnormal values of TTR are assumed to be mainly caused by lack of maintenance resources, including personnel with suitable skills, spare parts, etc. KPIs for maintenance resources are treated as soft KPIs in the company. Therefore, using our comparative approach, we can easily find out how the technical KPIs (TTF, availability of assets) would be influenced by improving soft KPIs.

5.3.2.7 Application of the threshold as a monitoring line

In this study, the threshold for abnormal TTR values in the work orders is determined by an “80-20” rule from Pareto analysis, in which a TTR value exceeding six is treated as abnormally long and should be improved by RCA activities, including improving maintenance resource management.

Actually, the threshold could be determined by the company according to its business goals; for instance, it could be set at 70% or 90%, or set according to other rules combined with business goals. The threshold could also be changed gradually to improve maintenance step by step, following a PDCA process. In other words, the so-called abnormal data are not necessarily really abnormal. Finally, the threshold can be treated as a monitoring line, permitting the dynamic monitoring of system availability.


5.3.2.8 Other discussions related to this case study

In this study, since the five balling drums are relatively new, the gamma distributions and normal distributions are selected as vague priors due to lack of real prior information. This could be improved with more historical data/experience.

The system configurations could be extended to other more complex architectures (series, k-out-of-n, stand-by, multi-state, or mixed).

The results for system availability are all larger than 0.99, with or without considering censored data. The difference is not very obvious for two reasons: first, the system configuration is assumed to be parallel; second, the individual balling drums have relatively high availabilities (higher than 0.9). The difference (with or without considering censored data) would be more obvious with other system configurations and lower individual availabilities.

For the TTF data, the shape parameter of the Weibull distribution is less than 1; that is, the TTFs show a decreasing trend (as in the early stage of the bathtub curve), which does not match real-world experience with mechanical equipment. However, the TTF data include not only corrective maintenance but also preventive maintenance. The decreasing trend suggests that a possible way to improve TTF is to improve the preventive maintenance plan.

5.3.3 Assessment of soft KPIs

This study proposes approaches to assessing soft KPIs using forecasting methods for continuous, intermittent, and slow-moving data. For soft KPIs (e.g., work orders) whose distribution cannot be easily determined, we apply time series analysis if the data are continuous or fast moving, Croston analysis if the data are intermittent, and bootstrap analysis if the data are slow moving. For the purpose of illustration, this section adopts data from work orders of the studied company.

5.3.3.1 Time series forecasting

Suppose that the time series for some kind of work order (e.g., spare parts with fast consumption) is as below; Table 5.3.11 and Figure 5.3.5 show the time series data. The procedure is shown in Table 5.3.12.

Table 5.3.11 Time series data from the work orders: an example

Year/Month  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
2013        3    1    5    4    1    4    0    2    1    2    4    4
2014        3    1    4    0    1    3    2    1    3    4    7    7
2015        3    5    1    1    3    1    7    1    4    7    4    2
2016        5    2    3    3    3    6    11   2    1    10   3    6
2017        6    5    4    2    1    1    4    4    3    7    5    3
2018        5    8    3    7    11   0    7    7    4    6    5    8


 (a) Time series plot of work order data

 (b) Work order data with no trend

Figure 5.3.5 Time series data from the work orders: an example

Using the auto.arima function, we determine that the best model to fit the data is an ARIMA(2, 1, 2). The auto-correlation function (ACF) and partial auto-correlation function (PACF) diagrams for the data appear below.

Table 5.3.12 The procedure of time series forecasting 

Steps  Procedure
1      Stationarize the time series data, if necessary, by differencing.
2      Transform the time series data, if necessary, to make the variance constant.
3      Study the pattern of autocorrelations and partial autocorrelations to determine whether lags of the stationarized series and/or lags of the forecast errors should be included in the forecasting equation.
4      Fit the suggested model and check its residual diagnostics, particularly the residual ACF and PACF plots, to see whether all coefficients are significant and the entire pattern has been explained.
5      Patterns that remain in the ACF and PACF may suggest the need for additional AR or MA terms.
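Step 3 of the procedure can be carried out with a few lines of code; the sketch below computes the sample ACF and the approximate ±1.96/√n significance bounds drawn in ACF diagrams (applied, for brevity, to the 2013 row of Table 5.3.11 only):

```python
import math

def acf(series, max_lag):
    """Sample autocorrelations rho_1..rho_max_lag (biased estimator)."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((series[i] - mean) * (series[i + k] - mean)
                 for i in range(n - k)) / n
        out.append(ck / c0)
    return out

data = [3, 1, 5, 4, 1, 4, 0, 2, 1, 2, 4, 4]  # 2013 row of Table 5.3.11
rho = acf(data, 5)
bound = 1.96 / math.sqrt(len(data))  # approximate 95% significance bound
print([round(r, 2) for r in rho], round(bound, 2))
```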


(a) ACF diagram

(b) PACF diagram

Figure 5.3.6 ACF and PACF diagrams of the work order data

Results from the forecast model are shown in Figure 5.3.7, with the 12 months (year 2019) of detailed estimates given in Table 5.3.13.

Figure 5.3.7 Forecasted work orders

Table 5.3.13 Forecast of 12 months of work orders

Year/Month  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
2019        5    5    6    5    6    5    6    6    6    6    6    6


 (a) ACF diagram of residuals

 (b) Histogram of forecast errors

Figure 5.3.8 Residual diagnostics: ACF and histogram of forecast errors

Figure 5.3.8 (a) shows that none of the sample autocorrelations at lags 1-20 exceeds the significance bounds; meanwhile, the p-value for the Ljung-Box test is 0.78. Therefore, we conclude that there is little evidence of non-zero autocorrelations in the forecast errors at lags 1-20.

Figure 5.3.8 (b) shows that the forecast errors are roughly normally distributed and the mean seems to be close to zero. Therefore, it is plausible that the forecast errors are normally distributed with mean zero and constant variance.

Additionally, since successive forecast errors do not seem to be correlated, and the forecast errors seem to be normally distributed with mean zero and constant variance, the ARIMA(2,1,2) model does seem to provide an adequate predictive model for the number of work orders in this study.

5.3.3.2 Croston forecasting

The Croston method is used to predict intermittent demand. It performs exponential smoothing separately on two series derived from the general time series: the non-zero demand sizes and the intervals between non-zero demands. An example for spare parts consumption is shown in Table 5.3.14, Figure 5.3.9, and Figure 5.3.10.

 

Table 5.3.14 Spare parts demand

Month   1  2  3  4  5  6  7  8
Demand  0  0  2  0  0  5  0  2
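The core Croston update can be sketched as follows, with a fixed smoothing constant a = 0.1 and initialization from the first non-zero demand; the R implementation used for Table 5.3.15 optimizes these settings, so the numbers below are not expected to match the table exactly:

```python
def croston(demand, a=0.1):
    """Classic Croston: smooth non-zero demand sizes and inter-demand
    intervals separately; the forecast demand rate is z_hat / p_hat."""
    z_hat = p_hat = None
    interval = 0
    for d in demand:
        interval += 1
        if d > 0:
            if z_hat is None:        # initialize on first non-zero demand
                z_hat, p_hat = float(d), float(interval)
            else:                    # exponential smoothing updates
                z_hat += a * (d - z_hat)
                p_hat += a * (interval - p_hat)
            interval = 0
    return z_hat, p_hat, z_hat / p_hat

demand = [0, 0, 2, 0, 0, 5, 0, 2]    # Table 5.3.14
z_hat, p_hat, rate = croston(demand)
sba_rate = (1 - 0.1 / 2) * rate      # Syntetos-Boylan bias correction
print(round(z_hat, 2), round(p_hat, 2), round(rate, 3), round(sba_rate, 3))
```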

The forecasts are made using R. Table 5.3.15 shows the results of the forecasts using the original Croston method, referred to in the table as Croston, and two variants of Croston: the Syntetos-Boylan Approximation (SBA) and Shale-Boylan-Johnston (SBJ). As the table shows, Croston forecasts slightly lower demand at shorter intervals and a very high average demand rate, whereas SBA and SBJ forecast higher demand sizes and longer intervals but lower average demand rates. In Table 5.3.15, the estimated results are rounded from the predicted results, since in reality the number of spare parts must be an integer. The estimated results show that only Croston forecasts slightly lower demand at a shorter interval.

Figure 5.3.9 Histogram of spare parts

Figure 5.3.10 Line graph of spare parts demand

Table 5.3.15 Results of the Croston method and its variants

             Croston                SBA                    SBJ
             Predicted  Estimated   Predicted  Estimated   Predicted  Estimated
Demand       2.54       3           2.66       3           2.65       3
Interval     2.16       2           2.86       3           2.83       3
Demand rate  1.18       1           0.88       1           0.88       1

Figures 5.3.11 to 5.3.13 plot the predicted outputs. The first red line indicates the in-sample fit, while the second red line, from month 9 to month 14, indicates the predicted demand for the next six months.


Figure 5.3.11 Croston forecast

 

Figure 5.3.12 Croston-SBA forecast

 

Figure 5.3.13 Croston-SBJ forecast


5.3.3.3 Bootstrap forecasting

The bootstrap method is a prediction method for cases where the sample size is relatively small and it is difficult to predict accurately based on an assumed distribution. It generates tens of thousands of groups of demand accumulated over the lead time by resampling from the original small sample, and predicts the distribution and average demand of the original time series from the distribution of the newly generated data.

The bootstrap method's prediction results include both a confidence level and the average demand within the period, so it is well suited to risk-aware prediction. Simply put, the bootstrap method resamples from the sample, and the estimator is computed from the resampled data. Assuming the time series data are as shown in Table 5.3.16, Figures 5.3.14 and 5.3.15 show the histogram and line graph of the data.

Table 5.3.16 Bootstrap data sample

Month (year 1)  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
Demand          0    0    19   0    0    0    4    18   17   0    0    0
Month (year 2)  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
Demand          0    0    3    0    0    19   0    0    0    5    4    5

  

Figure 5.3.14 Histogram of spares


Figure 5.3.15 Line graph of spares

The demand for this spare part over the past 24 months is shown in Table 5.3.16. The demand is intermittent, with non-zero demands of (19, 4, 18, 17, 3, 19, 5, 4, 5). Assuming the lead time for the spare part is three months, we now need to predict how much will be consumed in the next three months in order to prepare the order.

We randomly draw a group of data of lead-time size (three months) from the past 24 observations. If, for example, months 4, 8, and 15 are chosen, the corresponding demands are 0, 18, and 3, and the total demand over the period is 21.

Following this method, we randomly generate 100,000 such groups on the computer and calculate the total demand over the lead time for each group.
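The resampling loop can be sketched as follows; since the draws are random and the grouping scheme may differ in detail from the one used in the study, the resulting statistics need not reproduce the figures quoted below exactly:

```python
import random

random.seed(7)

# 24 months of demand from Table 5.3.16
history = [0, 0, 19, 0, 0, 0, 4, 18, 17, 0, 0, 0,
           0, 0, 3, 0, 0, 19, 0, 0, 0, 5, 4, 5]
lead_time = 3  # months

# Resample lead-time-sized groups and accumulate their total demand
totals = [sum(random.choice(history) for _ in range(lead_time))
          for _ in range(100_000)]

totals.sort()
mean_demand = sum(totals) / len(totals)
q95 = totals[int(0.95 * len(totals))]   # 95% service-level quantile
p_zero = totals.count(0) / len(totals)  # probability of zero lead-time demand
print(round(mean_demand, 1), q95, round(p_zero, 2))
```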

Based on the large sample data generated randomly, we plot the demand distribution over the lead time as shown in Figure 5.3.16.

Figure 5.3.16 Demand distribution over the lead time


As can be seen in the figure, the most frequent demand over the lead time is 0, occurring with a probability of about 25%. The demand during the lead time may also be 3, 4, 5, 7, 56, or even more.

The average demand during the lead time is around 14.6. From the distribution of the data above, if the demand over the lead time is to be met at a 95% confidence level, that is, if the spare parts inventory is required to provide 95% spare parts availability, there should be 41 units of spare parts in the warehouse.

5.4 Results and discussion related to RQ4

RQ4: How can the developed KPI framework be improved continuously?

This KPI framework must be improved continuously. To ensure this is possible, we perform a comparison study to reveal the gaps between current and proposed KPIs in the studied mining company, and we adapt a roadmap from the railway industry to develop and review new KPIs.

 

5.4.1 Comparison of current and proposed KPIs

Table 5.4.1 is a summary of current KPIs used in the studied company.

When we compare this list with the KPI framework presented in section 5.1, we see that the existing KPIs in the studied mining company are mostly used to measure the performance of the technical system. There is a lack of KPIs to measure the overall maintenance process, especially soft KPIs.

Our proposed KPI framework has 111 soft KPIs, 85 for the maintenance process and 26 for maintenance resources. Following the maintenance process in the IEV standard, this KPI framework provides KPIs to measure maintenance strategy, maintenance planning, maintenance preparation, maintenance execution and maintenance assessment. These form the basis of the maintenance process. Thus, the proposed system is a more holistic performance measurement system; it will be beneficial to this mining organization, as there are some dependencies between soft KPIs and other organizational KPIs as shown in Figure 1.2.

Soft KPIs can affect the technical KPIs in the long run and increase or decrease utilization and plant speed. When utilization and plant speed decrease, total production output will also decrease. In some cases, quality can be affected. Thus, both the soft KPIs and the technical KPIs affect the production KPIs. The values of the soft and technical KPIs reflect how well maintenance activities are going. Ineffective maintenance will not give optimal production and can affect the quality of the product, in this case iron ore, and/or reduce production times because of breakdowns. Poor production, in turn, will not give good manufacturing execution system (MES) KPIs. This will eventually reduce the marketing KPI values, as customers will not buy products that are not of the highest quality for high prices, and the company’s overall KPIs will suffer. Each KPI in the proposed framework has a relationship of some kind with the KPIs above and below it; thus, changes in one KPI have a ripple effect on other KPIs. Recognizing appropriate soft KPIs and improving their values will increase overall capacity utilization, not just in maintenance but in all areas of the organization.

Table 5.4.1 Current KPIs in the studied mining company
(format: abbreviation, if there is one, followed by the name of the current KPI and its internal description)

General:
T (Availability): Ready time divided by planned operating (calendar) time.
U (Utilization): Utilized time divided by planned operating (calendar) time.
FU (Scheduled Maintenance): Maintenance time.
Internal Interference: Disturbance that interferes with its own facility, section or equipment.
External Interference: Disturbance for/after own installation, section or equipment.
Stop Object Time: Standing time for equipment.
Stop Object Number: Number of stops for equipment.
Stop Cause Time: Standby time for breakdown.
Stop Cause Number: Number of stops for breakdown.
MTBF (Mean Time Between Failures): Operating time plus disturbance time divided by the number of disturbances.
MTTF (Mean Time To Failure): Operating time divided by the number of disturbances.
MDT (Mean Down Time): Stop time divided by the number of disturbances.

Rounds (TK1: Condition Monitoring Operation; TK2: Condition Monitoring Maintenance; SM: Lubricating Rounds; SR: Stop Rounds):
Percentage of Completed Rounds: Completed rounds divided by planned rounds.
Share Wrong: Wrongly completed rounds divided by completed rounds.
Proportion of Delayed Rounds: Completed rounds reported late divided by completed rounds.
Executed Inspection Points: Completed inspection points.
Unsecured Inspection Points: Unfinished inspection points.
Misspelled Inspection Points: Inspection points with errors.
Completed Work Orders from Error Message TK1/TK2/SM/SR: Completed work orders from error reporting.
Proportion of Planned Maintenance: All maintenance in addition to corrective maintenance.
Share of Planned Work Orders: Percentage of work orders with a scheduled start time, planned completion time, actual start time, actual completion time, real time and calculated time divided by the total number of work orders.
Planning Security: How estimated time complies with real time.
Delivery Reliability: Percentage of work orders completed (actual completion time) before the scheduled completion time.
Backlog: Open delayed work orders divided by the total number of open work orders (AOs).

Maintenance Stop:
The Proportion of Planned Maintenance: Percentage of work orders with a scheduled start time, planned completion time, actual start time and actual completion time divided by the total number of work orders.
Planning Security: How scheduled time matches real time.
Delivery Reliability: Percentage of work orders completed (actual completion time) before the scheduled completion time.
Backlog: Open delayed work orders divided by the total number of open work orders.

Maintenance Work Order:
Trend Error Report / AO Proposal: Registration date and number of error reports accumulated over time.
Trend Released Work Orders: Date of release and number of work orders accumulated over time.
Status Closed Work Orders: Date of actual completion time and number of work orders accumulated over time.

Financial and quality:
SEK/Ton: Financial outcome divided by tons of pellets.
K (Product Quality): Approved production divided by total production.
Speed Loss: Under study.
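A few of the ratio-style KPIs in Table 5.4.1 can be illustrated directly from period totals. The sketch below is illustrative only; the numeric values are invented for the example and the helper names are not taken from the company's systems.

```python
# Illustrative computation of three KPIs from Table 5.4.1 using simple
# period totals (all times in hours). Numbers are invented for the example.

def availability(ready_time, calendar_time):
    """T: ready time divided by planned operating (calendar) time."""
    return ready_time / calendar_time

def utilization(utilized_time, calendar_time):
    """U: utilized time divided by planned operating (calendar) time."""
    return utilized_time / calendar_time

def mtbf(operating_time, disturbance_time, n_disturbances):
    """MTBF: operating time plus disturbance time per disturbance."""
    return (operating_time + disturbance_time) / n_disturbances

calendar = 720.0  # hours in the query period (30 days)
print(f"T    = {availability(680.0, calendar):.3f}")
print(f"U    = {utilization(650.0, calendar):.3f}")
print(f"MTBF = {mtbf(650.0, 30.0, 12):.1f} h")
```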

5.4.2 Optimization of the proposed KPIs

A world class organization is dynamic and relies on continuous improvement of its work processes to retain its position as a world leader. An approach from the railway industry, known by the short name In2Rail, is useful for developing new KPIs and for reviewing or improving existing KPIs so that they do not become outdated and irrelevant.

As reported by In2Rail, different organizations use different methods to define and measure their indicators. For example, a group may discuss the company’s required KPIs and decide their relevance. This approach is highly subjective, however; a better and more objective approach is to follow established guidelines to define and evaluate indicators.

A good performance measurement system provides the data to answer the questions an organization needs to answer if it is to manage its performance effectively. For the studied mining company, the proposed KPIs will help to ensure high capacity utilization which will translate into good product quality, reduced costs, improved employee skills, and maintenance innovation. In addition to ensuring the company’s vision is upheld, the KPIs will meet the following goals:

- Be one of the three best in the world at maintenance in the mining industry.
- Work in a safe and environmentally friendly manner.
- Take a uniform and systematic approach to maintenance.
- Have committed and competent staff who consider working in maintenance to be attractive.
- Ensure staff have knowledge of the construction and function of the plants, making it possible to increase the proportion of operator maintenance.
- Design operational safety from a cost-effective perspective.
- Work on continuous improvements as a natural way of working.
- Design maintenance measures based on facts, analysis and long-term needs.
- Design maintenance based on plant values unless otherwise decided.

Although Neely's KPI definition guide, introduced in Chapter 2, is detailed, it has drawbacks. For one thing, the method is very time-consuming. For this reason, we have modified the ten steps of the KPI definition and use only what is of interest to us; we provide a context and a purpose, a time frame definition and a general formula for each KPI suggested in the framework. We have left out who measures and who acts on the data (the owner), as this will be taken care of by the studied company when they use the KPIs. We have also compressed the other eight steps into four broad steps. See "Number of Shutdowns" for clarification.

****************************************************************************************

Number of Shutdowns

Context and Purpose: Number of shutdowns is the total number of times the asset is out of service. This KPI helps to understand the number of times the equipment, production line or process unit is out of service during the query period. It also includes the stopping of equipment or the shutting down of a production line or process unit to conduct planned maintenance. The lower the number of shutdown times, in particular the asset failure times, the better the asset management. An available asset ensures higher production.

Time Definition: Stop date / work order registration date within (query start date, query termination date)

General Formula: Count(number of registered stops)

****************************************************************************************
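To illustrate how such a definition maps onto eMaintenance records, the sketch below counts registered stops whose stop date falls inside the query period. It is a minimal sketch under assumed data: the record layout and the sample dates are invented, not taken from the company's eMaintenance system.

```python
from datetime import date

# Hypothetical stop records as (stop_date, cause) tuples; in practice these
# would be read from the eMaintenance system's work-order database.
stops = [
    (date(2019, 1, 3), "failure"),
    (date(2019, 1, 17), "planned maintenance"),
    (date(2019, 2, 2), "failure"),
    (date(2019, 3, 9), "planned maintenance"),
]

def number_of_shutdowns(stops, start, end):
    """General formula: Count(number of registered stops) whose stop date
    lies within the query period [start, end]."""
    return sum(1 for stop_date, _ in stops if start <= stop_date <= end)

print(number_of_shutdowns(stops, date(2019, 1, 1), date(2019, 2, 28)))  # 3
```

Note that, per the definition above, planned maintenance stops are counted alongside failures; a variant KPI could filter on the cause field.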

After the KPI has been defined, we need to verify it. This can be done with the following verification questions provided by Neely:

1. The Truth Test: Are we really measuring what we set out to measure?
2. The Focus Test: Are we only measuring what we set out to measure?
3. The Relevancy Test: Is it the right measure of the performance factor we want to track?
4. The Consistency Test: Will the data always be collected in the same way, whoever measures them?
5. The Access Test: Is it easy to locate and capture the data needed to make the measurement?
6. The Clarity Test: Is any ambiguity possible in interpreting the results?
7. The So-What Test: Can and will the reported data be acted upon?
8. The Timeliness Test: Can the data be accessed rapidly and frequently enough for action?
9. The Cost Test: Is the measure worth the cost of measurement?
10. The Gaming Test: Is the measure likely to encourage undesirable or inappropriate behaviours?

Since the development of KPIs is an ongoing process, the verification will also be ongoing.


Chapter 6

Conclusions, contributions and future research

This chapter concludes the research, summarizes the contributions and suggests future research.

6.1 Conclusions

This study develops an integrated KPI framework for maintenance management in an eMaintenance environment. It explores the implementation of each proposed KPI in the mining environment. The study proposes a novel approach to assess technical and soft KPIs and discovers connections between technical and soft KPIs. By comparing the current situation in the mining company to the experience of other industries, it suggests ways to optimize the proposed KPIs through continuous improvement.

The three research questions (RQs) given in Chapter 1 have been answered as follows:

RQ1: What is a KPI framework for maintenance management?

This study develops a KPI framework consisting of technical KPIs (linked to machines) and soft KPIs (linked to maintenance workflow) to control and monitor the entire maintenance process to achieve the overall goals of the organization.  

The proposed KPI framework has 134 KPIs divided into technical and soft KPIs as follows: asset operation management has 23 technical KPIs, maintenance process management has 85 soft KPIs and maintenance resources management has 26 soft KPIs. 

The proposed KPI framework makes use of four hierarchical levels.
o The first level, the asset management system, is the highest level in the framework and encapsulates the second, third and fourth levels.
o The second level consists of three broad categories: asset operation management, maintenance process management and maintenance resources management. Asset operation management is used to track the technical aspects of the maintenance process, while maintenance process management and maintenance resources management are used to track the soft aspects of the maintenance process.
o The third level is a further breakdown of the second-level categories. Asset operation management is broken down into five categories: overall asset, availability, reliability, maintainability and safety. Maintenance process management is broken down into five categories: maintenance management, maintenance planning, maintenance preparation, maintenance execution and maintenance assessment. Finally, maintenance resources management is broken down into three categories: spare parts management, outsourcing management and human resources management.
o In level four, the KPIs are grouped into common measures.

Results from this study will be applied to the studied company and will guide the development of the KPI framework.
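The four-level hierarchy can be sketched as a nested data structure. This is an illustrative encoding only; the category names follow the thesis, while the dictionary layout and variable name are assumptions made for the example.

```python
# A minimal sketch of the four-level KPI hierarchy: level 1 is the asset
# management system, level 2 its three broad categories, level 3 their
# sub-categories. Level 4 (individual KPIs grouped into common measures)
# would hang under each level-3 entry.
kpi_framework = {
    "asset management system": {                    # level 1
        "asset operation management": [             # level 2: 23 technical KPIs
            "overall asset", "availability", "reliability",
            "maintainability", "safety",            # level 3
        ],
        "maintenance process management": [         # level 2: 85 soft KPIs
            "maintenance management", "maintenance planning",
            "maintenance preparation", "maintenance execution",
            "maintenance assessment",
        ],
        "maintenance resources management": [       # level 2: 26 soft KPIs
            "spare parts management", "outsourcing management",
            "human resources management",
        ],
    },
}

for category, subcategories in kpi_framework["asset management system"].items():
    print(category, "->", len(subcategories), "third-level categories")
```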

RQ2: How can the developed KPI framework be implemented through eMaintenance?

Implementation of the proposed KPI framework has been discussed, and timelines, definitions and general formulas have been given for each specified KPI. These will further support the development of an ontology and taxonomy for the proposed KPIs.

Results from this study will be applied to the studied mining company and guide the implementation of the proposed KPIs in an eMaintenance environment.

RQ3: How can the KPIs be assessed using novel approaches?

This study proposes parametric Bayesian approaches to assess system availability in the operational stage. With these approaches, MTTF and MTTR can be treated as distributions instead of being “averaged” by point estimation. This better reflects reality.

MCMC is adopted to take advantage of both analytical and simulation methods. Because MCMC handles high-dimensional numerical integration, the selection of prior information and the descriptions of reliability/maintainability can be more flexible and realistic. The limitations of simulation data sample size are also overcome.

In the case studies, TTF and TTR are determined using a Bayesian Weibull model and a Bayesian lognormal model, respectively.
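The core idea can be illustrated with a plain Monte Carlo sketch (the MCMC machinery for sampling the posterior of the Weibull and lognormal parameters is omitted here): per failure/repair cycle, availability is TTF / (TTF + TTR), so drawing Weibull TTF and lognormal TTR samples yields a distribution of availability rather than a single point estimate. All parameter values below are invented for illustration, not fitted to the thesis's case-study data.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def availability_samples(n=10_000,
                         ttf_scale=120.0, ttf_shape=1.5,   # Weibull TTF (hours)
                         ttr_mu=1.0, ttr_sigma=0.5):       # lognormal TTR (log-hours)
    """Draw n (TTF, TTR) pairs and return per-cycle availability values
    A = TTF / (TTF + TTR)."""
    samples = []
    for _ in range(n):
        ttf = random.weibullvariate(ttf_scale, ttf_shape)
        ttr = random.lognormvariate(ttr_mu, ttr_sigma)
        samples.append(ttf / (ttf + ttr))
    return samples

a = availability_samples()
print(f"mean availability  ~ {statistics.mean(a):.3f}")
print(f"5th percentile     ~ {sorted(a)[len(a) // 20]:.3f}")
```

Treating availability as a distribution in this way makes it possible to report percentiles and credible intervals instead of a single "averaged" figure.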

The proposed approach can integrate analytical and simulation methods for system availability assessment and could be applied to other technical problems in asset management (e.g., other industries, other systems).

By comparing the results with and without considering the threshold for censoring data, we show there is a connection between technical and soft KPIs, and the threshold can be used as a monitoring line for continuous improvement in the investigated mining company.

For those soft KPIs for which the distribution of data collected from the eMaintenance system (e.g., work orders) is not easily determined, we could apply approaches such as time series analysis (if the data are "fast moving"), Croston analysis (if the data are "intermittent"), or bootstrap analysis (if the data are "slow moving"). The proposed approaches from this study could also be applied to other technical problems in asset management (e.g., other industries, other systems).
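For the intermittent case, Croston's method can be sketched in a few lines: it separately smooths the nonzero demand sizes and the intervals between them, and forecasts their ratio. The smoothing constant and the initialisation below are simple textbook choices for illustration, not the thesis's implementation, and the sample series is invented.

```python
def croston(demand, alpha=0.1):
    """Croston's method sketch for an intermittent series (e.g., monthly
    counts of a rarely occurring work-order type). Returns the one-step-ahead
    forecast after processing the whole series."""
    z = None   # smoothed nonzero demand size
    p = None   # smoothed inter-demand interval
    q = 1      # periods elapsed since the last nonzero demand
    for d in demand:
        if d > 0:
            if z is None:                 # initialise on first nonzero value
                z, p = d, q
            else:
                z = z + alpha * (d - z)   # update demand-size estimate
                p = p + alpha * (q - p)   # update interval estimate
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0

# A demand of 3 every third period forecasts an average of 1 per period.
print(croston([0, 0, 3, 0, 0, 3, 0, 0, 3]))  # 1.0
```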


RQ4: How can the developed KPI framework be improved continuously?

A comparison study shows that, at the moment, the studied company uses about 26 KPIs. Of these, one KPI, "speed loss", is under study and works in only one processing plant. The existing KPIs are mostly used to measure the performance of the technical system. The company lacks KPIs to measure the overall maintenance process, especially soft KPIs.

This study introduces an approach for developing new KPIs and for reviewing or improving existing KPIs so that they do not become outdated and irrelevant. This is necessary because organizations are not static entities. A world class organization is dynamic and relies on continuous improvement of its work processes to retain its position as a world leader.

6.2 Contributions

The main contributions of this research can be summarized as follows:

An integrated KPI framework for maintenance management in a mining company is developed. This framework consists of technical KPIs (linked to machines) and soft KPIs (linked to maintenance workflow) to control and monitor the entire maintenance process to achieve the overall goals of the organization.  

Implementation of the developed KPI framework is discussed, and timelines, definitions and general formulas are given for each specified KPI. Results from this study could be applied to the studied company to develop KPI ontology and taxonomy and guide the implementation of the proposed KPIs through eMaintenance.

Novel approaches to assessing both technical and soft KPIs are proposed. In particular, Bayesian approaches using MCMC can take advantage of the analytical and simulation methods for assessing system availability; by setting up a threshold for censoring data, we show there is a connection between technical and soft KPIs, and the threshold can be used as a monitoring line for continuous improvement in the investigated mining company.

Optimization of the developed KPI framework is discussed and an approach to continuous improvement is suggested.

6.3 Future research

The following are considered interesting topics for future research.

Since the research was motivated and financed by a particular mining company, the developed technical (linked to machines) and soft (linked to workflow) KPIs are based on the company's business strategies. In the future, a more general framework could be studied for other companies and industries.

The link-and-effect model is not dealt with in this study; it should be a focus of future work.


In this study, costs are not considered sufficiently, as only maintenance costs are included. In the future, costs could be highlighted in the KPI framework's development, implementation, assessment, and optimization.

In this study, the emphasis is on developing a new KPI assessment, so the research uses only a few KPIs as examples because of time and project limitations. In the future, more assessment approaches could be explored.

The integrated KPIs are proposed in general terms. KPIs for different/specified plants, processes and maintenance tasks (e.g., condition monitoring, lubrication) are not studied separately. Further work could be done to address this limitation.


REFERENCES

Al-Najjar, B. (2007). The lack of maintenance and not maintenance which costs: A model to describe and quantify the impact of vibration-based maintenance on company's business. International Journal of Production Economics, 107(1), 260-273.

Alegre, H., Baptista, J. M., Cabrera Jr, E., Cubillo, F., Duarte, P., Hirner, W., & Parena, R. (2017). Performance indicators for water supply services. Third Edition. London, UK: IWA Publishing.

Aljumaili, M. (2016). Data quality assessment: Applied in maintenance (PhD Thesis). Luleå University of Technology, Luleå, Sweden.

Bourne, M., Melnyk, S., & Bititci, U. S. (2018). Performance measurement and management: Theory and practice. International Journal of Operations & Production Management, 38(11), 2010-2021.

Bourne, M., Mills, J., Wilcox, M., Neely, A., & Platts, K. (2000). Designing, implementing and updating performance measurement systems. International Journal of Operations & Production Management, 20(7), 754-771.

Box, G. E., & Jenkins, G. M. (1968). Some recent advances in forecasting and control. Journal of the Royal Statistical Society, Series C (Applied Statistics), 17(2), 91-109.

Box, G., & Tiao, G. (1992). Bayesian inference in statistical analysis. New York: John Wiley & Sons.

Brender, D. M. (1968). The Bayesian assessment of system availability: Advanced applications and techniques. IEEE Transactions on Reliability, 17(3), 138-147.

Brender, D. M. (1968). The prediction and measurement of system availability: A Bayesian treatment. IEEE Transactions on Reliability, 17(3), 127-138.

Campbell, J. D., & Reyes-Picknell, J. (2006). Strategies for excellence in maintenance management. Productivity Press.

CEN. (2007). SS-EN 15341:2007 Maintenance: Maintenance key performance indicators. (Swedish Standards Institute ed.) European Committee for Standardization (CEN).

Coetzee, J. (1997). Towards a general maintenance model. Proceedings of IFRIM '97, 1-9.

Cox, D., & Oakes, D. (1984). Analysis of survival data. New York: Chapman and Hall.

Dekker, R., & Groenendijk, W. (1995). Availability assessment methods and their application in practice. Microelectronics Reliability, 35(9-10), 1257-1274.

Del-Río-Ortega, A., Resinas, M., & Ruiz-Cortés, A. (2010). Defining process performance indicators: An ontological approach. OTM Confederated International Conferences: On the Move to Meaningful Internet Systems (pp. 555-572). Berlin, Heidelberg: Springer.

Del-Río-Ortega, A., Resinas, M., Cabanillas, C., & Ruiz-Cortés, A. (2013). On the definition and design-time analysis of process performance indicators. Information Systems, 38(4), 470-490.

Diamantini, C., Genga, L., Potena, D., & Storti, E. (2014). Collaborative building of an ontology of key performance indicators. Conference proceeding. OTM Confederated International Conferences, October 27-31, 2014, Amantea, Italy.

Dwight, R. (1995). Concepts for measuring maintenance performance. New Developments in Maintenance, 109-125.

Dwight, R. (1999a). Frameworks for measuring the performance of the maintenance system in a capital intensive organisation (PhD Thesis). Department of Mechanical Engineering, University of Wollongong, Wollongong.

Dwight, R. (1999b). Searching for real maintenance performance measures. Journal of Quality in Maintenance Engineering, 5(3), 258-275.

Faghih-Roohi, S., Xie, M., Ng, K. M., & Yam, R. C. (2014). Dynamic availability assessment and optimal component design of multi-state weighted k-out-of-n systems. Reliability Engineering and System Safety, 123, 57-62.

Galar, D., & Kumar, U. (2016). Maintenance audits handbook: A performance measurement framework (First Edition). Florida, USA: CRC Press.

Gelman, A., Carlin, J., Stern, H., & Rubin, D. (2004). Bayesian data analysis. New York: Chapman and Hall/CRC.

Hougaard, P. (2000). Analysis of multivariate survival data. New York: Springer-Verlag.

IEC. (2004). 60300-3-14, "Dependability management - Part 3-14: Application guide - Maintenance and maintenance support". British Standard.

Jantunen, E., Emmanouilidis, C., Arnaiz, A., & Gilabert, E. (2011). E-maintenance: Trends, challenges and opportunities for modern industry. IFAC Proceedings, 44(1), 453-458.

Kajko-Mattsson, M., Karim, R., & Mirijamdotter, A. (2011). Essential components of eMaintenance. International Journal of Pedagogy, Innovation and New Technologies, 7(6), 555-571.

Kans, M., & Galar, D. (2017). The impact of Maintenance 4.0 and big data analytics within strategic asset management. Conference proceedings. 6th International Conference on Maintenance Performance Measurement and Management (MPMM 2016), 28 November 2016, Luleå, Sweden.

Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard: Measures that drive performance. Harvard Business Review, 70(1), 71-79.

Karim, R. (2008). A service-oriented approach to eMaintenance of complex technical systems (PhD Thesis). Luleå University of Technology, Luleå, Sweden.

Karim, R., Westerberg, J., Galar, D., & Kumar, U. (2016). Maintenance analytics: The new know in maintenance. IFAC-PapersOnLine, 49(28), 214-219.

Khan, M. A., & Islam, H. (2012). Bayesian analysis of system availability with half-normal life time. Quality Technology and Quantitative Management, 9(2), 203-209.

Koc, M., & Lee, J. (2001). A system framework for next-generation eMaintenance systems. China Mechanical Engineering, 5, 14.

Koc, M., & Lee, J. (2003). E-manufacturing: Fundamentals, requirements and expected impacts. International Journal of Advanced Manufacturing Systems, 6(1), 29-46.

Kour, R., Aljumaili, M., Karim, R., & Tretten, P. (2019). eMaintenance in railways: Issues and challenges in cybersecurity. Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit. doi:10.1177/0954409718822915

Kourentzes, N. (2014). On intermittent demand model optimisation and selection. International Journal of Production Economics, 156, 180-190.

Kumar, U., & Ellingson, H. (2000). Development and implementation of maintenance performance indicators for the Norwegian oil and gas industry. Conference proceeding. The 14th International Maintenance Congress (Euro-maintenance 2000), 7-10 March 2000, Gothenburg, Sweden.

Kumar, U., Galar, D., Parida, A., Stenström, C., & Berges, L. (2013). Maintenance performance metrics: A state-of-the-art review. Journal of Quality in Maintenance Engineering, 19(3), 233-277.

Kuo, W. (1985). Bayesian availability using gamma distributed priors. IIE Transactions, 17(2), 132-140.

Lawless, J. (1982). Statistical models and methods for lifetime data. New York: John Wiley & Sons.

Lin, J. (2014). An integrated procedure for Bayesian reliability inference using Markov chain Monte Carlo methods. Journal of Quality and Reliability Engineering, Volume 2014, 1-16.

Lin, J. (2016). Bayesian reliability with MCMC: Opportunities and challenges. Book chapter in Current Trends in Reliability, Availability, Maintainability and Safety. International Conference on Reliability, Safety and Hazard: Advances in Reliability, Maintenance and Safety (ICRESH-ARMS 2015), June 14-16, Luleå, Sweden.

Lingle, J. H., & Schiemann, W. A. (1996). From balanced scorecard to strategic gauges: Is measurement worth it? Management Review, 85(3), 56.

Löfsten, H. (2000). Measuring maintenance performance: In search for a maintenance productivity index. International Journal of Production Economics, 63(1), 47-58.

Marquez, A. C., & Iung, B. (2007). A structured approach for the assessment of system availability and reliability using Monte Carlo simulation. Journal of Quality in Maintenance Engineering, 13(2), 125-136.

Marquez, A. C., Heguedas, A. S., & Iung, B. (2005). Monte Carlo-based assessment of system availability: A case study for cogeneration plants. Reliability Engineering and System Safety, 88, 273-289.

Muchiri, P., Pintelon, L., Gelders, L., & Martin, H. (2011). Development of maintenance function performance measurement framework and indicators. International Journal of Production Economics, 131(1), 295-302.

Muller, A., Crespo Marquez, A., & Iung, B. (2008). On the concept of eMaintenance: Review and current research. Reliability Engineering & System Safety, 93(8), 1165-1187.

Neely, A. (1999). The performance measurement revolution: Why now and what next? International Journal of Operations & Production Management, 19(2), 205-228.

Negri, E., Fumagalli, L., & Garetti, M. (2015). Approach for the use of ontologies for KPI calculation in the manufacturing domain. XX Summer School "Francesco Turco": Industrial Systems Engineering (pp. 16-18). Napoli, Italy.

Ocnasu, A. B., Bésanger, Y., Rognon, J. P., & Carer, P. (2007). Distribution system availability assessment: Monte Carlo and antithetic variates method. Conference proceeding. 19th International Conference on Electricity Distribution, 21-24 May 2007, Vienna, Austria.

Parida, A. (2006). Development of a multi-criteria hierarchical framework for maintenance performance measurement: Concepts, issues and challenges (PhD Thesis). Luleå University of Technology, Luleå, Sweden.

Parida, A., & Chattopadhyay, G. (2007). Development of a multi-criteria hierarchical framework for maintenance performance measurement (MPM). Journal of Quality in Maintenance Engineering, 13(3), 241-258.

Parida, A., & Kumar, U. (2004). Managing information is key to maintenance effectiveness. Conference proceeding. IMS 2004 International Conference on Intelligent Maintenance Systems, 15-17 July, Arles, France.

Parida, A., & Kumar, U. (2006). Maintenance performance measurement (MPM): Issues and challenges. Journal of Quality in Maintenance Engineering, 12(3), 239-251.

Parmenter, D. (2007). Key performance indicators: Developing, implementing, and using winning KPIs. John Wiley & Sons.

Pascual, D. G., & Kumar, U. (2016). Maintenance audits handbook: A performance measurement framework. CRC Press.

Popova, V., & Alexei, S. (2010). Modeling organizational performance indicators. Information Systems, 35(4), 505-527.

Press, S. (1991). Bayesian statistics: Principles, models, and applications. New York: John Wiley & Sons.

Pritchard, R. D., Roth, P. L., Jones, S. D., & Roth, P. G. (1990). Implementing feedback systems to enhance productivity: A practical guide. National Productivity Review, 10(1), 57-67.

Rachman, A. (2019). Artificial intelligence approaches to lean risk-based inspection assessment (PhD Thesis). University of Stavanger, Stavanger, Norway.

Raje, D., Olaniya, R., Wakhare, P., & Deshpande, A. (2000). Availability assessment of a two-unit stand-by pumping system. Reliability Engineering and System Safety, 68, 269-274.

Schmidt, B. (2018). Toward predictive maintenance in a cloud manufacturing environment: A population-wide approach (PhD Thesis). University of Skövde, Skövde, Sweden.

Shale, E. A., Boylan, J. E., & Johnston, F. (2006). Forecasting for intermittent demand: The estimation of an unbiased average. Journal of the Operational Research Society, 57(5), 588-592.

Sharma, K., & Bhutani, R. (1993). Bayesian analysis of system availability. Microelectronic Reliability, 33(6), 809-811.

Shenstone, L., & Hyndman, R. J. (2005). Stochastic models underlying Croston's method for intermittent demand forecasting. Journal of Forecasting, 24(6), 389-402.

Stenström, C. (2014). Operation and maintenance performance of rail infrastructure: Model and methods (Doctoral Thesis). Luleå University of Technology, Luleå, Sweden.

Syntetos, A. A., & Boylan, J. E. (2001). On the bias of intermittent demand estimates. International Journal of Production Economics, 71(1-3), 457-466.

Söderholm, P. (2005). Maintenance and continuous improvement of complex systems: Linking stakeholder requirements to the use of built-in test systems (Doctoral Thesis). Luleå University of Technology, Luleå, Sweden.

Teunter, R. H., Syntetos, A. A., & Babai, M. Z. (2011). Intermittent demand: Linking forecasting to inventory obsolescence. European Journal of Operational Research, 214(3), 606-615.

Therneau, T., & Grambsch, P. (2000). Applied survival analysis. New York: Springer.

Tsang, A. H. (2000). Maintenance performance management in capital intensive organizations (PhD Thesis). University of Toronto, Toronto, Canada.

Tsang, A. H. (2002). Strategic dimensions of maintenance management. Journal of Quality in Maintenance Engineering, 8(1), 7-39.

Ucar, M., & Qiu, R. G. (2005). eMaintenance in support of e-automated manufacturing systems. Journal of the Chinese Institute of Industrial Engineers, 22(1), 1-10.

Weber, A., & Thomas, R. (2005). Key performance indicators: Measuring and managing the maintenance function. Ivara Corporation.

Wireman, T. (2005). Developing performance indicators for managing maintenance. Second Edition. New York: Industrial Press Inc.

Yasseri, S. F., & Bahai, H. (2018). Availability assessment of subsea distribution systems at the architectural level. Ocean Engineering, 153, 399-411.

Yin, R. K. (2014). Case study research: Design and methods. Third Edition. Thousand Oaks, California, USA: Sage Publications.

Zio, E., Marella, M., & Podofillini, L. (2007). A Monte Carlo simulation approach to the availability assessment of multi-state systems with operational dependencies. Reliability Engineering and System Safety, 92, 871-882.

   


APPENDIX

 

Figure A.1 Overall asset KPIs

 

Figure A.2 Availability KPIs

 

Figure A.3 Reliability KPIs



 

Figure A.4 Maintainability KPIs

 

 

Figure A.5 Safety KPIs

 



 

 

Figure A.6 Maintenance Strategy KPIs

 

 

 

 

 

 

 

   


Figure A.7 Maintenance Planning KPIs


Figure A.8 Maintenance Preparation KPIs
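Figure A.8 centres on registration-rate KPIs: the share of work orders in which a given field (actual man-hours, downtime, spare parts used, etc.) has actually been filled in. The sketch below is a minimal illustration of that idea; the record layout and field names ("actual_man_hours", "downtime_hours") are hypothetical assumptions, not the company's actual schema.

```python
# Illustrative sketch of a registration-rate KPI (field names are assumptions).
# A zero or empty value is treated as "not registered" here for simplicity.

def registration_rate(work_orders, field):
    """Share of work orders with a non-empty value for `field`."""
    if not work_orders:
        return 0.0
    registered = sum(1 for wo in work_orders if wo.get(field) not in (None, "", 0))
    return registered / len(work_orders)

orders = [
    {"id": 1, "actual_man_hours": 3.5, "downtime_hours": 1.0},
    {"id": 2, "actual_man_hours": None, "downtime_hours": 0.5},
    {"id": 3, "actual_man_hours": 2.0, "downtime_hours": None},
]

print(registration_rate(orders, "actual_man_hours"))  # 2 of 3 orders
print(registration_rate(orders, "downtime_hours"))    # 2 of 3 orders
```

The same helper covers every "… Registration Rate" box in the figure by changing only the field name.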

 


Figure A.9 Maintenance Execution KPIs



  

Figure A.10 Maintenance Assessment KPIs

 



  

Figure A.11 Spare Parts Management KPIs

 

  

 

Figure A.12 Outsourcing Management KPIs

 

 



  

  

Figure A.13 Human Resources Management KPIs

 

 



 



 

Part II

Appended Papers


Paper A

Development and implementation of a KPI framework for maintenance management in a mining company

Saari, E., Sun, H-L., Lin, J. and Karim, R. 2019. Development and implementation of a KPI framework for maintenance management in a mining company. International Journal of System Assurance Engineering and Management. Under review.



Development and implementation of a KPI framework for maintenance management in a mining company

Esi Saari1, Huiling Sun2, Jing Lin1*, Ramin Karim1

1. Division of Operation and Maintenance Engineering, Luleå University of Technology, Luleå, Sweden; 2. SKF (China) Co., Ltd., Beijing, China

*Corresponding author; E-mail address: [email protected]

Abstract: Performance measurement is critical if organizations want to thrive. The motivation for this research originated in the project "Key Performance Indicators (KPI) for control and management of the maintenance process through eMaintenance", initiated and financed by a mining company in Sweden. The main purpose is to develop an integrated KPI framework for the studied mining company's maintenance and to implement it through eMaintenance. The proposed KPI framework has 134 KPIs divided into technical and soft KPIs as follows: asset operation management has 23 technical KPIs, maintenance process management has 85 soft KPIs, and maintenance resources management has 26 soft KPIs. Its implementation is discussed, and timelines, definitions and general formulas are given for each specified KPI. The results of this study will be applied in the studied company and will guide the implementation of these KPIs through eMaintenance.

Keywords: asset management, performance management, Key Performance Indicator (KPI), mining industry.

1. Introduction

Performance measurement (PM) is critical to the success of organizations (Bourne, Melnyk, & Bititci, 2018). Those using a balanced or integrated performance measurement system perform better than those that do not (Lingle & Schiemann, 1996), because performance measures provide an important link between strategies and action and thus support the implementation and execution of improvement initiatives (Muchiri, Pintelon, Gelders, & Martin, 2011).

PM requires the formulation of Key Performance Indicators (KPIs), a set of measures that focus on those aspects of organizational performance that are most critical for current and future success (Parmenter, 2007). KPIs demonstrate how effectively a company is achieving key business objectives. They evaluate the company’s success in reaching targets and the degree to which areas within the company (e.g., maintenance) achieve their goals.

The influence of maintenance on profitability is too high to ignore (Kumar & Ellingson, 2000). With reduced natural resource reserves, e.g. iron ore, oil and gas, and the unstable prices of these resources on the global market, the process industries working with these resources, such as mining companies, must optimise the maintenance process (Kumar & Ellingson, 2000). Because maintenance performs a service function for production, its merits or shortcomings are not always immediately apparent (Muchiri et al., 2011), but it must be measured for companies to remain profitable. This requires the development and use of a suitable set of KPIs.


In this paper, we propose a KPI framework to measure the maintenance performance of a Swedish mining company. We define the KPI framework as a system that combines all facets of maintenance actions into a set of measures focusing on aspects of maintenance performance that are most critical for the current and future success of the organization, thus providing a means to quantify the efficiency and effectiveness of its maintenance actions. The importance of an integrated KPI framework for controlling and monitoring the maintenance process cannot be overstated. It will enable the organization to create internal benchmarks, produce high-quality products at moderate prices, and retain the organization's place as a market leader.

Many authors have written about PM, including Kaplan and Norton (1992), Neely (1999), Bourne, Mills, Wilcox, Neely, and Platts (2000), Campbell and Reyes-Picknell (2006), Coetzee (1997), Weber and Thomas (2005), Dwight (1995; 1999b), and Tsang (2000). Some authors have looked specifically at maintenance performance measurement (MPM), including Kumar, Galar, Parida, Stenström, and Berges (2013), Parida and Chattopadhyay (2007). These authors proposed measuring the performance of maintenance by focusing on the maintenance process or on the maintenance results (Kumar et al., 2013).

Dwight (1999a) suggested a “value-based performance measurement”, a system audit approach to measuring the maintenance system’s contribution to organizational success. His approach takes into account the impact of maintenance activities on the future value of the organization, with an emphasis on variations in the lag between actions and outcomes.

Tsang (1998) proposed a strategic approach to managing maintenance performance using a balanced scorecard (Kaplan and Norton, 1992; Kaplan and Norton, 1996). However, the success of the balanced scorecard approach depends on how individual companies use it.

Löfsten (2000) advocated the use of aggregated measures like the maintenance productivity index, which measures the ratio of maintenance output to maintenance input. However, Muchiri et al. (2011) argue that Löfsten's approach gives a very limited view of maintenance performance and that it is difficult to quantify different types of maintenance inputs.

More recently, Parida and Chattopadhyay (2007) proposed a multi-criteria hierarchical framework for MPM; the framework includes multi-criteria indicators for each level of management, i.e. the strategic, tactical and operational levels. These multi-criteria indicators are categorized as equipment-/process-related (e.g. capacity utilization, OEE, availability, etc.), cost-related (e.g. maintenance cost per unit of production cost), maintenance-task-related (e.g. the ratio between planned and total maintenance tasks), customer and employee satisfaction, and health, safety and the environment, with indicators proposed for each level of management in each category.

Al-Najjar (2007) designed a model to describe and quantify the impact of maintenance on a business’s key competitive objectives related to production, quality and cost. The model can be used to assess the cost effectiveness of maintenance investment and provide strategic decision support for different improvement plans.

Muchiri et al. (2011) proposed an MPM system based on the maintenance process and maintenance results. These authors sought to align maintenance objectives with manufacturing and corporate objectives and provide a link between maintenance objectives, maintenance


process/efforts and maintenance results. Based on this conceptual framework, they identified performance indicators of the maintenance process and maintenance results for each category. Their conceptual framework provides a generic approach to developing maintenance performance measures with room for customization for individual company needs.

The above proposals are based on both new and existing techniques; some are quantitative and others are qualitative. At this point, there is no integrated approach to measuring the performance of all components of maintenance.

The case study mining company lacks an integrated KPI framework to monitor its maintenance activities. It tried a balanced scorecard, but this was not compatible with the organizational culture. The company has many technical KPIs, i.e. KPIs linked to machines, but very few soft KPIs, i.e. KPIs linked to the maintenance workflow. Whilst it measures the former, it does not measure the latter. Therefore, this study develops a KPI framework consisting of technical KPIs (linked to machines) and soft KPIs (linked to the maintenance workflow) to control and monitor the entire maintenance process and achieve the overall goals of the organization. Beyond the framework itself, a further contribution and novelty of this study is its implementation guidance: a time definition and a general formula are introduced for each specified KPI.

The paper is organized as follows. The introductory section defines the problem. Section 2 describes the proposed framework. Sections 3 to 5 present the developed KPIs: Section 3 covers asset operation management, Section 4 maintenance process management, and Section 5 maintenance resources management. Section 6 implements the KPIs in the case study mine. Sections 7 and 8 present the discussion and conclusion respectively.

2. KPI Framework

A framework is a basic structure underlying a system or concept. It has also been defined as a meta-level model or a higher-level abstraction through which a range of concepts, models, techniques, and methodologies can be clarified and/or integrated (Jayaratna, 1994).

The proposed KPI framework makes use of four hierarchical levels. The first level, the asset management system, is the highest level in the framework and encapsulates the second, third, and fourth levels. The second level consists of three broad categories: asset operation management, maintenance process management and maintenance resources management. Asset operation management is used to track the technical aspects of the maintenance process, while maintenance process management and maintenance resources management are used to track the soft aspects. The third level is a further breakdown of the second-level categories: asset operation management is broken down into five categories (overall asset, availability, reliability, maintainability and safety); maintenance process management is broken down into five categories (maintenance management, maintenance planning, maintenance preparation, maintenance execution and maintenance assessment); and maintenance resources management is broken down into three categories (spare parts management, outsourcing management and human resources management) (see Figure 1). In level four, the KPIs are grouped into common measures.
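The four-level hierarchy can be sketched as a nested mapping. The category names below are taken from the framework itself; the data-structure representation is only an illustrative assumption, and the fourth-level measure groups (see Tables 1-3) are omitted for brevity.

```python
# Sketch of the framework hierarchy (Levels 1-3); representation is illustrative.
kpi_framework = {
    "Asset Management KPIs": {                    # Level 1
        "Asset Operation Management": [           # Level 2 -> Level 3 categories
            "Overall Asset", "Availability", "Reliability",
            "Maintainability", "Safety",
        ],
        "Maintenance Process Management": [
            "Maintenance Management", "Maintenance Planning",
            "Maintenance Preparation", "Maintenance Execution",
            "Maintenance Assessment",
        ],
        "Maintenance Resources Management": [
            "Spare Parts Management", "Outsourcing Management",
            "Human Resources Management",
        ],
    }
}

# 5 + 5 + 3 third-level categories in total
n = sum(len(v) for v in kpi_framework["Asset Management KPIs"].values())
print(n)  # 13
```

A structure like this makes it straightforward to attach the Level 4 measure groups, and the individual KPIs, as further nesting under each third-level category.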


In all, there are 134 KPIs in this framework. Asset operation management has 23 technical KPIs, listed in Table 1. Maintenance process management has 85 soft KPIs, listed in Table 2, and maintenance resources management has 26 soft KPIs, listed in Table 3.

Figure 1 KPI framework


3. Asset Operation Management

This section describes asset operation management, including KPI names, context and purposes. The KPIs are summed up and explained in Table 1.

Table 1: Asset Operation Management KPIs

Overall Asset / Shutdown Statistics
- Number of Shutdowns. Context: the total number of times the asset is out of service. Purpose: helps to understand the number of times the equipment, production line or process unit is out of service during the query period.
- Total Shutdown Time. Context: the total number of hours the assets are out of service. Purpose: helps to estimate the total loss of the equipment in terms of time during the query period.
- Average Shutdown Time. Context: the ratio of total shutdown time to number of shutdowns. Purpose: helps to understand the mean time of each shutdown, especially for the failed asset.

Overall Asset / Failure Related
- Downtime Ratio/Frequency. Context: the ratio of the number of times the equipment, production line or process unit is not producing because it is broken, under repair or idle to the total production time. Purpose: helps to understand the proportion of the failed asset in the total number of stops.
- Downtime Ratio/Time. Context: the ratio of the number of hours the equipment, production line or process unit is not producing because it is broken down, under repair or idle to the total number of work hours. Purpose: helps to understand the proportion of the failed asset in the total number of stops in terms of time.
- Failure Mode Reporting Rate. Context: the amount of corrective maintenance work whose failure mode is known. Purpose: helps to understand the proportion of corrective maintenance work orders with failure mode information.
- Reason for Failure Registration Rate. Context: the amount of corrective maintenance work with descriptions. Purpose: helps to understand the proportion of work orders entered during corrective maintenance work with information on causes of failure.

Availability / Operational Availability
- Availability. Context: the asset's ability to perform as and when required, under given conditions, assuming that the necessary external resources are provided. Purpose: helps to understand the availability of a product line or equipment.

Reliability / Mean Reliability Measures
- Mean Time Between Failure. Context: the average time between failures of repairable assets and components. Purpose: helps to understand the average time between unexpected breakdowns of an asset or production stoppages of an asset.
- Mean Time To Failure. Context: the average time to failure for non-repairable assets. Purpose: helps to understand the average time that a system is not failed, or is available.
- Mean Up Time. Context: the mean time from the system (subsystem) repair to the next system (subsystem) failure. Purpose: helps to understand the average time during which a system is in operation.

Reliability / Failure Related
- Emergency Failure Ratio. Context: the proportion of emergency failures in the work orders. Purpose: helps to understand the proportion of emergency failures out of all failures that have occurred.
- Emergency Failed Equipment Ratio. Context: the proportion of failed assets in emergency failure work orders. Purpose: helps to understand the proportion of failed assets in emergency failures.
- Corrective Maintenance Failure Rate. Context: the total number of maintenance actions on failed assets. Purpose: helps to understand the frequency of corrective maintenance activities.
- Repeat Failure. Context: the total number of maintenance actions on failures that occur more than one time. Purpose: helps to understand the proportion of failure modes that occur more than once in the total failures.

Maintainability / Mean Maintainability Measures
- Mean Downtime. Context: the mean time that an equipment, production line or process unit is non-operational for reasons other than repair, such as maintenance; it includes the time from failure to restoration of an asset or component. Purpose: helps to understand the average total downtime required to restore an asset to its full operational capabilities.
- Mean Time Between Maintenance. Context: the average length of operating time between one maintenance action and another maintenance action for a component. Purpose: helps to understand the average time that a maintenance action requires to fix the failed component or the lowest replaceable unit.
- Mean Time To Maintain. Context: the average time to maintenance. Purpose: helps to understand the average maintenance duration of equipment.
- Mean Time To Repair. Context: the average time that a repairable or non-repairable asset and/or component takes to recover from failure. Purpose: helps to understand the average time required to troubleshoot and repair failed equipment and return it to normal operating conditions.
- False Alarm Rate. Context: the proportion of unwanted alarms given in error for an equipment, production line or process unit. Purpose: helps to understand the number of false positives that occurred for an asset.

Safety / Occupational Safety
- Number of Safety Incidents. Context: the total number of safety incidents that have occurred during maintenance activities. Purpose: helps to understand the number of safety incidents.
- Injury Ratio. Context: the ratio of maintenance personnel injuries to total work hours. Purpose: helps to understand the number of injuries that maintenance personnel sustained on the job.
- Injury Ratio per Failure. Context: the ratio of failures causing injuries to the total number of failures. Purpose: helps to understand the number of injuries that maintenance personnel sustained compared to the total number of failures.
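Several of the availability, reliability and maintainability measures in Table 1 follow conventional definitions. The sketch below applies the usual formulas to made-up operating and repair intervals; it is an illustration only, and the company's operational availability may additionally count non-repair downtime (e.g. planned maintenance) in the denominator.

```python
# Conventional formulas behind several Table 1 KPIs (data is illustrative).
uptimes = [120.0, 95.0, 143.0]   # hours of operation between failures
repairs = [4.0, 6.5, 3.5]        # hours to restore after each failure

mtbf = sum(uptimes) / len(uptimes)    # Mean Time Between Failure
mttr = sum(repairs) / len(repairs)    # Mean Time To Repair
availability = mtbf / (mtbf + mttr)   # steady-state availability estimate

# Downtime Ratio/Time: hours not producing over total hours in the period
downtime_ratio_time = sum(repairs) / (sum(uptimes) + sum(repairs))

print(round(mtbf, 2), round(mttr, 2))                       # 119.33 4.67
print(round(availability, 3), round(downtime_ratio_time, 3))  # 0.962 0.038
```

In an eMaintenance setting, the same arithmetic would be driven by the registered work-order timestamps rather than hand-entered interval lists.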

4. M

aint

enan

ce P

roce

ss M

anag

emen

t

This

sec

tion

desc

ribe

s m

aint

enan

ce p

roce

ss m

anag

emen

t w

hich

dea

ls w

ith K

PIs

that

mea

sure

effi

cien

cy a

nd e

ffect

iven

ess

of t

he c

onsi

sten

t ap

plic

atio

n of

mai

nten

ance

and

mai

nten

ance

supp

ort;

incl

udin

g na

mes

, con

text

and

pur

pose

s. Th

e KP

Is fo

r thi

s lev

el a

re li

sted

and

exp

lain

ed in

Tab

le

2.

Tabl

e 2:

Mai

nten

ance

Pro

cess

Man

agem

ent K

PIs

Leve

l N

ame

Cont

ext

Purp

ose

3 4

Maintenance Management

Maintenance Strategy

Criti

cal E

quip

men

t Rat

io

This

is th

e am

ount

of e

quip

men

t im

port

ant t

o pe

rfor

man

ce,

capa

city

, and

thro

ughp

ut a

nd v

ital t

o op

erat

ion

to a

ll eq

uipm

ent i

n th

e co

mpa

ny’s

plan

t.

Hel

ps to

und

erst

and

the

prop

ortio

n of

cri

tical

equ

ipm

ent i

n th

e pl

ant o

r pro

cess

ing

unit.

Prev

entiv

e M

aint

enan

ce

Rate

This

is th

e pr

opor

tion

of m

aint

enan

ce w

ork

carr

ied

out a

t pr

edet

erm

ined

inte

rval

s or a

ccor

ding

to p

resc

ribe

d cr

iteri

a,

inte

nded

to re

duce

the

prob

abili

ty o

f fai

lure

or d

egra

datio

n of

as

set.

Hel

ps to

und

erst

and

the

prop

ortio

n of

equ

ipm

ent w

ith a

pr

oact

ive

mai

nten

ance

stra

tegy

in th

e pl

ant o

r pro

cess

ing

unit.

Pred

ictiv

e M

aint

enan

ce

(PdM

) Rat

e

This

is th

e pr

opor

tion

of co

nditi

on-b

ased

mai

nten

ance

carr

ied

out f

ollo

win

g a

fore

cast

der

ived

from

repe

ated

ana

lysi

s or

know

n ch

arac

teri

stic

s and

eva

luat

ion

of th

e si

gnifi

cant

pa

ram

eter

s of d

egra

ding

ass

et.

Hel

ps to

und

erst

and

the

prop

ortio

n of

equ

ipm

ent w

ith a

pr

edic

tive

mai

nten

ance

pol

icy

in th

e pl

ant o

r pro

cess

ing

unit.

Prev

entiv

e M

aint

enan

ce

Rate

(Cri

tical

Eq

uipm

ent)

This

is th

e pr

opor

tion

of m

aint

enan

ce ca

rrie

d ou

t at

pred

eter

min

ed in

terv

als o

r acc

ordi

ng to

pre

scri

bed

crite

ria,

in

tend

ed to

redu

ce th

e pr

obab

ility

of f

ailu

re o

r deg

rada

tion

of

the

asse

t.

Hel

ps to

und

erst

and

the

prop

ortio

n of

cri

tical

equ

ipm

ent

with

a p

roac

tive

mai

nten

ance

stra

tegy

in th

e pl

ant o

r pr

oces

sing

uni

t.

Pred

ictiv

e M

aint

enan

ce

Rate

(Cri

tical

Eq

uipm

ent)

This

is th

e pr

opor

tion

of co

nditi

on-b

ased

mai

nten

ance

carr

ied

out f

ollo

win

g a

fore

cast

der

ived

from

repe

ated

ana

lysi

s or

know

n ch

arac

teri

stic

s and

eva

luat

ion

of th

e si

gnifi

cant

Hel

ps to

und

erst

and

the

prop

ortio

n of

cri

tical

equ

ipm

ent

with

a p

redi

ctiv

e m

aint

enan

ce p

olic

y in

the

plan

t or

proc

essi

ng u

nit.

Page 143: KPI framework for maintenanceltu.diva-portal.org/smash/get/diva2:1315950/FULLTEXT02.pdfKPI framework for maintenance management through eMaintenance Development, implementation, assessment,

8

pa

ram

eter

s of t

he d

egra

ding

ass

et.

Run

to F

ailu

re (R

TF)

Ratio

for C

ritic

al

Equi

pmen

t

This

is th

e ra

tio o

f fai

lure

man

agem

ent p

olic

y fo

r cri

tical

eq

uipm

ent w

ithou

t any

att

empt

to a

ntic

ipat

e or

pre

vent

failu

re

to a

ll po

licy

for c

ritic

al e

quip

men

t.

Hel

ps to

und

erst

and

the

prop

ortio

n of

cri

tical

equ

ipm

ent

that

doe

s not

hav

e an

y pr

ecau

tiona

ry o

r pre

dict

ive

mai

nten

ance

pol

icy

in th

e pl

ant o

r pro

cess

ing

unit.

Pl

anne

d M

aint

enan

ce v

s Un

plan

ned

Mai

nten

ance

Th

is is

the

ratio

of p

lann

ed m

aint

enan

ce to

unp

lann

ed

mai

nten

ance

. H

elps

to u

nder

stan

d th

e re

latio

nshi

p be

twee

n pl

anne

d m

aint

enan

ce a

nd u

npla

nned

mai

nten

ance

.

Maintenance Planning

Quantity Related

Num

ber o

f Pla

nned

W

ork

Orde

rs C

reat

ed

This

is th

e to

tal n

umbe

r of w

ork

orde

rs th

at h

ave

been

sc

hedu

led.

H

elps

to u

nder

stan

d th

e pl

anne

d am

ount

of s

ched

uled

m

aint

enan

ce/m

aint

enan

ce w

ork.

Time Related

Aver

age

Plan

ned

Exec

utio

n Ti

me

This

is th

e m

ean

exec

utio

n tim

e of

all

plan

ned

wor

k or

ders

. H

elps

to u

nder

stan

d th

e av

erag

e pl

anne

d ex

ecut

ion

time

of

plan

ned

mai

nten

ance

/mai

nten

ance

wor

k.

Resource Related

Total Number of Planned Internal Labour Hours: This is the sum of labour hours attributed to planned maintenance work done by internal maintenance personnel. Helps to understand the planned man-hours required for planned internal maintenance.

Average Planned Internal Labour Hours: This is the mean hours for planned internal labour. Helps to understand the mean man-hours required for planned internal maintenance.

Total Number of Planned External Labour Hours: This is the sum of labour hours attributed to planned maintenance work by external maintenance personnel. Helps to understand the planned labour hours required for maintenance work by external maintenance personnel.

Average Planned External Labour Hours: This is the mean labour hours for planned external labour. Helps to understand the average time required for planned maintenance by external maintenance personnel.

Planned Number of Materials Used: This is the sum of all materials scheduled to be used for maintenance and/or maintenance work. Helps to understand the number of spare parts used in the planned maintenance.

Average Planned Number of Materials Used: This is the mean number of materials to be used for scheduled maintenance and/or maintenance work. Helps to understand the average number of spare parts used for planned maintenance.

Cost Related

Total Cost of Planned Human Resources: This is the total cost of manpower used for scheduled maintenance and/or maintenance work. Helps to understand the manpower cost of planned maintenance.

Average Planned External Human Resource Costs: This is the mean external manpower cost for scheduled maintenance and/or maintenance work. Helps to understand the average planned manpower cost of external labour for planned maintenance.

Total Cost of Planned Materials: This is the total cost of materials needed for scheduled maintenance and/or maintenance work. Helps to understand the cost of materials for planned maintenance.


Planned Average Material Cost: This is the mean cost of materials for scheduled maintenance and/or maintenance work. Helps to understand the mean planned cost of materials for each scheduled repair or maintenance activity.

Labour Cost Ratio: This is the ratio of manpower cost to the total cost of planned maintenance. Helps to understand the ratio of manpower cost to total planned cost in planned maintenance.

Planned Material Cost Ratio: This is the ratio of planned material cost to the planned total cost of maintenance. Helps to understand the proportion of the total costs of planned material allocated to planned maintenance.
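The two cost-ratio KPIs above reduce to simple quotients of a cost component over the planned maintenance total. A minimal sketch, assuming the cost figures are already aggregated as plain numbers (the function and variable names are illustrative, not from the thesis):

```python
def cost_ratio(component_cost: float, total_planned_cost: float) -> float:
    """Share of the total planned maintenance cost taken by one cost component."""
    if total_planned_cost <= 0:
        raise ValueError("total planned cost must be positive")
    return component_cost / total_planned_cost

# Illustrative figures only
labour_cost_ratio = cost_ratio(60_000.0, 100_000.0)            # manpower share: 0.6
planned_material_cost_ratio = cost_ratio(25_000.0, 100_000.0)  # material share: 0.25
```

The same helper serves both KPIs; only the numerator changes.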

Maintenance Preparation

Work Order Creation

Planned Start/End Time Registration Rate: This is the ratio of work orders whose planned start/end time is known at the time of creation to the total work orders created. Helps to understand the number of work orders whose planned start and end times are provided during their creation.

Planned Spare Parts Registration Rate: This is the ratio of work orders whose spare parts requirements are known at the time of work order creation to the total work orders created. Helps to know the planned spare parts registration rate of work orders.

Planned Man-Hour Registration Rate: This is the number of work orders with the needed labour hours recorded during work order creation out of all the work orders created. Helps to understand the proportion of work orders with the required labour registered during work order creation.

Planned Downtime Registration Rate: This is the ratio of hours that the plant or asset will be down, known ahead of time, to the total work hours. Helps to understand the percentage of work orders that were entered with planned downtime during work order creation.

Standard Operating Plan Registration Rate: This is the ratio of the number of work orders with an SOP to the total work orders. Helps to understand the proportion of work orders with standard operating procedure plans.

Planned Work Type Registration Rate: This is the proportion of work orders with required skills registered during their creation. Helps to understand the proportion of work orders with known skill requirements in the work category.

Job Priority Registration Rate: This is the number of work orders with job priorities assigned during work order creation out of all the work orders. Helps to understand the proportion of work orders assigned work priorities during work order creation.
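All of the registration-rate KPIs in this group share one shape: the count of work orders where a given field was filled in at creation, divided by all work orders created. A minimal sketch, assuming work orders are available as dictionaries (field names are illustrative, not from the thesis):

```python
def registration_rate(work_orders: list, field: str) -> float:
    """Fraction of created work orders whose `field` was registered at creation."""
    if not work_orders:
        return 0.0
    registered = sum(1 for wo in work_orders if wo.get(field) is not None)
    return registered / len(work_orders)

orders = [
    {"planned_start": "2019-03-01", "job_priority": 2},
    {"planned_start": None,         "job_priority": 1},
    {"planned_start": "2019-03-04", "job_priority": None},
]
start_end_rate = registration_rate(orders, "planned_start")  # 2 of 3 registered
priority_rate = registration_rate(orders, "job_priority")    # 2 of 3 registered
```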

Work Order Feedback

Actual Spare Parts Use Registration Rate: This is the amount of spare parts used for maintenance work. Helps to understand the actual use of spare parts for maintenance jobs.

Actual Man-Hour Registration Rate: This is the proportion of labour used for maintenance work. Helps to understand the amount of labour used for maintenance tasks.

Actual Downtime Registration Rate: This is the number of work orders causing actual downtime. Helps to understand the proportion of work orders that lead to downtime.

Work Order Registration Back-Log: This is the difference between the work order registration date and the actual registration date of the work order. Helps to understand the time interval between the completion of the work order and the completion of its registration in the system.


Work Order Approval

Total Number of Work Orders: This is the sum of proposed work orders that have been registered. Helps to understand the total number of work orders reported.

Total Number of Approved Work Orders: This is the sum of proposed work orders that have been approved. Helps to understand the total number of work orders approved in a single pass.

Total Number of Unapproved Work Orders: This is the sum of proposed work orders that have not been approved. Helps to understand the total number of work orders not approved in a single pass.

Work Order Approval Ratio: This is the ratio of proposed work orders to planned work orders. Helps to understand the proportion of reported work orders against the total planned work orders.

One-Time Approved Work Order Ratio: This is the ratio of work order proposals that were approved once to actual work orders. Helps to understand the rate of one-time approvals for submitted work orders.

Average Time Lag for Reporting and Approving Work Orders: This is the time difference between the approval of work orders and their proposal. Helps to understand the average time between the submission of a work order and the approval of its issuance.
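The approval KPIs combine counts (proposed vs. approved work orders) with a time lag between proposal and approval. A minimal sketch, under the assumption that proposal and approval dates are recorded per work order (the names are mine, not from the thesis):

```python
from datetime import date

def approval_ratio(n_approved: int, n_proposed: int) -> float:
    """Share of proposed work orders that were approved."""
    return n_approved / n_proposed if n_proposed else 0.0

def average_approval_lag_days(proposed_approved: list) -> float:
    """Mean number of days between a work order's proposal and its approval."""
    if not proposed_approved:
        return 0.0
    lags = [(approved - proposed).days for proposed, approved in proposed_approved]
    return sum(lags) / len(lags)

ratio = approval_ratio(8, 10)  # 0.8
lag = average_approval_lag_days([
    (date(2019, 3, 1), date(2019, 3, 2)),
    (date(2019, 3, 1), date(2019, 3, 4)),
])  # (1 + 3) / 2 = 2.0 days
```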

Maintenance Execution

Quantity Related

Number of Planned Work Orders Completed: This is the total number of preventive maintenance work orders that have been resolved. Helps to understand the planned maintenance work done.

Number of Unplanned Work Orders Completed: This is the total number of unplanned corrective work orders that have been resolved. Helps to understand the amount of unplanned maintenance work completed.

Number of Work Orders Completed Per Shift: This is the total number of work orders completed per shift. Helps to understand the number of work orders completed in a shift.

Work Order Resolution Rate: This is the ratio of the number of work orders performed as scheduled to the total number of scheduled work orders. Helps to understand the ratio of the number of work orders completed as scheduled.

Time Related

Average Work Order Time: This is the mean execution time for completed work orders. Helps to understand the average execution time of completed maintenance work.

Average Waiting Time for Personnel: This is the mean waiting time for the maintenance personnel needed to resolve a maintenance request. Helps to understand the average logistical waiting time for maintenance staff for completed maintenance work.

Average Waiting Time for Spare Parts: This is the mean waiting time for spare parts used for completed maintenance work. Helps to understand the waiting time for spare parts for maintenance work.

Personnel Waiting Time Ratio: This is the proportion of time it takes to get maintenance personnel to resolve a maintenance task. Helps to understand the staff waiting time for completed maintenance work.

Spare Parts Waiting Time Ratio: This is the proportional waiting time for spare parts used for maintenance work. Helps to understand the spare parts waiting time for completed maintenance work.

Average Maintenance Outage Time: This is the period of time during which the asset fails to provide or perform its primary function during maintenance work. Helps to understand the average execution time of the maintenance work.

Average Waiting Time of Personnel during Shutdown: This is the mean waiting time for maintenance personnel during shutdown. Helps to understand the average logistic waiting time for maintenance personnel for maintenance work during shutdown.

Average Waiting Time for Spare Parts during Shutdown: This is the mean time spent waiting for spare parts during shutdown. Helps to understand the average waiting time for spare parts used for completing maintenance work at shutdown.

Average Waiting Time of Personnel during Shutdown Ratio: This is the ratio of the mean waiting time for maintenance personnel to the mean maintenance outage time during shutdown. Helps to understand the ratio of the waiting time for personnel who have completed the maintenance work at shutdown to the total repair time.

Average Waiting Time for Spare Parts during Shutdown Ratio: This is the ratio of the mean waiting time for spare parts to the mean maintenance outage time. Helps to understand the proportion of spare parts waiting time for the repair/maintenance work to the total maintenance outage time during the query period.

Estimated Time vs. Actual Time: This is the difference between the actual maintenance time and the planned maintenance time. Helps to understand the time variances in work orders.
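The waiting-time ratios and the estimated-vs-actual comparison above are elementary computations: a waiting component divided by the outage it sits inside, and a signed difference between actual and planned duration. A minimal sketch (illustrative names, hours as plain floats; none of these identifiers come from the thesis):

```python
def waiting_time_ratio(waiting_hours: float, outage_hours: float) -> float:
    """Share of the maintenance outage spent waiting (for personnel or spare parts)."""
    return waiting_hours / outage_hours if outage_hours else 0.0

def time_variance_hours(actual_hours: float, planned_hours: float) -> float:
    """Estimated vs. actual time; positive means the job overran its plan."""
    return actual_hours - planned_hours

personnel_ratio = waiting_time_ratio(2.0, 8.0)  # 2 h waiting in an 8 h outage: 0.25
overrun = time_variance_hours(10.0, 8.0)        # 2.0 hours over plan
```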

Resource Related

Total Number of Internal Labour Hours: This is the sum of hours used by in-house maintenance personnel for maintenance work. Helps to understand the total number of hours used by in-house maintenance personnel for maintenance work performed.

Average Internal Labour Hours Used: This is the mean number of hours used by in-house maintenance personnel for maintenance work. Helps to understand the average labour hours used for each completed internal maintenance work.

Total Number of External Labour Hours: This is the sum of hours used by maintenance contractors for maintenance work. Helps to understand the labour hours used for external maintenance work.

Average External Labour Hours Used: This is the mean hours used by external maintenance personnel for maintenance work. Helps to understand the average labour hours for each completed external maintenance action.

Number of Materials Used: This is the total number of spare parts used for maintenance work. Helps to understand the actual number of spare parts used for maintenance work.

Average Materials Used: This is the mean number of spare parts used for maintenance work. Helps to understand the average number of spare parts used for each completed maintenance action.

Cost Related

Total Cost of External Human Resources Used: This is the total cost of using maintenance contractors for maintenance work. Helps to understand the cost of external labour for completed maintenance work.

Average External Human Resources Costs: This is the mean cost of external maintenance contractors for maintenance work. Helps to understand the average external labour costs for completed maintenance work.

Total Cost of Materials Used: This is the total cost of materials used for maintenance. Helps to understand the cost of materials used for completed maintenance work.

Average Cost of Materials Used: This is the mean cost of materials used for maintenance work. Helps to understand the average cost of materials for completed maintenance work.

External Labour Costs Ratio: This is the ratio of the total external maintenance contractor cost to the total maintenance cost. Helps to understand the ratio of manpower cost to the total cost of completed maintenance work.

Actual Materials Cost Ratio: This is the ratio of the costs for materials to the total maintenance cost. Helps to understand the cost of the materials used to complete the maintenance work.

Maintenance Cost per Asset: This is the total cost incurred for maintaining an asset. Helps to understand the cost incurred for maintenance work.

Maintenance Assessment

Quality

Number of Completed Work Orders Approved: This is the total number of completed work orders that have been approved after resolution. Helps to understand the total number of reported approvals for completed work orders.

Work Order Approval Ratio: This is the ratio of completed work orders that need to be approved after resolution to the total completed work orders. Helps to understand the proportion of work orders that need to be submitted for approval.

One-Time Pass Internal Completion Rate: This is the ratio of work orders that are resolved by internal maintenance personnel the very first time they occur to the total completed work orders. Helps to understand the number of one-time work orders by internal maintenance personnel that do not need to be reworked.

One-Time Pass External Completion Rate: This is the ratio of work orders that are resolved by external maintenance personnel the very first time they occur to the total completed work orders. Helps to understand the number of one-time work orders by external maintenance personnel that do not need to be reworked.

Planning Compliance: This is a measure of adherence to maintenance plans. Helps to understand the amount of planned maintenance work that is started on the same date as planned, compared to implementation and evaluation plans.

Effectiveness

Internal Work Completion Rate: This is the ratio of successful work completed by internal maintenance personnel to the total completed work. Helps to understand the proportion of work orders completed by internal maintenance personnel.

Outsourced Work Completion Rate: This is the ratio of successful work completed by external maintenance personnel to the total completed work. Helps to understand the proportion of work orders completed by external maintenance personnel.

Internal Work Delay Rate: This is the ratio of delayed maintenance work by internal maintenance personnel to all internal maintenance. Helps to understand completion delays in internal maintenance work.

Internal Work Average Delay Period: This is the mean period of delayed work by internal maintenance personnel. Helps to understand the average delay period of the work orders scheduled to be completed by internal maintenance personnel.

External Work Delay Rate: This is the ratio of delayed maintenance work by external maintenance personnel to all external work. Helps to understand the delayed completion of external work.

External Work Average Delay Period: This is the mean period of delayed work by external maintenance personnel. Helps to understand the average delay period of the work orders scheduled to be completed by external maintenance personnel.

Internal Average Execution Time Deviation Ratio: This is the difference in time between planned and actual maintenance jobs done by internal maintenance personnel. Helps to understand the difference between the average execution time of the internal maintenance work and the plan.

External Committee Execution Time Deviation Ratio: This is the difference in time between planned and actual maintenance jobs done by external maintenance personnel. Helps to understand the difference between the average execution time and the plan of the completed external maintenance work.


Internal Man-Hour Difference Ratio: This is the difference in time between planned and actual labour hours used by internal maintenance personnel. Helps to understand the deviations from the planned labour used for internal maintenance work.

Internal Average Man-Hour Difference Ratio: This is the mean difference in time between planned and actual labour hours used by internal maintenance personnel. Helps to understand the average deviation from the planned average for each completed internal maintenance action.

External Man-Hour Difference Ratio: This is the difference in time between planned and actual labour hours of external maintenance personnel. Helps to understand the deviation between actual and planned labour hours of external maintenance work.

External Average Man-Hour Difference Ratio: This is the mean difference in time between planned and actual labour hours of external maintenance personnel. Helps to understand the average deviation from the planned average for each external maintenance action.

Material Difference Ratio: This is the difference between planned spare parts and the actual spare parts used for maintenance work. Helps to understand the difference between the actual number of spare parts used for maintenance work and the number of spare parts assigned in the plan.

Average Material Difference Ratio: This is the mean difference between planned spare parts and the actual spare parts used for maintenance work. Helps to understand the difference between the average number of used spare parts and the planned average for each completed maintenance action.
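The man-hour and material difference KPIs all compare a planned figure with its actual counterpart, either per work order or averaged over completed actions. A minimal sketch, assuming (planned, actual) pairs are available per completed job (the names and figures are illustrative, not from the thesis):

```python
def difference(planned: float, actual: float) -> float:
    """Signed plan deviation; positive means more was used than planned."""
    return actual - planned

def average_difference(planned_actual: list) -> float:
    """Mean deviation from plan over all completed maintenance actions."""
    if not planned_actual:
        return 0.0
    return sum(actual - planned for planned, actual in planned_actual) / len(planned_actual)

# Planned vs. actual labour hours for two completed jobs (illustrative)
jobs = [(4.0, 5.0), (3.0, 2.5)]
avg_man_hour_diff = average_difference(jobs)  # (1.0 + (-0.5)) / 2 = 0.25
```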

5. Maintenance Resources Management

This section describes Level III, maintenance resources management, which deals with KPIs that measure spare part management, internal maintenance personnel management and external maintenance personnel management; including KPI names, context and purposes. The KPIs are listed and explained in Table 3.

Table 3: Maintenance Resources Management KPIs

Level  Name  Context  Purpose
3, 4

Spare Parts Management

Inventory Management

Average Spare Part Quantity: This is the mean number of spare parts in stock. Helps to know the average number of spare parts between opening and closing stocks.

Spare Part Capital Utilization: This is the mean cost of spare parts utilization. Helps to understand the average inventory value of the spare parts in use compared to the original purchase cost of the equipment.

Spare Parts Capital Replacement Rate: This is the average cost of spare part replacement. Helps to understand the average inventory cost of replacing spare parts.

Spare Part Consumption per Thousand SEK Output: This is the average cost of spare parts for maintenance work per every 1000 SEK of output. Helps to know the average cost of spare parts for maintenance for every thousand SEK spent on overall maintenance.

Spare Part Turnover Rate: This is the number of spare parts bought to replace failed parts in a quarter or a year. Helps to understand the spare parts turnover rate.

Spare Part Turnover Period: This is the ratio of the average inventory value to the cost of spare parts within the year. Helps to understand the spare parts turnover period.

Slow Moving Inventory Ratio: This is defined as the proportion of stock that has not shipped in a certain amount of time, e.g. 90 days or 180 days, and includes stock with a low turnover rate relative to the quantity on hand. Helps to understand periods of no consumption of some types of spare parts from the total spare parts inventory.
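Spare-part turnover can be computed from consumption and average inventory value; the turnover period above is defined as the ratio of the average inventory value to the yearly spare-part cost. A minimal sketch of both (the 365-day scaling to express the period in days is my addition, not from the text, and the figures are illustrative):

```python
def turnover_rate(parts_cost_consumed: float, avg_inventory_value: float) -> float:
    """How many times the spare-part stock turns over in the period."""
    return parts_cost_consumed / avg_inventory_value

def turnover_period_days(avg_inventory_value: float, parts_cost_consumed: float,
                         period_days: int = 365) -> float:
    """Days of consumption covered by the average stock level (day scaling assumed)."""
    return period_days * avg_inventory_value / parts_cost_consumed

rate = turnover_rate(200_000.0, 50_000.0)           # 4.0 turns per year
period = turnover_period_days(50_000.0, 200_000.0)  # 365 * 0.25 = 91.25 days
```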

Outsourcing Management

Contractor Statistics

Number of Outsourced Equipment Breakdowns: This is the total amount of outsourced equipment that is out of service. Helps to understand the total amount of equipment handled by outsourced maintenance personnel that is not working.

Number of Outsourced Maintenance Personnel: This is the total number of outsourced maintenance personnel. Helps to understand the total number of external maintenance personnel.

External Maintenance Cost Ratio: This is the ratio of the cost of outsourced maintenance personnel to the overall maintenance cost. Helps to understand the cost of external maintenance personnel.

Human Resources Management

Skills Management

Total Number of Maintenance Operators: This is the number of maintenance operators used for maintenance tasks. Helps to understand the total number of registered maintenance operators assigned to tasks.

Total Number of Maintenance Engineers: This is the number of maintenance engineers used for maintenance tasks. Helps to understand the total number of registered maintenance engineers assigned to tasks.

Number of Multi-Skilled Maintenance Personnel: This is the number of multi-skilled maintenance personnel used for maintenance tasks. Helps to understand the total number of registered skilled maintenance personnel assigned to tasks.

Maintenance Operator Ratio: This is the ratio of maintenance operators to the total maintenance personnel. Helps to understand the percentage of maintenance personnel who are operators.

Maintenance Engineer Ratio: This is the ratio of maintenance engineers to the total maintenance personnel. Helps to understand the percentage of maintenance personnel who are engineers.

Multi-Skilled Maintenance Personnel Ratio: This is the ratio of multi-skilled maintenance personnel to the total maintenance personnel. Helps to understand the percentage of maintenance personnel who are multi-skilled.

Work Load Management

Average Number of Work Orders Created per Person: This is the number of work orders created by each maintenance worker. Helps to understand the average number of work orders created by each maintenance worker.

Average Number of Work Orders Executed per Person: This is the number of work orders completed per maintenance worker. Helps to understand the average number of work orders completed by each maintenance worker.

Average Daily Workload per Person: This is the number of hours for each maintenance worker in a day. Helps to understand the daily average number of work hours for the implementation of work orders for each maintenance person.


Training Management

Average Annual Training Hours per Maintenance Operator: This is the yearly mean training hours per maintenance operator. Helps to understand the average annual training hours for maintenance operators.

Average Annual Training Hours per Maintenance Engineer: This is the yearly mean training hours per maintenance engineer. Helps to understand the average annual training hours for maintenance engineers.

Average Annual Training Hours per Multi-Skilled Maintenance Engineer: This is the yearly mean training hours per multi-skilled maintenance engineer. Helps to understand the average annual training hours for multi-skilled maintenance engineers.

Competence Development

Number of New Senior Maintenance Engineers: This is the number of maintenance operators who have become maintenance engineers. Helps to understand the total number of maintenance operators who have risen to the rank of maintenance engineer.

Ratio of New Senior Maintenance Engineers: This is the ratio of the number of maintenance operators who have become maintenance engineers to the total number of maintenance engineers. Helps to understand the proportion of maintenance operators who have risen to the rank of maintenance engineer.

Number of New Multi-Skilled Maintenance Engineers: This is the number of maintenance engineers who have become multi-skilled maintenance engineers. Helps to understand the total number of maintenance engineers who have risen to the rank of multi-skilled maintenance engineer.

Ratio of New Multi-Skilled Maintenance Engineers: This is the ratio of the number of maintenance engineers who have become multi-skilled maintenance engineers to the total number of multi-skilled maintenance engineers. Helps to understand the proportion of maintenance engineers who have risen to the rank of multi-skilled maintenance engineer.

6. Implementation of Proposed KPIs in a Mining Company

Besides the proposed KPI framework, a further contribution of this study, presented in this section, addresses its implementation by introducing a time definition and a general formula for each specified KPI. The results of this section provide guidance for implementing these KPIs through eMaintenance. The procedure, including the formulas used to calculate the KPI values, is shown in Table 4, Table 5 and Table 6.
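Each table entry pairs a timeline (the window of record dates a query covers) with a general formula evaluated over the records inside that window. A minimal sketch of that pattern, using hypothetical record fields ("registration_date", "stop_hours") rather than the case company's actual eMaintenance schema:

```python
from datetime import date

# Hypothetical stop records; the field names are illustrative only.
stops = [
    {"registration_date": date(2019, 1, 5),  "stop_hours": 4.0},
    {"registration_date": date(2019, 1, 20), "stop_hours": 2.5},
    {"registration_date": date(2019, 3, 2),  "stop_hours": 6.0},
]

def in_window(record, start, end):
    # Timeline rule: registration date ⊆ (query start date, query termination date)
    return start <= record["registration_date"] <= end

def number_of_shutdowns(records, start, end):
    # General formula: Sum(Number of Registered Stops) within the query window
    return sum(1 for r in records if in_window(r, start, end))

print(number_of_shutdowns(stops, date(2019, 1, 1), date(2019, 1, 31)))  # 2
```

Every KPI below follows this shape: first filter by the timeline, then aggregate with the general formula.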


Table 4: Implementation using eMaintenance for Asset Operation Management
(Columns: Level | Name | Timeline | General Formula)

Overall Asset: Shutdown Statistics

Number of Shutdowns
  Timeline: Stop date / registration date ⊆ (query start date, query termination date)
  Formula: Sum(Number of Registered Stops)

Total Shutdown Time
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Registered Stop Time)

Average Shutdown Time
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Registered Stop Time) / Count(Number of Registered Stops)

Failure Related

Downtime Ratio / Frequency
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Number of Registered Stops) Where is 'fault line' / Sum(Number of Registered Stops)

Downtime Ratio / Time
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Registered Stop Time) Where is 'fault line' / Sum(Registered Stop Time)

Failure Mode Reporting Rate
  Timeline: Work order registration/creation date ⊆ (query start date, query termination date); Item: work order type; system/section; work for supplier group; work supplier attribute
  Formula: Count(Number of Work Orders) Where is 'corrective' and failure mode is not NULL / Count(Number of Work Orders) Where is 'corrective'

Reason for Failure Registration Rate
  Timeline: Work order registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Work Orders) Where is 'corrective' and failure reason is not NULL / Count(Number of Work Orders) Where is 'corrective'

Availability: Operational Availability

Availability
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Total Operating Time / (Total Operating Time + Downtime Due to Maintenance)

Reliability: Mean Reliability Measures

Mean Time Between Failure
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Operating Time) Where item is repairable / Count(Number of Failures)

Mean Time To Failure
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Operating Time) Where item is not repairable / Count(Number of Failures)

Mean Up Time
  Timeline: Registration date / start record date ⊆ (query start date, query termination date)
  Formula: Sum(Uptime in Hours) / Number of Uptime Events
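As an illustration, the Availability and Mean Time Between Failure measures of Table 4 can be computed directly from aggregated time values. This is a sketch with invented numbers, not data from the case company:

```python
def availability(total_operating_time, downtime_due_to_maintenance):
    # Availability = Total Operating Time /
    #                (Total Operating Time + Downtime Due to Maintenance)
    return total_operating_time / (total_operating_time + downtime_due_to_maintenance)

def mean_time_between_failure(operating_times, number_of_failures):
    # MTBF (repairable items) = Sum(Operating Time) / Count(Number of Failures)
    return sum(operating_times) / number_of_failures

print(availability(950.0, 50.0))                           # 0.95
print(mean_time_between_failure([120.0, 80.0, 100.0], 3))  # 100.0
```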


Failure Related
(Timeline for the rows below: work order registration/creation date ⊆ (query start date, query end date))

Emergency Failure Ratio
  Formula: Count(Number of Work Orders) Where is 'emergency repair work order' / Count(Number of Work Orders)

Emergency Failed Equipment Ratio
  Formula: Count(Number of Equipment) Where is 'emergency repair work order' / Count(Number of Work Orders)

Corrective Maintenance Failure Rate
  Formula: Count(Number of Work Orders Where is 'corrective') / Count(Number of Work Orders)

Repeat Failure
  Formula: Count(Number of Work Orders) Where failure mode > 1 / Count(Number of Work Orders) Where is 'corrective'

Maintainability: Mean Maintainability Measures

Mean Downtime
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum(Downtime in Hours) / Number of Downtime Events

Mean Time Between Maintenance
  Timeline: Registration date / start record date ⊆ (query start date, query termination date)
  Formula: Sum(Uptime) / Number of Maintenance Actions

Mean Time To Maintain
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum("n" Individual Unit Times to Maintenance) / Count("n" Units)

Mean Time To Repair
  Timeline: Registration date / stop record date ⊆ (query start date, query termination date)
  Formula: Sum("n" Individual Unit Times to Restore) / Count("n" Units)

False Alarm Rate
  Timeline: Registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of False Alarms) / Total Number of Alarms

Safety: Occupational Safety
(Timeline for the rows below: registration date ⊆ (query start date, query termination date))

Number of Safety Incidents
  Formula: Count(Number of Safety Incidents)

Injury Rate
  Formula: Count(Number of Safety Incidents) Where injury between 'date 1' and 'date 2' / Sum(Working Hours)

Injury Rate per Failure
  Formula: Count(Number of Failures Causing Injury) / Total Number of Failures * 100
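The maintainability and safety measures above are simple aggregates. A minimal sketch with invented values (not data from the case company) shows how they reduce to sums, counts and ratios:

```python
def mean_downtime(downtime_hours):
    # Mean Downtime = Sum(Downtime in Hours) / Number of Downtime Events
    return sum(downtime_hours) / len(downtime_hours)

def false_alarm_rate(false_alarms, total_alarms):
    # False Alarm Rate = Count(Number of False Alarms) / Total Number of Alarms
    return false_alarms / total_alarms

def injury_rate_per_failure(failures_causing_injury, total_failures):
    # Injury Rate per Failure = failures causing injury / total failures * 100
    return failures_causing_injury / total_failures * 100

print(mean_downtime([2.0, 4.0, 6.0]))  # 4.0
print(false_alarm_rate(5, 50))         # 0.1
print(injury_rate_per_failure(2, 40))  # 5.0
```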


Table 5: Implementation using eMaintenance for Maintenance Process Management
(Columns: Level | Name | Timeline | General Formula)

Maintenance Management: Maintenance Strategy

Critical Equipment Ratio
  Timeline: Registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'critical' / Count(Number of Equipment)

Preventive Maintenance Rate
  Timeline: Work order registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'PM type work order' / Count(Number of Equipment)

Predictive Maintenance Rate (PdM)
  Timeline: Registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'status monitoring point' / Count(Number of Equipment)

Preventive Maintenance Rate (Critical Equipment)
  Timeline: Work order registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'PM type work order' & 'critical' / Count(Number of Equipment) Where is 'critical'

Predictive Maintenance Rate (Critical Equipment)
  Timeline: Registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'critical' and 'status monitoring point' / Count(Number of Equipment) Where is 'critical'

Run to Failure (RTF) Ratio for Critical Equipment
  Timeline: Registration date ⊆ (query start date, query termination date)
  Formula: Count(Number of Equipment) Where is 'critical' with no 'status monitoring point' and no PM work order / Count(Number of Equipment) Where is 'critical'

Planned Maintenance vs Unplanned Maintenance
  Timeline: Work order registration date ⊆ (query start date, query termination date)
  Formula: Sum(Work Orders) Where WO_Type = 'plan' / Sum(Work Orders) Where WO_Type = 'unplanned'

Maintenance Planning
(Timeline for the rows below: work order creation date ⊆ (query start date, query end date))

Quantity Related

Number of Planned Work Orders Created
  Formula: Sum(Work Orders) Where WO_Type = 'plan'

Time Related

Average Planned Execution Time
  Formula: Sum(Execution Time) Where WO_Type = 'plan' / Sum(Work Orders) Where WO_Type = 'plan'

Resource Related

Total Number of Planned Internal Labour Hours
  Formula: Sum(Labour Hours) Where Work category = 'Internal' & WO_Type = 'plan'

Average Planned Internal Labour Hours
  Formula: Sum(Labour Hours) Where Work category = 'Internal' & WO_Type = 'plan' / Count(Work Orders) Where WO_Type = 'plan'
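The maintenance-strategy rates above are ratios over an equipment register. The sketch below uses a hypothetical register in which boolean flags stand in for the table's 'critical', 'PM type work order' and 'status monitoring point' attributes; the data is invented for illustration:

```python
# Hypothetical equipment register; flags stand in for eMaintenance attributes.
equipment = [
    {"id": "crusher-1", "critical": True,  "pm_work_order": True,  "monitored": True},
    {"id": "pump-7",    "critical": True,  "pm_work_order": False, "monitored": False},
    {"id": "fan-3",     "critical": False, "pm_work_order": True,  "monitored": False},
    {"id": "belt-2",    "critical": False, "pm_work_order": False, "monitored": False},
]

def critical_equipment_ratio(items):
    # Count(Equipment) Where is 'critical' / Count(Number of Equipment)
    return sum(e["critical"] for e in items) / len(items)

def run_to_failure_ratio(items):
    # Critical equipment with neither a monitoring point nor a PM work order,
    # divided by all critical equipment.
    critical = [e for e in items if e["critical"]]
    rtf = [e for e in critical if not e["monitored"] and not e["pm_work_order"]]
    return len(rtf) / len(critical)

print(critical_equipment_ratio(equipment))  # 0.5
print(run_to_failure_ratio(equipment))      # 0.5
```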

Total Number of Planned External Labour Hours
  Timeline: Work order creation date ⊆ (query start date, query end date)
  Formula: Sum(Labour Hours) Where Work category = 'External' & WO_Type = 'plan'

Average Planned External Labour Hours
  Timeline: Work order creation date ⊆ (query start date, query end date)
  Formula: Sum(Labour Hours) Where Work category = 'External' & WO_Type = 'plan' / Count(Work Orders) Where WO_Type = 'plan'

Planned Number of Materials Used
  Timeline: Work order creation date ⊆ (query start date, query end date)
  Formula: Sum(Spare Number) Where WO_Type = 'plan'

Average Planned Number of Materials Used
  Timeline: Work order creation date ⊆ (query start date, query end date)
  Formula: Sum(Spare Number) Where WO_Type = 'plan' / Count(Work Orders) Where WO_Type = 'plan'

Cost Related
(Timeline for the rows below: work order creation date ⊆ (query start date, query end date))

Total Cost of Planned Human Resources
  Formula: Sum(Labour Rate) * (Planned Labour Number) Where WO_Type = 'plan' & Work category = 'external'

Average Planned External Human Resource Costs
  Formula: Sum(Labour Rate) * (Plan Labour Time) Where WO_Type = 'plan' & Work category = 'external' / Count(Number of Work Orders) Where WO_Type = 'plan' & Work category = 'external'

Total Cost of Planned Materials
  Formula: Sum(Spare Part Quantity) * (Spare Part Price) Where WO_Type = 'plan'

Planned Average Material Cost
  Formula: Sum(Number of Spare Parts) * (Spare Price) Where WO_Type = 'plan' / Sum(Number of Work Orders) Where WO_Type = 'plan'

Labour Cost Ratio
  Formula: Total Cost of Planned Human Resources / (Total Cost of Planned Human Resources + Total Cost of Planned Materials)

Planned Material Cost Ratio
  Formula: Total Cost of Planned Materials / (Total Cost of Planned Human Resources + Total Cost of Planned Materials)

Maintenance Preparation: Work Order Creation
(Timeline for the rows below: work order registration/creation date ⊆ (query start date, query end date))

Planned Start/End Time Registration Rate
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'planned start/end time' is logged / Count(Number of Completed Work Orders + Current Work Orders)

Planned Spare Parts Registration Rate
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'spare parts plan' is logged / Count(Number of Completed Work Orders + Current Work Orders)

Planned Man-Hour Registration Rate
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'planned labour hours' is logged / Count(Number of Completed Work Orders + Current Work Orders)

Planned Downtime Registration Rate
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'planned downtime' is logged / Count(Number of Completed Work Orders + Current Work Orders)

Standard Operating Plan Registration Rate
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'standard work plan' is given / Count(Number of Completed Work Orders + Current Work Orders)
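All of the registration-rate KPIs above share one shape: the fraction of work orders in the window for which a given field was actually filled in. A minimal sketch of that field-completeness check, with a hypothetical "planned_start" field standing in for the 'planned start/end time' attribute:

```python
def registration_rate(work_orders, field):
    # Count(Completed + Current Work Orders) Where <field> is logged /
    # Count(Completed + Current Work Orders)
    logged = sum(1 for wo in work_orders if wo.get(field) is not None)
    return logged / len(work_orders)

# Hypothetical work orders; field names are illustrative only.
orders = [
    {"id": 1, "planned_start": "08:00"},
    {"id": 2, "planned_start": None},
    {"id": 3, "planned_start": "13:30"},
    {"id": 4},
]
print(registration_rate(orders, "planned_start"))  # 0.5
```

The same helper covers every row in this group by swapping the field name.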


Planned Work Type Registration Rate
  Timeline: Work order registration/creation date ⊆ (query start date, query end date)
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'work category plan' is logged / Count(Number of Completed Work Orders + Current Work Orders)

Job Priority Registration Rate
  Timeline: Work order registration/creation date ⊆ (query start date, query end date)
  Formula: Count(Number of Completed Work Orders + Current Work Orders) Where 'job priority' is given / Count(Number of Completed Work Orders + Current Work Orders)

Maintenance Preparation: Work Order Feedback
(Timeline for the rows below: work order registration ⊆ (query start date, query end date))

Actual Spare Parts Use Registration Rate
  Formula: Count(Number of Completed Work Orders) Where 'real use of spare parts' is logged / Count(Number of Completed Orders) Where 'spare parts plan' is logged

Actual Man-Hour Registration Rate
  Formula: Count(Number of Completed Orders) Where 'when using manual' is logged / Count(Number of Completed Orders)

Actual Downtime Registration Rate
  Formula: Count(Number of Completed Orders) Where 'actual downtime' is logged / Count(Number of Completed Orders) Where 'planned downtime' is logged

Work Order Registration Back-Log
  Formula: (Work Order Registration Completion Date and Time) - (Actual Date and Time)

Maintenance Preparation: Work Order Approval
(Timeline for the rows below: work order creation date ⊆ (query start date, query end date))

Total Number of Work Orders
  Formula: Sum(Work Orders) Where WO_Type = 'need to report'

Total Number of Approved Work Orders
  Formula: Sum(Work Orders) Where WO_Type = 'need to report' & no 'rejection record' logged

Total Number of Unapproved Work Orders
  Formula: Sum(Work Orders) Where WO_Type = 'need to report' & 'rejection record' logged

Work Order Approval Ratio
  Formula: Work Orders to be Approved / Number of Planned Work Orders Created

One-time Approved Work Order Ratio
  Formula: Total Number of Approved Work Orders / Total Number of Work Orders

Average Time Lag for Reporting and Approving Work Orders
  Formula: Sum((Work Order Approval Date) - (Work Order Report Date)) Where WO_Type = 'report required' / Total Number of Approved Work Orders
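The approval KPIs above reduce to counting approved work orders and averaging date differences. A sketch with hypothetical work-order records, where None in "approval_date" marks a rejected work order (invented data, not the company's schema):

```python
from datetime import date

# Hypothetical work orders with report and approval dates.
orders = [
    {"report_date": date(2019, 5, 1), "approval_date": date(2019, 5, 3)},
    {"report_date": date(2019, 5, 2), "approval_date": date(2019, 5, 6)},
    {"report_date": date(2019, 5, 4), "approval_date": None},  # rejected
]

def one_time_approved_ratio(work_orders):
    # Total Number of Approved Work Orders / Total Number of Work Orders
    approved = [o for o in work_orders if o["approval_date"] is not None]
    return len(approved) / len(work_orders)

def average_approval_lag_days(work_orders):
    # Sum(Approval Date - Report Date) / Total Number of Approved Work Orders
    approved = [o for o in work_orders if o["approval_date"] is not None]
    total_days = sum((o["approval_date"] - o["report_date"]).days for o in approved)
    return total_days / len(approved)

print(one_time_approved_ratio(orders))    # 0.666...
print(average_approval_lag_days(orders))  # 3.0
```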


Maintenance Execution: Quantity Related
(Timeline for the rows below: work order completion date ⊆ (query start date, query end date))

Number of Planned Work Orders Completed
  Formula: Count(Registered Work Orders) Where WO_Type is 'plan'

Number of Unplanned Work Orders Completed
  Formula: Count(Registered Work Orders) Where WO_Type is 'unplanned'

Number of Work Orders Completed per Shift
  Formula: (Number of Work Orders Performed as Scheduled / Total Number of Scheduled Work Orders) * 100

Work Order Resolution Rate
  Formula: (Number of Work Orders Performed as Scheduled / Total Number of Scheduled Work Orders) * 100

Maintenance Execution: Time Related
(Timeline for the rows below: work order completion date ⊆ (query start date, query end date))

Average Work Order Time
  Formula: Sum(Completed Non-Stop Work Order Execution Time) / Count(Completed Non-Stop Work Orders)

Average Waiting Time for Personnel
  Formula: Sum(Completed Non-Stop Work Order With Staff in Place Time) / Count(Completed Non-Stop Work Orders)

Average Waiting Time for Spare Parts
  Formula: Sum(Spare in Place Time) Where Completed Non-Stop Work Order & Spare Parts Plan Work Order / Count(Completed Non-Stop Work Orders & Spare Parts Plan Work Orders)

Personnel Waiting Time Ratio
  Formula: Average Waiting Time for Personnel / Average Work Order Time

Spare Parts Waiting Time Ratio
  Formula: Average Waiting Time for Spare Parts / Average Work Order Time

Average Maintenance Outage Time
  Formula: Sum(Execution Time) Where exists completed start-stop of main work order / Count(Completed Start-Stop of Main Work Orders)

Average Waiting Time of Personnel during Shutdown
  Formula: Sum(Staff in Place Time) Where exists completed start of main work order / Count(Completed Start of Main Work Orders)

Average Waiting Time for Spare Parts during Shutdown
  Formula: Sum(Spare in Place Time) Where completed stop line & with spare parts plan work order / Count(Completed Stop Line & With Spare Parts Plan Work Orders)

Average Waiting Time of Personnel during Shutdown Ratio
  Formula: Average Waiting Time for Personnel during Shutdown / Average Maintenance Outage Time

Average Waiting Time for Spare Parts during Shutdown Ratio
  Formula: Average Waiting Time for Spare Parts during Shutdown / Average Maintenance Outage Time

Estimated Time vs. Actual Time
  Formula: Average Planned Execution Time - Average Work Order Time
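The time-related execution KPIs combine an averaged work-order time with waiting-time ratios. A sketch with invented hour values (not case-company data):

```python
def average_work_order_time(execution_times):
    # Sum(Completed Non-Stop Work Order Execution Time) /
    # Count(Completed Non-Stop Work Orders)
    return sum(execution_times) / len(execution_times)

def personnel_waiting_time_ratio(avg_waiting_time, avg_work_order_time):
    # Average Waiting Time for Personnel / Average Work Order Time
    return avg_waiting_time / avg_work_order_time

avg_time = average_work_order_time([3.0, 5.0, 4.0])  # hours
print(avg_time)                                      # 4.0
print(personnel_waiting_time_ratio(1.0, avg_time))   # 0.25
```

The spare-parts and shutdown variants follow the same division, only over the corresponding subsets of work orders.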


Resource Related

Tota

l Num

ber o

f Int

erna

l La

bour

Hou

rs

Wor

k or

der c

ompl

etio

n da

te ⊆

(que

ry

star

t dat

e, q

uery

end

dat

e)

𝑆𝑆𝑆𝑆𝑆𝑆

(𝑅𝑅𝑁𝑁𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑅𝑅 𝑁𝑁𝑓𝑓𝑁𝑁𝑜𝑜𝑆𝑆𝑁𝑁

𝐻𝐻𝑜𝑜𝑆𝑆𝑁𝑁𝑅𝑅

)

Aver

age

Inte

rnal

Lab

our

Hou

rs U

sed

Wor

k or

der c

ompl

etio

n da

te ⊆

(que

ry

star

t dat

e, q

uery

end

dat

e)

𝑆𝑆𝑆𝑆𝑆𝑆

(𝑅𝑅𝑁𝑁𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑅𝑅 𝑁𝑁𝑓𝑓𝑁𝑁𝑜𝑜𝑆𝑆𝑁𝑁

𝐻𝐻𝑜𝑜𝑆𝑆𝑁𝑁𝑅𝑅

)𝐶𝐶𝑜𝑜𝑆𝑆𝐶𝐶𝑅𝑅 (𝑁𝑁𝑆𝑆𝑆𝑆

𝑁𝑁𝑁𝑁𝑁𝑁 𝑜𝑜𝑜𝑜

𝐶𝐶𝑜𝑜𝑆𝑆

𝑆𝑆𝑓𝑓𝑁𝑁𝑅𝑅𝑁𝑁𝑅𝑅

𝑊𝑊𝑜𝑜𝑁𝑁𝑊𝑊 𝑂𝑂𝑁𝑁𝑅𝑅𝑁𝑁𝑁𝑁𝑅𝑅)

Tota

l Num

ber o

f Ext

erna

l La

bour

Hou

rs

Wor

k or

der c

ompl

etio

n da

te ⊆

(que

ry

star

t dat

e, q

uery

end

dat

e)

𝑆𝑆𝑆𝑆𝑆𝑆

(𝑅𝑅𝑁𝑁𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑅𝑅 𝑁𝑁𝑓𝑓𝑁𝑁𝑜𝑜𝑆𝑆𝑁𝑁

𝐻𝐻𝑜𝑜𝑆𝑆𝑁𝑁𝑅𝑅

)

Aver

age

Exte

rnal

Lab

our

Hou

rs U

sed

Wor

k or

der c

ompl

etio

n da

te ⊆

(que

ry

star

t dat

e, q

uery

end

dat

e)

𝑆𝑆𝑆𝑆𝑆𝑆

(𝑅𝑅𝑁𝑁𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑅𝑅 𝑁𝑁𝑓𝑓𝑁𝑁𝑜𝑜𝑆𝑆𝑁𝑁

𝐻𝐻𝑜𝑜𝑆𝑆𝑁𝑁𝑅𝑅

) 𝐶𝐶𝑜𝑜𝑆𝑆𝐶𝐶𝑅𝑅 (𝑁𝑁𝑆𝑆𝑆𝑆

𝑁𝑁𝑁𝑁𝑁𝑁 𝑜𝑜𝑜𝑜

𝐶𝐶𝑜𝑜𝑆𝑆

𝑆𝑆𝑓𝑓𝑁𝑁𝑅𝑅𝑁𝑁𝑅𝑅

𝑊𝑊𝑜𝑜𝑁𝑁𝑊𝑊 𝑂𝑂𝑁𝑁𝑅𝑅𝑁𝑁𝑁𝑁𝑅𝑅)

Num

ber o

f Mat

eria

ls U

sed

Wor

k or

der c

ompl

etio

n da

te ⊆

(que

ry

star

t dat

e, q

uery

end

dat

e)

𝑆𝑆𝑆𝑆𝑆𝑆

(𝑁𝑁𝑆𝑆𝑆𝑆

𝑁𝑁𝑁𝑁𝑁𝑁 𝑜𝑜𝑜𝑜

𝑅𝑅𝑁𝑁𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑅𝑁𝑁𝑁𝑁𝑁𝑁𝑅𝑅 𝑆𝑆𝑆𝑆𝑓𝑓𝑁𝑁𝑁𝑁 𝑃𝑃𝑓𝑓𝑁𝑁𝑅𝑅𝑅𝑅)

Aver

age

Mat

eria

l Use

d W

ork

orde

r com

plet

ion

date

⊆ (q

uery

st

art d

ate,

que

ry e

… (query start date, query end date). Formula: Sum(Number of Registered Spare Parts) / Count(Completed Work Orders).

Cost Related

- Total Cost of External Human Resources Used. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum(Hourly Rate) × (Registered Foreign Labour Hours).
- Average External Human Resources Costs. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Total Cost of External Human Resources Used / Count(Completed Work Orders) where Work Category = 'external'.
- Total Cost of Materials Used. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum over distinct assets of ((Spare Part Quantity) × (Spare Part Price)).
- Average Cost of Materials Used. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Total Cost of Materials Used / Count(Number of Completed Work Orders).
- External Labour Costs Ratio. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Total Cost of External Human Resources Used / (Total Cost of External Human Resources Used + Total Cost of Materials Used).
- Actual Materials Cost Ratio. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Total Cost of Materials Used / (Total Cost of External Human Resources Used + Total Cost of Materials Used).
- Maintenance Cost per Asset. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum((Spare Part Quantity) × (Spare Part Price)).
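The two cost ratios above share the same denominator, so they are complementary shares of total maintenance spend. A minimal numeric sketch; the figures are invented for illustration and are not from the case company:

```python
# Illustrative cost figures (assumed values, not taken from the thesis).
external_hr_cost = 250_000.0   # Total Cost of External Human Resources Used
materials_cost = 750_000.0     # Total Cost of Materials Used

total = external_hr_cost + materials_cost
external_labour_ratio = external_hr_cost / total   # -> 0.25
materials_ratio = materials_cost / total           # -> 0.75

# Because both ratios divide the same total, they sum to 1 by construction.
```

This complementarity is a useful sanity check when the two KPIs are computed from separate queries: if they do not sum to one, the underlying cost extracts disagree.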


Maintenance Assessment

Quality

- Number of Completed Work Orders Approved. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Count(Work Orders) where Status = 'completed' and WO_Type = 'approved'.
- Work Order Approval Ratio. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Count(Work Orders) where WO_Type = 'require approval' / Count(Number of Work Orders).
- One-Time Pass Internal Completion Rate. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Count(Number of Work Orders) where WO_Type = 'completed one approval' ('approved once') / Count(Number of Work Orders) where Work Group = 'internal'.
- One-Time Pass External Completion Rate. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Count(Number of Work Orders) where WO_Type = 'completed one approval' ('approved once') / Count(Number of Work Orders) where Work Group = 'external'.
- Planning Compliance. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: (Planned Man-Hours Completed / Total Weekly Planned Man-Hours) × 100.

Effectiveness

- Internal Work Completion Rate. Timeline: Planned work order completion date ⊆ (query start date, query termination date). Formula: Sum(Completed Work Orders) / Count(Work Orders) where Work Group = 'internal'.
- Outsourced Work Completion Rate. Timeline: Planned work order completion date ⊆ (query start date, query termination date). Formula: Sum(Completed Work Orders) / Count(Work Orders) where Work Group = 'external'.
- Internal Work Delay Rate. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum(Number of Completed Work Orders) where 'registration date > plan completion date' / Count(Completed Work Order Number).
- Internal Work Average Delay Period. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum((Registration Date) − (Planned Completion Date)) where 'registration date > planned completion date' / Count(Number of Completed Work Orders) where 'registration date > planned completion date'.
- External Work Delay Rate. Timeline: Work order registration date ⊆ (query start date, query termination date). Formula: Sum(Number of Completed Work Orders) where 'registration date > plan completion date' / Count(Completed Work Order Number).
- External Work Average Delay Period. Timeline: Ticket registration date ⊆ (inquiry start date, inquiry termination date). Formula: Sum((Registration Date) − (Planned Completion Date)) where 'registration date > planned completion date' / Count(Number of Completed Orders) where 'registration date > plan completion date'.
- Internal Average Execution Time Deviation Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: ((Internal Working Average Actual Execution Time) − (Internal Working Average Planned Execution Time)) / (Internal Working Average Planned Execution Time).
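The delay-rate KPIs above reduce to simple filtered counts and date differences over work-order records. A minimal sketch for the internal delay rate and average delay period; the record layout and field names are illustrative assumptions, not the company's CMMS schema:

```python
from datetime import date

# Toy work-order records (field names are hypothetical).
work_orders = [
    {"group": "internal", "completed": date(2019, 3, 5), "planned": date(2019, 3, 1)},
    {"group": "internal", "completed": date(2019, 3, 2), "planned": date(2019, 3, 2)},
    {"group": "internal", "completed": date(2019, 3, 9), "planned": date(2019, 3, 4)},
]

internal = [w for w in work_orders if w["group"] == "internal"]
# A work order is delayed when its completion date exceeds the planned date.
delayed = [w for w in internal if w["completed"] > w["planned"]]

delay_rate = len(delayed) / len(internal)              # share of late orders
avg_delay_days = sum((w["completed"] - w["planned"]).days
                     for w in delayed) / len(delayed)  # mean lateness in days
```

With these toy records, two of the three orders are late (rate 2/3) and their mean delay is 4.5 days; the external variants are the same computation filtered on the external work group.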


- External Committee Execution Time Deviation Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: ((External Working Average Actual Execution Time) − (External Working Average Planned Execution Time)) / (External Working Average Planned Execution Time).
- Internal Man-Hour Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: (Sum(Registered Labour Time) − Sum(Planned Labour Time)) / Sum(Planned Labour Time).
- Internal Average Man-Hour Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: ((Internal Average Labour Time) − (Internal Average Plan Labour Time)) / (Internal Average Plan Labour Time).
- External Man-Hour Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: (Sum(Registered Labour Time) − Sum(Planned Labour Time)) / Sum(Planned Labour Time).
- External Average Man-Hour Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: ((External Average Labour Time) − (External Average Plan Labour Time)) / (External Average Plan Labour Time).
- Materials Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: (Sum(Registered Spare Parts) − Sum(Scheduled Spare Parts)) / Sum(Scheduled Spare Parts).
- Average Materials Difference Ratio. Timeline: Work order creation date (date of inquiry, date of inquiry termination). Formula: (Average(Registered Spare Parts) − Average(Scheduled Spare Parts)) / Average(Scheduled Spare Parts).

Table 6: Implementation using eMaintenance for Maintenance Resource Management
(Columns: Level, Name, Timeline, General Formula; levels 3 and 4 are shown as section headings below.)

Spare Parts Management

Inventory Management

- Average Spare Part Quantity. Timeline: Spare parts stock date (query start date, query end date). Formula: ([Opening Stock] + [Ending Stock]) / 2.
- Spare Part Capital Utilization. Timeline: Equipment ledger date (query start date, query end date). Formula: Spare Parts Inventory Funds / Sum(Equipment Purchase Price).
- Spare Parts Capital Replacement Rate. Timeline: Equipment ledger date (query start date, query end date). Formula: Spare Parts Inventory Funds / Sum(Equipment Replacement Cost).
- Spare Part Consumption per Thousand SEK Output. Timeline: Work order registration date ⊆ (query start date, query end date). Formula: Sum(Total Cost of Spare Parts Consumption) / (Total Output Value per 1000 SEK Output).
- Spare Part Turnover Rate. Timeline: Work order registration date ⊆ (query start date, query end date). Formula: Sum(Total Cost of Spare Parts Consumption) / Average Spare Parts Inventory Funds.
- Spare Part Turnover Period. Timeline: Work order registration date ⊆ (query start date, query end date). Formula: (Average Spare Parts Inventory Funds / Sum(Total Cost of Spare Parts Consumption)) × 365.
- Slow Moving Inventory Ratio. Timeline: Equipment ledger date (query start date, query end date). Formula: Number of Spare Parts Not Used / Total Number of Spare Parts.
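The turnover rate and turnover period above are reciprocals of each other up to the factor of 365. A small sketch with invented figures (the monetary values are assumptions for illustration only):

```python
def turnover_rate(total_consumption_cost, avg_inventory_funds):
    """Spare Part Turnover Rate: annual consumption cost over average inventory funds."""
    return total_consumption_cost / avg_inventory_funds

def turnover_period_days(total_consumption_cost, avg_inventory_funds):
    """Spare Part Turnover Period: average days of stock on hand."""
    return 365 * avg_inventory_funds / total_consumption_cost

# Assumed annual consumption of 1.2 MSEK against 0.3 MSEK of average stock.
rate = turnover_rate(1_200_000, 300_000)            # -> 4.0 turns per year
period = turnover_period_days(1_200_000, 300_000)   # -> 91.25 days
```

A stock that turns four times a year therefore sits, on average, about 91 days before being consumed; a long period together with a high Slow Moving Inventory Ratio would flag tied-up spare-part capital.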


Outsourcing Management

Contractor Statistics

- Number of Outsourced Equipment Breakdowns. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Registered Stops) where is 'maintenance personnel' & 'outsourced'.
- Number of Outsourced Maintenance Personnel. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'outsourced'.
- External Maintenance Cost Ratio. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: (Count(Number of External Personnel) × (Rate of Work Done) / Total Maintenance Cost) × 100.

Human Resources Management

Skills Management

- Total Number of Maintenance Operators. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'operator'.
- Total Number of Maintenance Engineers. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'engineer'.
- Number of Multi-Skilled Maintenance Personnel. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'multi-skill'.
- Maintenance Operator Rate. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: (Count(Number of Employees) where is 'maintenance personnel' & 'operator' / Count(Number of Employees)) × 100.
- Maintenance Engineer Rate. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: (Count(Number of Employees) where is 'maintenance personnel' & 'engineer' / Count(Number of Employees)) × 100.
- Multi-Skilled Maintenance Personnel Rate. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: (Count(Number of Employees) where is 'maintenance personnel' & 'multi-skill' / Count(Number of Employees)) × 100.

Work Load Management

- Average Number of Work Orders Created per Person. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Sum(Number of Work Orders) / Count(Number of Employees) where is 'responsible for work order creation'.
- Average Number of Work Orders Executed per Person. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Sum(Number of Work Orders) / Count(Number of Employees) where is 'responsible for work order execution'.
- Average Daily Workload per Person. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: Sum(Registered Hours) / Count(Number of Employees) where is 'responsible for work order execution' / (Number of Days during Inquiry).
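The skills-management rates above are all the same pattern: a filtered headcount over the total maintenance headcount, times 100. A sketch with a hypothetical staff list (names, departments and role labels are assumptions, not the company's data):

```python
# Hypothetical personnel records for illustration.
employees = [
    {"name": "A", "dept": "maintenance personnel", "role": "operator"},
    {"name": "B", "dept": "maintenance personnel", "role": "engineer"},
    {"name": "C", "dept": "maintenance personnel", "role": "operator"},
    {"name": "D", "dept": "maintenance personnel", "role": "multi-skill"},
]

def role_rate(staff, role):
    """Share (%) of maintenance personnel holding a given role."""
    maint = [e for e in staff if e["dept"] == "maintenance personnel"]
    return 100.0 * sum(e["role"] == role for e in maint) / len(maint)

operator_rate = role_rate(employees, "operator")       # -> 50.0
engineer_rate = role_rate(employees, "engineer")       # -> 25.0
```

The same filter-and-count shape applies to the engineer and multi-skill rates, which is why the framework can implement all three KPIs from one query template with a different role predicate.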


Training Management

- Average Annual Training Hours per Maintenance Operator. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: (Sum(Registered Hours) / Count(Number of Employees) where is 'maintenance personnel' & 'operator') × 365.
- Average Annual Training Hours per Maintenance Engineer. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: (Sum(Registered Hours) / Count(Number of Employees) where is 'maintenance personnel' & 'engineer') × 365.
- Average Annual Training Hours per Multi-Skilled Maintenance Engineer. Timeline: Work order creation date ⊆ (query start date, query end date). Formula: (Sum(Registered Hours) / Count(Number of Employees) where is 'maintenance personnel' & 'multi-skilled') × 365.

Competence Development

- Number of New Senior Maintenance Engineers. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'operator' & new role = 'maintenance engineer'.
- Percentage of New Senior Maintenance Engineers. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is 'maintenance personnel' & 'operator' & new role = 'maintenance engineer' × 100.
- Number of New Multi-Skilled Maintenance Engineers. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is ('maintenance personnel' & 'maintenance engineer') & (new role = 'multi-skilled maintenance engineer').
- Percentage of New Multi-Skilled Maintenance Engineers. Timeline: Work supplier list date ⊆ (query start date, query end date). Formula: Count(Number of Employees) where is ('maintenance personnel' & 'maintenance engineer') and (new role = 'multi-skilled maintenance engineer') × 100.


7. Discussion

The frameworks and approaches highlighted in Section 1 cannot solve the problems of the case study mining company. In terms of technical KPIs, the company is at the top of its game, possibly because measuring the performance of machines is less complicated than measuring the maintenance process: with the right configuration and devices, sensors can take measurements automatically with little human involvement. However, the company does not have a way to measure soft KPIs.

Our proposed KPI framework has 111 soft KPIs, 85 for the maintenance process and 26 for maintenance resources. Following the maintenance process in the IEV standard, this KPI framework provides KPIs to measure maintenance strategy, maintenance planning, maintenance preparation, maintenance execution and maintenance assessment. These form the basis of the maintenance process. Thus, the proposed system is a more holistic performance measurement system; it will be beneficial to this mining organization, as there are some dependencies between soft KPIs and other organizational KPIs as shown in Figure 2.

Figure 2: Dependencies between Organizational KPIs (hierarchy, from top: Overall KPIs, Marketing KPIs, Manufacturing Execution System KPIs, Production KPIs, Technical KPIs, Soft KPIs)

Soft KPIs can affect the technical KPIs in the long run and increase or decrease utilization and plant speed. When utilization and plant speed decrease, total production output will also decrease. In some cases, quality can be affected. Thus, both the soft KPIs and the technical KPIs affect the production KPIs. The values of the soft and technical KPIs reflect how well maintenance activities are going. Ineffective maintenance will not give optimal production and can affect the quality of the product, in this case, iron ore, and/or reduce production times because of breakdowns. Poor production, in turn, will not give good manufacturing execution system (MES) KPIs. This will eventually reduce the marketing KPI values, as customers will not pay high prices for products that are not of the highest quality, and the company's overall KPIs will suffer. Each KPI in the proposed framework has a relationship of some kind with the KPIs above and below it; thus, changes in one KPI have a ripple effect on other KPIs. Recognizing appropriate soft KPIs and improving their values will increase overall capacity utilization, not just in maintenance but in all areas of the organization.

8. Conclusions

This study proposes a KPI framework for maintenance management in a mining company. The KPI framework comprises two parts: technical KPIs (linked to machines) and business KPIs (linked to workflow); the latter are also called "soft KPIs" internally. The developed KPI framework has four levels. On the second level are Asset Operation Management, which deals with KPIs that measure maintenance performance relative to the equipment condition; Maintenance Process Management, which deals with KPIs that measure the efficiency and effectiveness of the consistent application of maintenance and maintenance support; and Maintenance Resources Management, which deals with KPIs that measure spare part management, internal maintenance personnel management and external maintenance personnel management. The third level shows a further breakdown of the items on the second level, while the fourth level shows the KPIs that are made up of the third-level classifications. The proposed KPI framework presents 134 KPIs that can be used to measure maintenance performance and streamline maintenance processes. Twenty-three KPIs are technical and 111 are soft. Of the latter, 85 KPIs are for maintenance process management and 26 are for maintenance resources management. The study suggests soft KPIs can help to track maintenance strategy, maintenance planning, preparation and execution; they can also show how well maintenance tasks are achieved and track the use of resources, including internal and external maintenance personnel and spare parts for maintenance tasks. Ultimately, an integrated KPI approach will increase decision makers' awareness of maintenance performance, enhancing their decision-making abilities. Besides the proposed KPI framework, another contribution of this study is to address implementation by introducing the time definition and general formula of each specified KPI.
Results from this study will be applied to the studied company and will guide the implementation of those KPIs through eMaintenance.

Acknowledgements The motivation for the research originated from the project “Key Performance Indicators (KPI) for control and management of maintenance process through eMaintenance (In Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)”, initiated and financed by LKAB. The authors wish to thank Peter Olofsson, Mats Renfors, Sylvia Simma, Maria Rytty, Mikael From and Johan Enbak, for their support for this research in the form of funding and work hours.




Paper B

System availability assessment using a parametric Bayesian approach – a case study of balling drums


Saari, E., Lin, J., Zhang, L.-W., Liu, B. and Karim, R. 2019. System availability assessment using a parametric Bayesian approach – a case study of balling drums. International Journal of System Assurance Engineering and Management. Accepted.


System availability assessment using a parametric Bayesian approach

A case study of balling drums

Esi Saari1, Jing Lin1*, Liangwei Zhang1,2, Bin Liu3, Ramin Karim1

1. Division of Operation and Maintenance Engineering, Luleå University of Technology, Luleå, Sweden; 2. Department of Industrial Engineering, Dongguan University of Technology, Dongguan, China; 3. Department of Management Science, University of Strathclyde, Glasgow, UK.

* Corresponding author; E-mail address: [email protected]

Abstract: Assessment of system availability usually uses either an analytical (e.g., Markov/semi-Markov) or a simulation approach (e.g., Monte Carlo simulation-based). However, the former cannot handle complicated state changes and the latter is computationally expensive. Traditional Bayesian approaches may solve these problems; however, because of their computational difficulties, they are not widely applied. The recent proliferation of Markov Chain Monte Carlo (MCMC) approaches has led to the use of Bayesian inference in a wide variety of fields. This study proposes a new approach to system availability assessment: a parametric Bayesian approach using MCMC, which takes advantage of both the analytical and simulation methods. In this approach, Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being "averaged", which better reflects reality and compensates for the limitations of simulation data sample size. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined by a Bayesian Weibull model and a Bayesian lognormal model, respectively. The results show that the proposed approach can integrate the analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems).

Keywords: asset management, system availability, reliability, maintainability, Bayesian statistics, Markov Chain Monte Carlo (MCMC), mining industry.

1. Introduction

Availability represents the proportion of a system's uptime out of the total time in service and is one of the most critical aspects of performance evaluation. It is commonly quantified in terms of Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR). However, those "mean" values are normally "averaged"; thus, some useful information (e.g., trends, system complexity) may be neglected, and some problems may even be hidden.
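As a point of reference, steady-state availability is conventionally computed from these two quantities as MTTF / (MTTF + MTTR). The sketch below contrasts that single point estimate with the distributional view advocated in this paper; all numerical parameters are assumed for illustration and are not taken from the case study:

```python
import random

random.seed(1)

# Point estimate: availability from "averaged" MTTF and MTTR (hours, assumed).
mttf, mttr = 480.0, 20.0
a_point = mttf / (mttf + mttr)   # -> 0.96

# Distributional view: Weibull failure times and lognormal repair times,
# mirroring the model families used later in the paper (parameters assumed).
ttf = [random.weibullvariate(500, 1.5) for _ in range(10_000)]
ttr = [random.lognormvariate(2.8, 0.5) for _ in range(10_000)]

# Availability per sampled (failure, repair) pair: a whole distribution,
# not a single number, so spread and tail behaviour remain visible.
a_samples = [f / (f + r) for f, r in zip(ttf, ttr)]
a_mean = sum(a_samples) / len(a_samples)
```

The point estimate hides the spread of `a_samples`; a decision maker looking only at `a_point` would not see how often availability dips well below the mean.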

Assessment of system availability has been studied from the design stage to the operational stage in various system configurations (e.g., in series, parallel, k-out-of-n, stand-by, multi-state, or mixed architectures). Approaches to assessing system availability mainly use either analytic or simulation techniques.

In general, analytic techniques represent the system using direct mathematical solutions from applied probability theory to make statements on various performance measures, such as the steady-state availability or the interval availability (Dekker & Groenendijk, 1995; Ocnasu, 2007). Researchers tend to use Markov models to assess dynamic availability or semi-Markov models using Laplace transforms to determine average performance measures (Dekker & Groenendijk, 1995; Faghih-Roohi, et al., 2014). However, such approaches have been criticised as too restrictive to tackle practical problems; they assume constant failure and repair rates, which is not likely to be the case in the real world (Raje, et al., 2000; Marquez, et al., 2005). Furthermore, the time-dependent availability obtained by a Markovian assumption is actually not valid for non-Markovian processes (Raje, et al., 2000).

Simulation techniques estimate availability by simulating the actual process and random behaviour of the system. The advantage is that non-Markov failure and repair processes can be modelled easily (Raje, et al., 2000). Recent research has developed Monte Carlo techniques to model the behaviour of complex systems under realistic time-dependent operational conditions (Marquez, et al., 2005; Marquez & Iung, 2007; Yasseri & Bahai, 2018) or to model multi-state systems with operational dependencies (Zio, et al., 2007). Although simulation is more flexible, it is computationally expensive.

Traditionally, Bayesian approaches have been used to assess system availability, as they can handle complicated system state changes and avoid computationally expensive simulation; however, their development and application were stalled by strict assumptions on prior forms and by computational difficulties. Research has been more concerned with the selection of the prior or the computation of the posterior than with reality (Brender, 1968; Kuo, 1985; Sharma & Bhutani, 1993; Khan & Islam, 2012).

The recent proliferation of Markov Chain Monte Carlo (MCMC) simulation techniques has led to the use of Bayesian inference in a wide variety of fields. Because MCMC handles the high-dimensional numerical integral calculation (Lin, 2014), the selection of prior information and the descriptions of reliability/maintainability can be more flexible and more realistic.
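As a minimal illustration of the MCMC machinery, the sketch below runs a random-walk Metropolis sampler for the shape parameter of a Weibull failure-time model with a flat prior and a known scale. The data are synthetic and all settings (scale, proposal width, burn-in) are assumptions for demonstration, not the models of this paper:

```python
import math
import random

random.seed(0)
# Synthetic failure times from a Weibull with scale 100 and shape 2.0.
data = [random.weibullvariate(100, 2.0) for _ in range(50)]

def log_lik(shape, scale=100.0):
    """Weibull log-likelihood with known scale; invalid shapes get -inf."""
    if shape <= 0:
        return float("-inf")
    return sum(math.log(shape / scale) + (shape - 1) * math.log(t / scale)
               - (t / scale) ** shape for t in data)

# Random-walk Metropolis over the shape parameter (flat prior on shape > 0):
# propose a Gaussian step, accept with probability min(1, likelihood ratio).
chain, shape = [], 1.0
for _ in range(5000):
    prop = shape + random.gauss(0, 0.1)
    if math.log(random.random()) < log_lik(prop) - log_lik(shape):
        shape = prop
    chain.append(shape)

# Discard burn-in, then summarise the posterior by its mean.
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

With a flat prior, the posterior concentrates near the maximum-likelihood shape, so `posterior_mean` lands close to the generating value of 2.0; the full chain, not just its mean, is what the Bayesian treatment carries forward into availability estimates.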

This study proposes a new approach to system availability assessment: a parametric Bayesian approach with MCMC, focused on the operational stage and using both analytical and simulation methods. MTTF and MTTR are treated as distributions instead of being "averaged" by point estimation, which is closer to reality; in addition, the limitations of simulation data sample size are addressed by using MCMC techniques.

The rest of this paper is organized as follows. Section 2 describes the problem statement, the balling drum system, the data preparation, and the preliminary analysis of failure and repair data. Section 3 proposes a Bayesian Weibull model for MTTF and a Bayesian lognormal model for MTTR and explains how to use an MCMC computational scheme to obtain the parameters’ posterior distributions. Section 4 presents a case study, results, and discussion. Section 5 offers conclusions and suggestions for further study.

2. Problem statement

This section presents the study problem statement, the balling drum system and its configuration, the system availability framework, and data preparation; it performs a


preliminary analysis of failure and repair data based on which parametric Bayesian models are constructed subsequently.

2.1 Balling drum systems in the mining industry

Our study is motivated by a balling drum system in the mining industry. The case study mine has five balling drums, labelled 1-5 (see Figure 1). All five balling drums receive their feed for production in the same manner, and each is expected to produce the same amount of pellets at its maximum. According to the working mechanism and an i.i.d. test, they are regarded as independent; if one balling drum breaks down, it does not affect the others, except that total production is reduced. One assumption is made here: the system fails only if all subsystems fail. Therefore, it is treated as a parallel system.

Figure 1 Description of a balling drum and the system sketch (five parallel drums, Balling drum 1 to Balling drum 5)

The availability of a single balling drum, denoted A_i, can be computed by

A_i = MTTF_i / (MTTF_i + MTTR_i)    (1)

According to the assumption, the total system availability, A_s, can be calculated as

A_s = 1 − ∏_{i=1}^{5} (1 − A_i)    (2)
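Equations (1) and (2) can be sketched in code as follows; this is a minimal illustration with hypothetical (MTTF, MTTR) values in hours, not the study's estimates:

```python
def availability(mttf, mttr):
    """Equation (1): steady-state availability of a single drum."""
    return mttf / (mttf + mttr)

def system_availability(avail_list):
    """Equation (2): a parallel system is down only if every drum is down."""
    prod_unavail = 1.0
    for a in avail_list:
        prod_unavail *= (1.0 - a)
    return 1.0 - prod_unavail

# Hypothetical per-drum (MTTF, MTTR) pairs in hours, for illustration only
drums = [(145.0, 7.8), (196.4, 15.5), (128.7, 6.4), (148.5, 7.2), (115.8, 7.1)]
A = [availability(f, r) for f, r in drums]
A_s = system_availability(A)
```

Because the drums are assumed independent and in parallel, the system unavailability is the product of the individual unavailabilities, which is why A_s is so close to 1 even for modest per-drum availability.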

2.2 Data preparation and preliminary analysis

The study uses the failure and repair data of the five balling drums from January 2013 to December 2018. There are 1782 records. In the first step, the null values are removed, and the data are reduced to 1774 records.

The next step reveals that there are different reasons for the TTF and TTR of individual balling drums. For the TTR data, if 150 shutdowns are considered normal (denoted as a threshold; see Figure 2), then those exceeding 150 should be treated as abnormal and investigated using Root Cause Analysis (RCA).


After checking the work order types of such abnormal data, we find that most are caused by "preventive maintenance", which may be due to a lack of maintenance resources. To simplify the study, we assume all maintenance resources are sufficient for "preventive maintenance"; thus, abnormal data possibly caused by a shortage of spare parts or skilled personnel are not treated specially in this paper.

Figure 2 Example of TTR data for balling drum 1

To determine the baseline distribution of Time to Failure (TTF) and Time to Repair (TTR), we conduct a preliminary study of failure data and repair data using traditional analysis. In this preliminary study, several distributions are considered: exponential distribution, Weibull distribution, normal distribution, log-logistic distribution, lognormal distribution, and extreme value distribution. Table 1 lists the results.

Table 1 Preliminary study of failure data and repair data

Balling drum | TTF fitness: 1st / 2nd / 3rd | TTR fitness: 1st / 2nd / 3rd
1 | Weibull / Log-logistic / Lognormal | Lognormal / Weibull / Logistic
2 | Weibull / Log-logistic / Lognormal | Lognormal / Weibull / Logistic
3 | Weibull / Log-logistic / Lognormal | Lognormal / Weibull / Logistic
4 | Weibull / Log-logistic / Lognormal | Lognormal / Weibull / Logistic
5 | Weibull / Log-logistic / Lognormal | Lognormal / Weibull / Logistic

Based on the results, the Weibull distribution and lognormal distribution are selected for the TTF and TTR for balling drums 1 to 5; these are applied to the parametric Bayesian models in the next section.
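A preliminary fitting study of this kind can be sketched as follows. This is only an illustration on synthetic TTF data with a reduced candidate set (the actual study compares six distributions over the 1774 records); it assumes SciPy's distribution-fitting API and ranks candidates by AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic TTF sample (hours) standing in for the real drum records
ttf = rng.weibull(0.6, size=500) * 12.0

candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(ttf, floc=0)            # fix the location at zero
    loglik = dist.logpdf(ttf, *params).sum()  # maximized log-likelihood
    # Akaike information criterion (counting the fixed location parameter
    # is harmless here, since it is the same for every candidate family)
    aic[name] = 2 * len(params) - 2 * loglik

best = min(aic, key=aic.get)                  # smallest AIC wins
```

The same loop extends directly to the log-logistic, normal, and extreme value families used in the thesis; goodness-of-fit ranks like those in Table 1 would come from such a comparison.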


3. Parametric Bayesian Models

This section proposes a Bayesian Weibull model for TTF and a Bayesian lognormal model for TTR, and explains the MCMC computational scheme used to obtain the posterior distributions.

3.1 Markov Chain Monte Carlo with Gibbs sampling

The recent proliferation of Markov Chain Monte Carlo (MCMC) approaches has led to the use of Bayesian inference in a wide variety of fields. MCMC is essentially Monte Carlo integration using Markov chains. Monte Carlo integration draws samples from the required distribution and then forms sample averages to approximate expectations; MCMC draws these samples by running a cleverly constructed Markov chain for a long time. There are many ways of constructing such chains. The Gibbs sampler is one of the best-known MCMC sampling algorithms in the Bayesian computational literature. It adopts the thinking of "divide and conquer": when one parameter must be updated, the other parameters are assumed to be fixed and known. Let θ = (θ_1, …, θ_k) be a k-dimensional vector of parameters, and let f(θ_j | ·) denote the conditional distribution of the j-th parameter given the others. The basic scheme of the Gibbs sampler for sampling from p(θ) is as follows:

Step 1. Choose an arbitrary starting point θ^(0) = (θ_1^(0), …, θ_k^(0));

Step 2. Generate θ_1^(1) from the conditional distribution f(θ_1 | θ_2^(0), …, θ_k^(0)), and generate θ_2^(1) from the conditional distribution f(θ_2 | θ_1^(1), θ_3^(0), …, θ_k^(0));

Step 3. Generate θ_j^(1) from f(θ_j | θ_1^(1), …, θ_{j−1}^(1), θ_{j+1}^(0), …, θ_k^(0));

Step 4. Generate θ_k^(1) from f(θ_k | θ_1^(1), θ_2^(1), …, θ_{k−1}^(1)); the one-step transition from θ^(0) to θ^(1) = (θ_1^(1), …, θ_k^(1)) is then complete, where θ^(1) is one realization of the Markov chain;

Step 5. Go to Step 2.

After t iterations, θ^(t) = (θ_1^(t), …, θ_k^(t)) is obtained, along with each of its components. Starting from different θ^(0), as t → ∞, the marginal distribution of θ^(t) can be viewed as the stationary distribution based on the theory of the ergodic average. The chain is then seen as converging, and the sampled points are treated as observations of the sample.
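As a toy illustration of the scheme above, the following sketch runs a Gibbs sampler for a standard bivariate normal, where both full conditionals are available in closed form (an illustrative example, not the paper's model):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=1000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Both full conditionals are known exactly:
        x | y ~ N(rho*y, 1 - rho^2),   y | x ~ N(rho*x, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                          # Step 1: arbitrary starting point
    s = np.sqrt(1.0 - rho ** 2)              # conditional standard deviation
    draws = []
    for t in range(n_iter):
        x = rng.normal(rho * y, s)           # draw x from f(x | y)
        y = rng.normal(rho * x, s)           # draw y from f(y | x)
        if t >= burn_in:                     # discard the burn-in transient
            draws.append((x, y))
    return np.array(draws)

samples = gibbs_bivariate_normal(rho=0.8)
```

The retained draws behave like (correlated) samples from the joint target, so sample averages approximate posterior expectations, exactly as described in the scheme above.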

3.2 Bayesian Weibull model for TTF

Suppose the Time to Failure (TTF) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., each following a 2-parameter Weibull distribution W(α, γ), where α > 0 and γ > 0. Then the p.d.f. is f(t|α, γ) = αγ t^(α−1) exp(−γ t^α), while the c.d.f. is F(t|α, γ) = 1 − exp(−γ t^α). The reliability function is R(t|α, γ) = exp(−γ t^α).

Denote the observed data set as D = (n, t). Therefore, the likelihood function for α and γ is

L(α, γ|D) = ∏_{i=1}^{n} f(t_i|α, γ) = ∏_{i=1}^{n} αγ t_i^(α−1) exp(−γ t_i^α)    (3)


In this study, we assume a gamma prior for α (Kuo, 1985), denoted G(a₀, b₀) and written π(α|a₀, b₀), and a gamma prior for γ, denoted G(c₀, d₀) and written π(γ|c₀, d₀). This means

π(α|a₀, b₀) ∝ α^(a₀−1) exp(−b₀α)    (4)

π(γ|c₀, d₀) ∝ γ^(c₀−1) exp(−d₀γ)    (5)

Therefore, the joint posterior distribution can be obtained according to equations (3) to (5) as

π(α, γ|D) ∝ L(α, γ|D) π(α|a₀, b₀) π(γ|c₀, d₀)    (6)

and the parameters' full conditional distributions for Gibbs sampling can be written as

π(α|γ, D) ∝ L(α, γ|D) α^(a₀−1) exp(−b₀α)    (7)

π(γ|α, D) ∝ L(α, γ|D) γ^(c₀−1) exp(−d₀γ)    (8)
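One way to sample from these conditionals is a hybrid scheme: the conditional for γ simplifies to a gamma distribution, Gamma(c₀ + n, d₀ + Σ t_i^α), and can be drawn exactly, while the conditional for α has no standard form and can be updated with a Metropolis step. The sketch below, run on synthetic data, is one possible implementation and not the WinBUGS code used in the study:

```python
import numpy as np

def weibull_gibbs(t, a0=1e-4, b0=1e-4, c0=1e-4, d0=1e-4,
                  n_iter=5000, burn_in=1000, seed=0):
    """Hybrid ("Metropolis-within-Gibbs") sampler for the W(alpha, gamma)
    posterior of Section 3.2. gamma | alpha, D is conjugate:
        Gamma(c0 + n, rate = d0 + sum(t_i^alpha)),
    while alpha is updated by random-walk Metropolis on log(alpha)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    n = len(t)
    sum_log_t = np.log(t).sum()
    alpha, gamma = 1.0, 1.0 / t.mean()          # arbitrary starting point

    def log_post_alpha(a, g):
        # log of likelihood (3) times the gamma prior (4), up to a constant
        return (n * np.log(a) + (a - 1.0) * sum_log_t - g * np.sum(t ** a)
                + (a0 - 1.0) * np.log(a) - b0 * a)

    keep = []
    for it in range(n_iter):
        # exact Gibbs draw for gamma (numpy uses shape/scale, so scale = 1/rate)
        gamma = rng.gamma(c0 + n, 1.0 / (d0 + np.sum(t ** alpha)))
        # Metropolis update for alpha; log(prop/alpha) is the log-scale
        # proposal's Jacobian correction
        prop = alpha * np.exp(0.1 * rng.normal())
        log_acc = (log_post_alpha(prop, gamma) - log_post_alpha(alpha, gamma)
                   + np.log(prop / alpha))
        if np.log(rng.uniform()) < log_acc:
            alpha = prop
        if it >= burn_in:
            keep.append((alpha, gamma))
    return np.array(keep)

# Synthetic TTF sample: Weibull with shape 0.6 and scale 10 (hours),
# standing in for the real drum records
rng = np.random.default_rng(7)
ttf = rng.weibull(0.6, 400) * 10.0
draws = weibull_gibbs(ttf)
alpha_hat = draws[:, 0].mean()
```

With enough data the posterior mean of α lands near the generating shape (0.6 here), which mirrors the sub-unity shape estimates reported for the drums in Table 3.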

3.3 Bayesian lognormal model for TTR

Suppose the Time to Repair (TTR) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., and each ln(t_i) follows a normal distribution N(μ, σ²); t_i then follows a lognormal distribution with parameters μ and σ². The p.d.f. and c.d.f. are given by equations (9) and (10):

f(t|μ, σ) = (1/(√(2π) σ t)) exp(−(ln t − μ)²/(2σ²))    (9)

F(t|μ, σ) = Φ((ln t − μ)/σ)    (10)

Denote the observed data set as D = (n, t). Therefore, according to equation (9), the likelihood function for μ and σ becomes

L(μ, σ|D) = ∏_{i=1}^{n} f(t_i|μ, σ)    (11)

In this study, we assume a normal prior for μ, denoted N(e₀, 1/f₀) with precision f₀ and written π(μ|e₀, f₀), and a gamma prior for σ, denoted G(g₀, h₀) and written π(σ|g₀, h₀). This means

π(μ|e₀, f₀) ∝ f₀^(1/2) exp(−(f₀/2)(μ − e₀)²)    (12)


π(σ|g₀, h₀) ∝ σ^(g₀−1) exp(−h₀σ)    (13)

Therefore, the joint posterior distribution can be obtained according to equations (11) to (13) as

π(μ, σ|D) ∝ L(μ, σ|D) π(μ|e₀, f₀) π(σ|g₀, h₀)    (14)

Then, the parameters' full conditional distributions for Gibbs sampling can be written as

π(μ|σ, D) ∝ L(μ, σ|D) f₀^(1/2) exp(−(f₀/2)(μ − e₀)²)    (15)

π(σ|μ, D) ∝ L(μ, σ|D) σ^(g₀−1) exp(−h₀σ)    (16)
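For fixed σ, the full conditional of μ is available in closed form, since the normal prior on μ (with precision f₀) is conjugate to the normal likelihood of ln t. A minimal sketch with hypothetical repair times:

```python
import math

def mu_full_conditional(log_t, sigma, e0=0.0, f0=1e-4):
    """Closed-form full conditional of mu (cf. equation (15)): a normal
    prior N(e0, 1/f0), with f0 a precision, combined with the likelihood
    of y_i = ln(t_i) ~ N(mu, sigma^2) yields a normal posterior."""
    n = len(log_t)
    post_prec = f0 + n / sigma ** 2                    # posterior precision
    post_mean = (f0 * e0 + sum(log_t) / sigma ** 2) / post_prec
    return post_mean, 1.0 / post_prec                  # mean and variance

# Hypothetical repair times (hours), for illustration only
ttr = [2.1, 5.3, 1.7, 9.8, 3.2, 4.4, 6.0, 2.9]
log_t = [math.log(x) for x in ttr]
m, v = mu_full_conditional(log_t, sigma=0.5)
```

With the vague precision f₀ = 10⁻⁴ the posterior mean of μ is pulled almost entirely to the sample mean of ln t, which is why a Gibbs step for μ mixes well in this model.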

4. Case study

This section presents a case study; it explains the procedure, gives the results, and offers a discussion.

4.1 The procedure

The procedure applied in this case study to assess the system availability of the mine’s five balling drums has a total of seven steps, as described in Table 2.

Table 2. Steps in the system availability assessment

Step | Name | Purpose | Outputs in this case
1 | Configuration definition | System configuration and dependencies determined to calculate system availability. | Five balling drums, parallel and independent (see Section 2.1).
2 | Data collection | Reliability and maintenance data (and information) collected. | 1774 records of failure and repair data for the five balling drums, collected from 2013 to 2018 (see Section 2.2).
3 | Data preparation | Data cleaned and outliers removed as needed. | Null values removed and abnormal data checked (see Section 2.2).
4 | Preliminary analysis | Pre-studies of TTF and TTR data performed to decide the baseline distributions. | MTTF fits a Weibull distribution; MTTR fits a lognormal distribution (see Section 2.2).
5 | Parametric Bayesian model building | Prior distributions defined, and analytic models developed. | Bayesian Weibull model for MTTF with gamma priors and Bayesian lognormal model with normal and gamma priors constructed (see Section 3).
6 | MCMC simulation | Burn-in defined and MCMC simulation implemented; convergence diagnostics and Monte Carlo error checked to confirm the effectiveness of the results. | Burn-in of 1000 samples used with an additional 10,000 Gibbs samples for each Markov chain (see Sections 3 and 4.2).
7 | Results and analysis | Results, calculation, and discussion. | Results for parameters of interest in the system availability assessment (see Sections 4.2 and 4.3).


4.2 Results

In this case study, the calculations are implemented in WinBUGS. Three Markov chains are constructed for each MCMC simulation. A burn-in of 1000 samples is used, with an additional 10,000 Gibbs samples for each chain.

Vague prior distributions are adopted as follows:

For the Bayesian Weibull model using TTF data:

α ~ G(0.0001, 0.0001), γ ~ G(0.0001, 0.0001);

For the Bayesian lognormal model using TTR data:

μ ~ N(0, 0.0001), σ ~ G(0.0001, 0.0001).

Using the convergence diagnostics (i.e., checking dynamic traces of the Markov chains, examining time series and Gelman-Rubin-Brooks (GRB) statistics, and comparing the Monte Carlo error (MC error) with the standard deviation (SD)) (Lin, 2014), we obtain the following posterior summaries for our models (see Table 3 and Table 4), including each parameter's posterior mean, SD, MC error, and 95% highest posterior density (HPD) interval.
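The Gelman-Rubin-Brooks statistic mentioned above can be computed directly from parallel chains; the following minimal sketch of the classic potential scale reduction factor uses synthetic chains rather than the study's WinBUGS output:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains of
    length n; values close to 1 indicate convergence (1.1 is a common cutoff)."""
    chains = np.asarray(chains, dtype=float)   # shape (m, n)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
# Three well-mixed chains sampling the same distribution: R-hat ~ 1
good = rng.normal(0.0, 1.0, size=(3, 10000))
r_hat = gelman_rubin(good)
```

Chains stuck at different locations inflate the between-chain variance B and push R-hat well above 1, which is the signal to extend the burn-in or rework the model.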

Table 3 Posterior statistics in the Bayesian Weibull model for TTF

Balling drum | Parameter | Mean | SD | MC error | 95% HPD interval
1 | α | 0.5409 | 0.0231 | 4.288E-4 | (0.4964, 0.5867)
1 | γ | 0.0928 | 0.0120 | 2.235E-4 | (0.0712, 0.1178)
2 | α | 0.5747 | 0.0288 | 6.289E-4 | (0.5195, 0.6324)
2 | γ | 0.0642 | 0.0109 | 2.334E-4 | (0.0451, 0.0876)
3 | α | 0.5975 | 0.0251 | 5.004E-4 | (0.5974, 0.6481)
3 | γ | 0.0712 | 0.0098 | 1.942E-4 | (0.0707, 0.0922)
4 | α | 0.5745 | 0.0245 | 4.885E-4 | (0.5272, 0.6236)
4 | γ | 0.0750 | 0.0104 | 2.028E-4 | (0.0564, 0.0970)
5 | α | 0.5560 | 0.0216 | 4.135E-4 | (0.5558, 0.5988)
5 | γ | 0.0958 | 0.0112 | 2.158E-4 | (0.0952, 0.1196)

Table 4 Posterior statistics in the Bayesian lognormal model for TTR

Balling drum | Parameter | Mean | SD | MC error | 95% HPD interval
1 | μ | -0.1842 | 0.1107 | 6.730E-4 | (-0.4015, 0.0342)
1 | σ | 0.2270 | 0.0169 | 9.565E-5 | (0.1951, 0.2615)
2 | μ | -0.0075 | 0.1424 | 8.504E-4 | (-0.2845, 0.2697)
2 | σ | 0.1861 | 0.0161 | 9.140E-5 | (0.1556, 0.2193)
3 | μ | -0.4574 | 0.1134 | 6.540E-4 | (-0.4578, -0.2354)
3 | σ | 0.2196 | 0.0164 | 9.621E-5 | (0.2191, 0.2533)
4 | μ | -0.3540 | 0.1145 | 7.052E-4 | (-0.5787, -0.1297)
4 | σ | 0.2184 | 0.0166 | 9.845E-5 | (0.1871, 0.2523)
5 | μ | -0.3484 | 0.1023 | 6.265E-4 | (-0.3486, -0.1488)
5 | σ | 0.2195 | 0.0148 | 8.614E-5 | (0.2189, 0.2495)


Using the results from Table 3 and Table 4, we calculate the availability of the individual balling drums in Table 5, where MTTF = E[t|α, γ] and MTTR = E[t|μ, σ].

Table 5 Statistics of individual availability

Balling drum | MTTF mean | MTTF 95% HPD | MTTR mean | MTTR 95% HPD | Availability mean | Availability 95% HPD
1 | 145.0 | (118.1, 178.0) | 7.779 | (5.284, 11.58) | 0.9487 | (0.9229, 0.9665)
2 | 196.4 | (157.7, 256.0) | 15.48 | (8.927, 26.60) | 0.9265 | (0.8766, 0.9582)
3 | 128.7 | (127.9, 155.0) | 6.381 | (6.194, 9.622) | 0.9525 | (0.9538, 0.9693)
4 | 148.5 | (122.5, 180.3) | 7.178 | (4.755, 10.86) | 0.9536 | (0.9291, 0.9702)
5 | 115.8 | (115.1, 139.0) | 7.083 | (6.926, 10.22) | 0.9420 | (0.9433, 0.9610)

According to equation (2), the system availability of the five balling drums is

A_s = 1 − ∏_{i=1}^{5} (1 − A_i) ≈ 0.99
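As a quick numerical check, the posterior means of individual availability from Table 5 can be plugged into equation (2):

```python
# Posterior means of individual availability, copied from Table 5
A = [0.9487, 0.9265, 0.9525, 0.9536, 0.9420]

all_down = 1.0
for a in A:
    all_down *= (1.0 - a)    # probability that every drum is unavailable

A_s = 1.0 - all_down         # equation (2)
```

With these point values the product of unavailabilities is below 10⁻⁶, so A_s is extremely close to 1; the full posterior of A_s, rather than this plug-in of posterior means, is what the assessment ultimately summarizes.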

4.3 Discussion

Compared to the traditional method of assessing availability in equation (1), the proposed approach extends the method to equation (17):

A = E[TTF] / (E[TTF] + E[TTR]) = E[t|α, γ] / (E[t|α, γ] + E[t|μ, σ])    (17)

Equation (17) shows the flexibility of assessing availability according to reality. For one thing, the parametric Bayesian models using MCMC make the calculation of posteriors more feasible. More importantly, however, parametric Bayesian models can be applied to predict TTF, TTR, and system availability in the future.

In this study, since the five balling drums are relatively new, the gamma distributions and normal distributions are selected as vague priors due to lack of prior information. This could be improved with more historical data/experience.

The system configurations could be extended to other more complex architectures (series, k-out-of-n, stand-by, multi-state, or mixed) by modifying equation (2).

The data analysis reveals that, for the TTF data, the shape parameter of the Weibull distribution is less than 1. The TTFs therefore show a decreasing failure rate (as in the early stage of the bathtub curve), which does not match common experience with mechanical equipment. The TTF data include not only corrective maintenance but also preventive maintenance; in this case study, a high percentage of TTF work orders are for preventive maintenance. The decreasing trend also indicates that a possible way to improve TTF is to improve the preventive maintenance plan.

Among the three stages, Steps 1 to 4 can be treated as the Plan stage, Steps 5 and 6 as the Do and Check stages, and Step 7 as the Action stage. The outputs from Step 7 could become input for


Step 2 for the next calculation period. This means the seven steps follow the "PDCA" cycle, and the results can be continuously improved.

5. Conclusions

This study proposes a parametric Bayesian approach for system availability assessment at the operational stage. MCMC is adopted to combine the advantages of the analytical and simulation methods.

In this approach, MTTF and MTTR are treated as distributions instead of being "averaged" by point estimation. This better reflects reality; in addition, the limitations of simulation data sample size are compensated for by MCMC techniques.

In the case study, TTF and TTR are determined using a Bayesian Weibull model and a Bayesian lognormal model. The results show that the proposed approach can integrate the analytical and simulation methods for system availability assessment and could be applied to other technical problems in asset management (e.g., other industries, other systems).

Acknowledgements

The motivation for the research originated from the project “Key Performance Indicators (KPI) for control and management of maintenance process through eMaintenance (In Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)”, which was initiated and financed by LKAB. The authors wish to thank Ramin Karim, Peter Olofsson, Mats Renfors, Sylvia Simma, Maria Rytty, Mikael From and Johan Enbak, for their support for this research in the form of funding and work hours.

References

Brender, D. M., 1968. The Bayesian Assessment of System Availability: Advanced Applications and Techniques. IEEE Transactions on Reliability, 17(3), pp. 138-147.

Brender, D. M., 1968. The Prediction and Measurement of System Availability: A Bayesian Treatment. IEEE Transactions on Reliability, 17(3), pp. 127-138.

Dekker, R. & Groenendijk, W., 1995. Availability Assessment Methods and their Application in Practice. Microelectronics Reliability, 35(9-10), pp. 1257-1274.

Faghih-Roohi, S., Xie, M., Ng, K. M. & Yam, R. C., 2014. Dynamic Availability Assessment and Optimal Component Design of Multi-state Weighted k-out-of-n Systems. Reliability Engineering and System Safety, Volume 123, pp. 57-62.

Khan, M. A. & Islam, H., 2012. Bayesian Analysis of System Availability with Half-Normal Life Time. Quality Technology and Quantitative Management, 9(2), pp. 203-209.

Kuo, W., 1985. Bayesian Availability Using Gamma Distributed Priors. IIE Transactions, 17(2), pp. 132-140.

Lin, J., 2014. An Integrated Procedure for Bayesian Reliability Inference using Markov Chain Monte Carlo Methods. Journal of Quality and Reliability Engineering, Volume 2014, pp. 1-16.

Marquez, A. C., Heguedas, A. S. & Iung, B., 2005. Monte Carlo-based Assessment of System Availability. A Case Study for Cogeneration Plants. Reliability Engineering and System Safety, Volume 88, pp. 273-289.

Marquez, A. C. & Iung, B., 2007. A Structured Approach for the Assessment of System Availability and Reliability using Monte Carlo Simulation. Journal of Quality in Maintenance Engineering, 13(2), pp. 125-136.

Ocnasu, A. B., 2007. Distribution System Availability Assessment - Monte Carlo and Antithetic Variates Method. Vienna, 19th International Conference on Electricity Distribution.

Raje, D., Olaniya, R., Wakhare, P. & Deshpande, A., 2000. Availability Assessment of a Two-unit Stand-by Pumping System. Reliability Engineering and System Safety, Volume 68, pp. 269-274.

Sharma, K. & Bhutani, R., 1993. Bayesian Analysis of System Availability. Microelectronics Reliability, 33(6), pp. 809-811.

Yasseri, S. F. & Bahai, H., 2018. Availability Assessment of Subsea Distribution Systems at the Architectural Level. Ocean Engineering, Volume 153, pp. 399-411.

Zio, E., Marella, M. & Podofillini, L., 2007. A Monte Carlo Simulation Approach to the Availability Assessment of Multi-state Systems with Operational Dependencies. Reliability Engineering and System Safety, Volume 92, pp. 871-882.


Paper C

A novel Bayesian approach to system availability assessment using a threshold to censor data


Saari, E., Lin, J., Liu, B., Zhang, L.-W. & Karim, R., 2019. A novel Bayesian approach to system availability assessment using a threshold to censor data. International Journal of Performability Engineering. Published.


A novel Bayesian approach to system availability assessment using a threshold to censor data

A case study of balling drums in a mining company

Esi Saari1, Jing Lin1*, Bin Liu2, Liangwei Zhang1,3, Ramin Karim1

1. Division of Operation and Maintenance Engineering, Luleå University of Technology, Luleå, Sweden; 2. Department of Management Science, University of Strathclyde, Glasgow, UK; 3. Department of Industrial Engineering, Dongguan University of Technology, Dongguan, China.

*Corresponding author; E-mail address: [email protected]

Abstract: Assessment of system availability has been studied from the design stage to the operational stage in various system configurations using either analytic or simulation techniques. However, the former cannot handle complicated state changes and the latter is computationally expensive. This study proposes a Bayesian approach to evaluate system availability. In this approach: 1) Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being "averaged" to better describe real scenarios and overcome the limitations of data sample size; 2) Markov Chain Monte Carlo (MCMC) simulations are applied to combine the advantages of the analytical and simulation methods; 3) a threshold is set up for Time to Failure (TTF) data and Time to Repair (TTR) data, and new datasets with right-censored data are created to reveal the connections between technical and "soft" KPIs. To demonstrate the approach, the paper considers a case study of a balling drum system in a mining company. In this system, MTTF and MTTR are determined by a Bayesian Weibull model and a Bayesian lognormal model respectively. The results show that the proposed approach can integrate the analytical and simulation methods to assess system availability and could be applied to other technical problems in asset management (e.g., other industries, other systems). By comparing the results with and without considering the threshold for censoring data, we show the threshold can be used as a monitoring line for continuous improvement in the investigated mining company.

Keywords: System availability, Bayesian statistics, Gibbs sampling, Kaplan-Meier estimation, mining industry.

1. Introduction

Availability, commonly measured as Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR), is one of the most critical aspects of performance evaluation. Approaches to assessing system availability mainly use either analytic or simulation techniques (note: PC tools and databases are other options, but are not part of this research).

Simulation techniques estimate availability by simulating the actual process and random behaviour of the system. The advantage is that non-Markov failures and repair processes can be modelled easily (Raje, et al., 2000) (Marquez, et al., 2005) (Marquez & Iung, 2007) (Yasseri & Bahai, 2018), as can multi-state systems with operational dependencies (Zio, et al., 2007). Although simulation is more flexible, it is computationally expensive. In general, analytic techniques represent the system using mathematical solutions from applied probability theory to make


statements on various performance measures (Dekker & Groenendijk, 1995) (Ocnasu, 2007) (Marquez, et al., 2005) (Faghih-Roohi, et al., 2014). However, such approaches have been criticised as too restrictive to tackle practical problems; they assume constant failure and repair rates, and this is not likely to be the case in the real world. Furthermore, the time-dependent availability obtained by a Markovian assumption (a common analytic technique) is actually not valid for non-Markovian processes (Raje, et al., 2000). Traditionally, Bayesian statistical approaches have been used to assess system availability as they can solve the problem of complicated system state changes and computationally expensive simulation data, but they require strict assumptions on prior forms and can be computationally difficult. Bayesian research is often more concerned with the prior's selection or the posterior's computation than with reality (Brender, 1968) (Brender, 1968) (Kuo, 1985) (Sharma & Bhutani, 1993) (Khan & Islam, 2012).

This study proposes a novel Bayesian approach to system availability assessment, combining analytic and simulation techniques. In the proposed approach: 1) Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) are treated as distributions instead of being “averaged” to better reflect reality and compensate for the limitations of simulation data sample size; 2) Markov Chain Monte Carlo (MCMC) simulations are used to take advantage of both analytical and simulation methods (Lin, 2014); 3) a threshold is established for Time to Failure (TTF) data and Time to Repair (TTR) data; new datasets created with right-censored data reveal the connections between technical and “soft” KPIs.

The rest of this paper is organized as follows. Section 2 explains the three stages of the proposed Bayesian approach. Section 3 describes Stage I, pre-analysis, including the configuration of a balling drum system in a case study mine, data collection and preparation, and the preliminary analysis of failure and repair data. Section 4 presents Stage II; it proposes a Bayesian Weibull model for MTTF and a Bayesian lognormal model for MTTR considering right-censored data and explains how to use an MCMC computational scheme to obtain the posterior distributions. Section 5 explains Stage III, the assessment of system availability. Section 6 presents and assesses the results of a case study and then compares results with and without considering the data censored by the threshold. Section 7 features a discussion, while Section 8 provides conclusions and makes suggestions for further study.

2. A general procedure

The proposed Bayesian approach to system availability has seven steps divided into three stages (see Table 1): 1) in Stage I, we perform pre-analysis; 2) in Stage II, we create the analytic models (Bayesian) and simulation models (MCMC); 3) in Stage III, we assess system availability.

The seven steps follow a “PDCA” cycle; those in Stage I can be treated as the Plan stage, Stage II as the Do and Check stage, and Stage III as the Action stage. The outputs from Stage III could become input for Stage I for the next calculation period, so the results can be continuously improved.

To accomplish step 2, prior information can come from: 1) engineering design data; 2) component test data; 3) system test data; 4) operational data from similar systems; 5) field data in various environments; 6) computer simulations; 7) related standards and operation manuals; 8)


experience data from similar systems; 9) expert judgment and personal experience. Of these, the first seven yield objective prior data, and the last two provide subjective prior data. Prior data also take a variety of forms, including reliability data, the distribution of reliability parameters, moments, confidence intervals, quantiles, upper and lower limits, etc. (Lin, 2014).

In step 3, a threshold is set up according to the asset management goals connected with the organization's business goals (see later sections for a more detailed discussion). In step 4, various types of priors can be used because of the flexibility of MCMC; in this study, since the balling drums in the case study mine are quite new, we adopt vague priors. In step 5, the likelihood function can differ according to the types of censored/truncated data, while the Bayesian analytics can differ according to the preliminary baseline analysis of TTF and TTR. In step 6, checking the MCMC simulation can follow Lin (2014). In step 7, system availability can also be described by an empirical distribution instead of an analytical one.

Table 1. A general procedure

Stage | Step | Name | Description
I | 1 | Configuration determination | Determine dependencies among units and the system configuration.
I | 2 | Data collection | Collect prior information and event data, including reliability and maintenance data.
I | 3 | Data preparation | Clean data and remove outliers as needed. Set up a threshold for censoring data.
I | 4 | Preliminary analysis | Determine the distribution of prior information, TTF, and TTR for the Bayesian analytics in step 5.
II | 5 | Bayesian analytic modelling | According to steps 3 and 4, determine the likelihood function and the Bayesian analytic models.
II | 6 | MCMC simulation | Define burn-in and implement MCMC simulation; perform convergence diagnostics and check Monte Carlo error to confirm the effectiveness of the results. If not passed, go back to steps 4 and 5; if passed, go to step 7.
III | 7 | Assessment | According to the simulation results for the Bayesian analytic models and the system configuration, determine the distributions of TTF and TTR and assess system availability. The next calculation period can start again with prior information collection in step 2.

 

3. Stage I: Pre-analysis

3.1 Configuration of the balling drum system

The case study mine has five balling drums, labelled 1-5. All five balling drums receive their feed for production in the same manner, and each balling drum is expected to produce the same amount of pellets at its maximum. According to the company, the balling drums are independent; if one breaks down, it does not affect the rest. One assumption is made here that the system will fail only if all subsystems fail; therefore, it is treated as a parallel system (Figure 1).


Figure 1 Description of a balling drum and the system sketch

The availability of a single balling drum, denoted as A_i, can be computed by

A_i = MTTF / (MTTF + MTTR)    (1)

The total system availability for this parallel configuration, A_s, can be calculated as

A_s = 1 − ∏_{i=1}^{5} (1 − A_i)    (2)
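Equations (1) and (2) can be sketched in a few lines of Python; the MTTF/MTTR numbers below are illustrative placeholders, not the study's estimates:

```python
def unit_availability(mttf, mttr):
    """A_i = MTTF / (MTTF + MTTR), equation (1)."""
    return mttf / (mttf + mttr)

def parallel_availability(unit_availabilities):
    """A_s = 1 - prod(1 - A_i), equation (2): the system fails only if all units fail."""
    prod = 1.0
    for a in unit_availabilities:
        prod *= (1.0 - a)
    return 1.0 - prod

# five identical drums with placeholder MTTF = 150 h, MTTR = 2.5 h
units = [unit_availability(150.0, 2.5) for _ in range(5)]
print(parallel_availability(units))
```

Because the five drums are in parallel, even moderate unit availabilities push the system availability very close to 1.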

 

3.2 Data collection and data preparation

The study uses the failure and repair data of the five balling drums from January 2013 to December 2018. There are 1782 records. In the first step of data preparation, the null values are removed, and the data are reduced to 1774 records.

In the next step, we look for normal and abnormal values of the TTF and TTR of the individual balling drums. If, for example, values up to 150 are considered normal, then those exceeding 150 are abnormal, and 150 is denoted as a threshold, as shown in Figure 2. The work orders show that most of these abnormal shutdowns are related to "preventive maintenance" and may simply reflect a lack of maintenance resources. To simplify the study, we assume that the maintenance resources are not always sufficient for "preventive maintenance"; thus, the abnormal data may reflect a shortage of spare parts or skilled personnel.


Figure 2 Example of TTR data for balling drum 1

To establish a more reasonable TTR threshold than the 150 shutdowns, we perform a Pareto analysis for all balling drums. The results are shown in Figure 3. According to the figure, if the threshold is set according to the "80-20" rule, the data can be censored at six hours; this covers almost 80% of the data. Therefore, we create a new dataset with TTR censored at six hours.
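The threshold selection can be sketched as picking the empirical 80th percentile of the TTR records; the sample below is synthetic, chosen only to mimic a heavy-tailed repair-time distribution (the study's real data yield roughly six hours):

```python
import random

random.seed(1)
# hypothetical TTR sample (hours): lognormal-ish with a heavy right tail
ttr = [random.lognormvariate(0.5, 0.9) for _ in range(1000)]

def pareto_threshold(values, coverage=0.8):
    """Smallest value t such that at least `coverage` of the data are <= t."""
    s = sorted(values)
    idx = int(coverage * len(s)) - 1
    return s[max(idx, 0)]

t = pareto_threshold(ttr, 0.8)
share = sum(1 for x in ttr if x <= t) / len(ttr)
print(t, share)  # share is ~0.8 by construction
```

The same function with `coverage=0.7` or `coverage=0.9` implements the alternative thresholds discussed later in Section 7.3.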

   

Figure 3 Pareto analysis for TTR of five balling drums

In addition, we make the following assumptions:

1. Abnormal TTR values exceeding six hours could be improved by implementing maintenance improvements, including Root Cause Analysis (RCA), maintenance resource improvement, etc. The goal is to reduce the TTR values exceeding six hours. However, we do not know by how much they can be reduced. Therefore, those values are considered right-censored at six hours;

2. The preventive maintenance plan is not changed. Thus, if one TTR is treated as censored, then in the corresponding maintenance interval, the Time between Failures (TBF), which equals TTF plus TTR, will not change significantly, and the TTF could be longer than in the collected data. However, we do not know how much longer the TTF could be. Therefore, the TTF data can also be treated as right-censored. The difference from the censored TTR data is that the corresponding TTF data are right-censored at the original value instead of at a new value (see Figure 4).

 

Figure 4 Data censored under assumptions

We use Figure 4 to illustrate assumption 2. TBF equals the time between t_1 and t_3. TTR = t_2 − t_1 might be larger than six hours, but it is right-censored at six; the original TTR is then denoted as six with a right-censoring indicator. Since TBF = t_3 − t_1 will not change, the corresponding TTF′ = t_3 − t_1 − 6 will be longer than TTF = t_3 − t_2. However, according to assumption 2, we do not know how much longer; therefore, TTF′ is denoted as right-censored data with an original value equal to t_3 − t_2.

After this step, the censored TTF and TTR data represent a total of 20% of all data.
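The two assumptions can be sketched as a small transformation applied to each failure/repair record; the function and record layout below are hypothetical:

```python
THRESHOLD = 6.0  # hours, from the Pareto "80-20" analysis

def censor_record(ttf, ttr, threshold=THRESHOLD):
    """Apply assumptions 1-2 to one (TTF, TTR) pair.

    Returns (ttf, ttf_censored, ttr, ttr_censored). If TTR exceeds the
    threshold, TTR is right-censored at the threshold (assumption 1) and
    the paired TTF is kept at its observed value but flagged as
    right-censored (assumption 2, since TBF = TTF + TTR is assumed fixed).
    """
    if ttr > threshold:
        return (ttf, True, threshold, True)
    return (ttf, False, ttr, False)

print(censor_record(140.0, 9.5))   # -> (140.0, True, 6.0, True)
print(censor_record(140.0, 2.0))   # -> (140.0, False, 2.0, False)
```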

3.3 Preliminary analysis

To determine the baseline distribution of TTR and TTF, we conduct a preliminary study of failure data and repair data using traditional analysis. We consider the following distributions: exponential distribution, Weibull distribution, normal distribution, log-logistic distribution, lognormal distribution, and extreme value distribution. Table 2 lists the results, including the goodness-of-fit using Anderson-Darling (AD) statistics.

Table 2 Preliminary study of failure data and repair data

Balling drum   TTF fit (1st)   AD      TTF fit (2nd)   AD       TTR fit (1st)   AD       TTR fit (2nd)   AD
1              Weibull         1.976   Lognormal       11.276   Lognormal       10.068   Weibull         14.607
2              Weibull         1.796   Lognormal       8.274    Lognormal       11.144   Weibull         14.302
3              Weibull         2.115   Lognormal       10.499   Lognormal       8.698    Weibull         14.332
4              Weibull         1.196   Lognormal       6.366    Lognormal       9.245    Weibull         13.106
5              Weibull         2.148   Lognormal       14.416   Lognormal       7.533    Weibull         11.933


Based on the results, we select the Weibull distribution for the TTF and the lognormal distribution for the TTR and apply these to their respective parametric Bayesian models with censored data, as explained in the next section.
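A fit comparison of this kind can be sketched with MLE fits from SciPy and a hand-computed Anderson-Darling statistic; the data below are a synthetic stand-in (the study's own AD values come from its real records):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ttf = rng.weibull(0.6, 500) * 10.0  # hypothetical TTF sample, Weibull-like

def ad_statistic(data, cdf):
    """Anderson-Darling statistic for a fully specified CDF."""
    x = np.sort(data)
    n = len(x)
    f = np.clip(cdf(x), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1 - f[::-1])))

candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
}
scores = {}
for name, dist in candidates.items():
    params = dist.fit(ttf, floc=0)  # MLE fit with location fixed at zero
    scores[name] = ad_statistic(ttf, lambda x: dist.cdf(x, *params))
print(min(scores, key=scores.get))  # candidate with the smallest AD statistic
```

With real censored data, the AD comparison is only a screening step; the censoring itself is handled in the Bayesian models of Stage II.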

4. Stage II: Analytic and simulation models

This section elaborates on the analytic and simulation models described in Stage II. It proposes a Bayesian Weibull model for TTF and a Bayesian lognormal model for TTR and explains how to use an MCMC computational scheme to obtain the posterior distributions considering right-censored data.

4.1 Markov Chain Monte Carlo with Gibbs sampling

The recent proliferation of Markov Chain Monte Carlo (MCMC) approaches has led to the use of Bayesian inference in a wide variety of fields. MCMC is essentially Monte Carlo integration using Markov chains. Monte Carlo integration draws samples from the required distribution and then forms sample averages to approximate expected values. MCMC draws these samples by running a cleverly constructed Markov chain for a long time. There are many ways of constructing these chains. The Gibbs sampler is one of the best-known MCMC sampling algorithms in the Bayesian computational literature. In this method, when one parameter is updated, the other parameters are assumed to be fixed and known. Let θ = (θ_1, …, θ_k) be a k-dimensional vector of parameters, and let f(θ_j | ·) denote the full conditional distribution of the j-th parameter. The basic scheme of the Gibbs sampler for sampling from p(θ) comprises the following steps:

Step 1. Choose an arbitrary starting point θ^(0) = (θ_1^(0), …, θ_k^(0));
Step 2. Generate θ_1^(1) from the conditional distribution f(θ_1 | θ_2^(0), …, θ_k^(0)), and generate θ_2^(1) from the conditional distribution f(θ_2 | θ_1^(1), θ_3^(0), …, θ_k^(0));
Step 3. Generate θ_j^(1) from f(θ_j | θ_1^(1), …, θ_{j−1}^(1), θ_{j+1}^(0), …, θ_k^(0));
Step 4. Generate θ_k^(1) from f(θ_k | θ_1^(1), …, θ_{k−1}^(1)); the one-step transition from θ^(0) to θ^(1) = (θ_1^(1), …, θ_k^(1)) is now complete, where θ^(1) is one realization of the Markov chain;
Step 5. Go to Step 2.

After t iterations, θ^(t) = (θ_1^(t), …, θ_k^(t)) is obtained, along with each of its components. Starting from different θ^(0), as t → ∞, the marginal distribution of θ^(t) can be viewed as a stationary distribution based on the theory of the ergodic average. The chain is then regarded as converged, and the sampled points are treated as observations from the target distribution.
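To make the scheme concrete, the following sketch runs a Gibbs sampler on a deliberately simple case with closed-form full conditionals (normal data with unknown mean and precision under vague priors); the Weibull and lognormal models of this paper need the same loop with their own conditionals:

```python
import random

random.seed(42)
data = [random.gauss(5.0, 2.0) for _ in range(200)]  # synthetic observations
n, xbar = len(data), sum(data) / len(data)

def ss(m):
    """Sum of squared deviations around a candidate mean m."""
    return sum((x - m) ** 2 for x in data)

mu, tau = 0.0, 1.0          # step 1: arbitrary starting point
draws = []
for it in range(2000):      # steps 2-5: cycle through the full conditionals
    # mu | tau, data ~ Normal(xbar, 1 / (n * tau)) under a flat prior
    mu = random.gauss(xbar, (1.0 / (n * tau)) ** 0.5)
    # tau | mu, data ~ Gamma(n / 2, rate ss(mu) / 2) under a vague prior
    tau = random.gammavariate(n / 2.0, 2.0 / ss(mu))
    if it >= 500:           # discard the burn-in portion of the chain
        draws.append((mu, tau))

post_mu = sum(m for m, _ in draws) / len(draws)
print(round(post_mu, 2))    # close to the true mean 5.0
```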

4.2 Likelihood construction for right-censored data

In practice, lifetime data are usually incomplete, and only a portion of the individual lifetimes of assets are known. Right-censored data are often called Type I censoring in the literature; the corresponding likelihood construction problem has been extensively studied. The right-censored data of this study are illustrated in Figure 4.

Suppose there are n individuals whose lifetimes and censoring times are independent. The i-th individual has lifetime T_i and censoring time L_i. The T_i's are assumed to have probability density function f(t) and reliability function R(t). The exact lifetime T_i of an individual is observed only if T_i ≤ L_i. The lifetime data involving right censoring can be conveniently represented by n pairs of random variables (t_i, v_i), where t_i = min(T_i, L_i), and v_i = 1 if T_i ≤ L_i and v_i = 0 if T_i > L_i. That is, v_i indicates whether the lifetime T_i is censored or not. The likelihood function is deduced as

L(t) = ∏_{i=1}^{n} f(t_i)^{v_i} R(t_i)^{1−v_i}    (3)

4.3 Bayesian modelling for TTF

Suppose the Time to Failure (TTF) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., each following a 2-parameter Weibull distribution W(α, γ), where α > 0 and γ > 0. Then the p.d.f. is f(t_i | α, γ) = αγ t_i^{α−1} exp(−γ t_i^α), the c.d.f. is F(t_i | α, γ) = 1 − exp(−γ t_i^α), and the reliability function is R(t_i | α, γ) = exp(−γ t_i^α).

Let v = (v_1, v_2, …, v_n) indicate whether each lifetime is right-censored or not, and let the observed dataset be denoted as D_0 = (n, t, v), following equation (3). The likelihood function for α and γ is then

L(α, γ | D_0) = α^{Σ v_i} exp{ Σ_{i=1}^{n} [ v_i ln γ + v_i (α − 1) ln t_i − γ t_i^α ] }    (4)

In this study, we take α and γ to be independent. Furthermore, we assume a gamma prior distribution G(a_0, b_0) for α, written as π(α | a_0, b_0), and a gamma prior distribution G(c_0, d_0) for γ, written as π(γ | c_0, d_0). This means

π(α | a_0, b_0) ∝ α^{a_0 − 1} exp(−b_0 α)    (5)

π(γ | c_0, d_0) ∝ γ^{c_0 − 1} exp(−d_0 γ)    (6)

Therefore, the joint posterior distribution can be obtained according to equations (4) to (6) as

π(α, γ | D_0) ∝ L(α, γ | D_0) π(α | a_0, b_0) π(γ | c_0, d_0)    (7)

The parameters' full conditional distributions for Gibbs sampling can be written as

π(α | γ, D_0) ∝ L(α, γ | D_0) α^{a_0 − 1} exp(−b_0 α)    (8)

π(γ | α, D_0) ∝ L(α, γ | D_0) γ^{c_0 − 1} exp(−d_0 γ)    (9)
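The full conditionals (8) and (9) are not standard distributions, so in practice they are sampled with Metropolis steps inside the Gibbs loop (general-purpose MCMC software handles this internally). A hedged sketch on synthetic, right-censored Weibull data:

```python
import math
import random

random.seed(7)

# hypothetical data: times t with censoring indicators v (1 = observed)
raw = [random.weibullvariate(10.0, 0.6) for _ in range(200)]
v = [0 if x > 30.0 else 1 for x in raw]   # administratively censor at 30 h
t = [min(x, 30.0) for x in raw]

def log_post(alpha, gamma):
    """Log of eq. (7): censored Weibull likelihood (4) times vague gamma priors."""
    if alpha <= 0 or gamma <= 0:
        return -math.inf
    ll = sum(vi * (math.log(alpha) + math.log(gamma) + (alpha - 1.0) * math.log(ti))
             - gamma * ti ** alpha
             for ti, vi in zip(t, v))
    # G(0.0001, 0.0001) vague priors on alpha and gamma
    prior = (0.0001 - 1) * (math.log(alpha) + math.log(gamma)) - 0.0001 * (alpha + gamma)
    return ll + prior

alpha, gamma, chain = 1.0, 0.1, []
for it in range(3000):
    for which in (0, 1):
        # multiplicative (log-normal) random-walk proposal for one parameter
        cur = (alpha, gamma)
        step = math.exp(random.gauss(0.0, 0.1))
        new = (cur[0] * step, cur[1]) if which == 0 else (cur[0], cur[1] * step)
        # Metropolis-Hastings ratio, with the Jacobian term for the log-walk
        log_ratio = (log_post(*new) - log_post(*cur)
                     + math.log(new[which]) - math.log(cur[which]))
        if math.log(random.random()) < log_ratio:
            alpha, gamma = new
    if it >= 500:
        chain.append((alpha, gamma))

post_alpha = sum(a for a, _ in chain) / len(chain)
print(round(post_alpha, 2))  # should sit near the true shape 0.6
```

The lognormal TTR model of Section 4.4 follows the same pattern with `log_post` replaced by the log of equation (15).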


4.4 Bayesian modelling for TTR

Suppose the Time to Repair (TTR) data t = (t_1, t_2, ⋯, t_n) for n individuals are i.i.d., and each ln t_i follows a normal distribution N(μ, σ). Then t_i follows a lognormal distribution with parameters μ and σ, denoted by LN(μ, σ). The p.d.f. and c.d.f. are given by equations (10) and (11):

f(t_i | μ, σ) = 1 / (√(2π) σ t_i) · exp[ −(1 / 2σ²)(ln t_i − μ)² ]    (10)

F(t_i | μ, σ) = Φ( (ln t_i − μ) / σ )    (11)

The likelihood function for μ and σ, considering the censoring indicators v = (v_1, v_2, …, v_n) and the observed dataset D_0 = (n, t, v), becomes

L(μ, σ | D_0) = (2πσ²)^{−Σ v_i / 2} exp[ −(1 / 2σ²) Σ_{i=1}^{n} v_i (ln t_i − μ)² ] ∏_{i=1}^{n} t_i^{−v_i} [ 1 − Φ( (ln t_i − μ) / σ ) ]^{1−v_i}    (12)

In this study, we assume a normal prior distribution N(e_0, f_0) for μ, written as π(μ | e_0, f_0), and a gamma prior distribution G(g_0, h_0) for σ, written as π(σ | g_0, h_0). This means

π(μ | e_0, f_0) ∝ f_0^{1/2} exp[ −(f_0 / 2)(μ − e_0)² ]    (13)

π(σ | g_0, h_0) ∝ σ^{g_0 − 1} exp(−h_0 σ)    (14)

 

Therefore, the joint posterior distribution can be obtained according to equations (12) to (14) as

π(μ, σ | D_0) ∝ L(μ, σ | D_0) π(μ | e_0, f_0) π(σ | g_0, h_0)    (15)

The parameters' full conditional distributions for Gibbs sampling can be written as

π(μ | σ, D_0) ∝ L(μ, σ | D_0) f_0^{1/2} exp[ −(f_0 / 2)(μ − e_0)² ]    (16)

π(σ | μ, D_0) ∝ L(μ, σ | D_0) σ^{g_0 − 1} exp(−h_0 σ)    (17)

5. Stage III: Assessment

According to the results from Stage II, the distributions of TTF and TTR can be obtained separately for balling drums 1 to 5. Compared to the traditional method of assessing availability in equation (1), the proposed approach extends the method to equation (18), where

A = E[f(TTF)] / ( E[f(TTF)] + E[f(TTR)] ) = E[f(t | α, γ)] / ( E[f(t | α, γ)] + E[f(t | μ, σ)] )    (18)

Equation (18) shows the flexibility of assessing availability according to reality. In particular, the parametric Bayesian models using MCMC make the calculation of the posteriors more feasible.

Based on the system configuration determined in Stage I, and using the results from Stage II for 𝑊 𝛼, 𝛾 , 𝐿𝑁 𝜇, 𝜎 and equation (18), TTF, TTR, and system availability can be assessed.

System availability can be computed via the TTF and TTR. However, according to equation (18), we cannot obtain a closed-form distribution of system availability. Therefore, we use an empirical distribution instead of an analytical one. As illustrated in the case study, the Kaplan-Meier estimate can be used as the empirical 𝑐. 𝑑. 𝑓.
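A minimal sketch of this empirical assessment for drum 1, using the posterior means reported in the case study below, and assuming (as the MTTR means suggest) that the second lognormal parameter is a WinBUGS-style precision rather than a standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# TTF ~ W(alpha, gamma) with survival exp(-gamma * t**alpha); sample by
# inverting the c.d.f. (parameter values are drum 1's posterior means)
alpha, gamma = 0.5399, 0.0934
u = rng.uniform(size=N)
ttf = (-np.log(1.0 - u) / gamma) ** (1.0 / alpha)

# TTR ~ LN(mu, tau); tau is read here as a precision (an assumption),
# so the log-scale standard deviation is 1 / sqrt(tau)
mu, tau = -0.4501, 0.3585
ttr = rng.lognormal(mu, 1.0 / np.sqrt(tau), size=N)

avail = ttf / (ttf + ttr)        # per-sample availability, eq. (18) style
print(float(np.median(avail)))   # an empirical summary; the full empirical
                                 # c.d.f. is what Figure 8 plots
```

Under the precision reading, the implied mean TTR is exp(μ + 1/(2τ)) ≈ 2.6 hours, which is consistent with the MTTR reported for drum 1 in Table 6.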

6. Case study

In this case study of five balling drums, the Markov chain is constructed for each MCMC simulation. A burn-in of 1000 samples is used, with an additional 10,000 Gibbs samples for each Markov chain.

Vague prior distributions are adopted as follows:

For the Bayesian Weibull model using TTF data:

α ~ G(0.0001, 0.0001), γ ~ G(0.0001, 0.0001);

For the Bayesian lognormal model using TTR data:

μ ~ N(0, 0.0001), σ ~ G(0.0001, 0.0001).

6.1 Results

Table 3 Posterior statistics in Bayesian Weibull model with censored TTF data

Balling drum   Parameter   Mean     SD       MC error   95% HPD interval
1              α           0.5399   0.0235   4.34E-4    (0.4954, 0.5870)
               γ           0.0934   0.0122   2.26E-4    (0.0710, 0.1186)
2              α           0.5721   0.0289   6.25E-4    (0.5159, 0.6295)
               γ           0.0651   0.0110   2.39E-4    (0.0459, 0.0890)
3              α           0.5781   0.0251   5.08E-4    (0.5299, 0.6281)
               γ           0.0742   0.0104   2.09E-4    (0.0555, 0.0961)
4              α           0.5713   0.0252   5.14E-4    (0.5228, 0.6210)
               γ           0.0763   0.0109   2.22E-4    (0.0569, 0.0992)
5              α           0.5601   0.0219   3.95E-4    (0.5176, 0.6038)
               γ           0.0940   0.0111   1.99E-4    (0.0735, 0.1175)


Using convergence diagnostics (i.e. checking dynamic traces of the Markov chains, examining time series and Gelman-Rubin-Brooks (GRB) statistics, and comparing the MC error with the standard deviation (SD)) (Lin, 2014), we obtain the posterior statistics shown in Table 3 and Table 4, including each parameter's posterior mean, SD, Monte Carlo error (MC error), and 95% highest posterior density (HPD) interval.

Table 4 Posterior statistics in Bayesian lognormal model with censored TTR data

Balling drum   Parameter   Mean      SD       MC error   95% HPD interval
1              μ           -0.4501   0.0882   4.98E-4    (-0.6250, -0.2776)
               σ           0.3585    0.0267   1.50E-4    (0.3078, 0.4125)
2              μ           -0.3825   0.1082   6.24E-4    (-0.5959, -0.1719)
               σ           0.3277    0.0285   1.56E-4    (0.2742, 0.3853)
3              μ           -0.4510   0.0839   5.10E-4    (-0.6176, -0.2871)
               σ           0.4041    0.0305   1.80E-4    (0.3463, 0.4660)
4              μ           -0.6124   0.0907   5.29E-4    (-0.7924, -0.4351)
               σ           0.3516    0.0266   1.49E-4    (0.3010, 0.4057)
5              μ           -0.6023   0.0812   4.72E-4    (-0.7633, -0.4432)
               σ           0.3524    0.0238   1.39E-4    (0.3072, 0.4007)

6.2 Assessment

Using the results from Table 3 and Table 4 for balling drums 1 to 5, we derive the distributions of TTF and TTR as shown in Table 5.

Table 5 Statistics of individual balling drums with censored data

Balling drum   TTF: W(α, γ)         TTR: LN(μ, σ)          Availability
1              W(0.5399, 0.0934)    LN(-0.4501, 0.3585)    1 / (1 + LN(μ, σ) / W(α, γ))
2              W(0.5721, 0.0651)    LN(-0.3825, 0.3277)    1 / (1 + LN(μ, σ) / W(α, γ))
3              W(0.5781, 0.0742)    LN(-0.4510, 0.4041)    1 / (1 + LN(μ, σ) / W(α, γ))
4              W(0.5713, 0.0763)    LN(-0.6124, 0.3516)    1 / (1 + LN(μ, σ) / W(α, γ))
5              W(0.5601, 0.0940)    LN(-0.6023, 0.3524)    1 / (1 + LN(μ, σ) / W(α, γ))

Using the results in Table 5, we create p.d.f. and c.d.f. charts of the TTF and TTR data in Figure 5 and Figure 6.


(a) 𝑝. 𝑑. 𝑓. of TTF (b) 𝑝. 𝑑. 𝑓. of TTR

Figure 5 𝑝. 𝑑. 𝑓. of TTF and TTR

(a) 𝑐. 𝑑. 𝑓. of TTF (b) 𝑐. 𝑑. 𝑓. of TTR

Figure 6 𝑐. 𝑑. 𝑓. of TTF and TTR

As discussed above, system availability can be computed via the TTF and TTR, but we cannot obtain a closed-form distribution of system availability. Therefore, we use an empirical distribution instead of an analytical one. We generate 10,000 samples from the distributions of TTF and TTR and calculate the associated availability. Figure 7 presents the histograms of the availability of the five balling drums. We use the Kaplan-Meier estimate as the empirical c.d.f.; Figure 8 shows the empirical distribution of the availability of the five balling drums.


Balling drum 1 Balling drum 2

Balling drum 3 Balling drum 4

Balling drum 5

Figure 7 Histogram plot of availability

Figure 8 Empirical 𝑐. 𝑑. 𝑓. of availability

Table 6 Statistics of individual balling drums with censored data

Balling drum   MTTF mean   MTTF 95% HPD     MTTR mean   MTTR 95% HPD     Avail. mean   Avail. 95% HPD
1              145.0       (118.4, 178.2)   2.616       (2.000, 3.437)   0.9821        (0.9753, 0.9873)
2              197.0       (157.6, 247.5)   3.223       (2.301, 4.540)   0.9837        (0.9759, 0.9893)
3              146.0       (120.7, 177.0)   2.239       (1.741, 2.864)   0.9848        (0.9795, 0.9890)
4              149.0       (122.5, 181.8)   2.289       (1.736, 3.041)   0.9847        (0.9788, 0.9891)
5              115.0       (96.40, 137.5)   2.296       (1.796, 2.958)   0.9803        (0.9736, 0.9855)


We calculate the availability of the individual balling drums in Table 6, where MTTF = E[f(t | α, γ)] and MTTR = E[f(t | μ, σ)].

According to equation (2), the system availability of the five balling drums is

A_s = 1 − ∏_{i=1}^{5} (1 − A_i) > 0.99

 

7. Discussion

7.1 A comparison study

For comparative purposes, Table 7 and Table 8 show the statistics of the individual balling drums with no censored data; here, all TTF and TTR data collected in Stage I are treated as reasonable and requiring no improvement.

Table 7 Statistics of individual balling drums with no censored data

Balling drum   TTF: W(α, γ)         TTR: LN(μ, σ)          Availability
1              W(0.5409, 0.0928)    LN(-0.1842, 0.2270)    1 / (1 + LN(μ, σ) / W(α, γ))
2              W(0.5747, 0.0642)    LN(0.0075, 0.1861)     1 / (1 + LN(μ, σ) / W(α, γ))
3              W(0.5975, 0.0712)    LN(-0.4574, 0.2196)    1 / (1 + LN(μ, σ) / W(α, γ))
4              W(0.5745, 0.0750)    LN(-0.3540, 0.2184)    1 / (1 + LN(μ, σ) / W(α, γ))
5              W(0.5660, 0.0958)    LN(-0.3484, 0.2195)    1 / (1 + LN(μ, σ) / W(α, γ))

 

Table 8 Statistics of individual balling drums with no censored data

Balling drum   MTTF mean   MTTF 95% HPD     MTTR mean   MTTR 95% HPD     Avail. mean   Avail. 95% HPD
1              145.0       (118.1, 178.0)   7.779       (5.284, 11.58)   0.9487        (0.9229, 0.9665)
2              196.4       (157.7, 256.0)   15.48       (8.927, 26.60)   0.9265        (0.8766, 0.9582)
3              128.7       (127.9, 155.0)   6.381       (6.194, 9.622)   0.9525        (0.9538, 0.9693)
4              148.5       (122.5, 180.3)   7.178       (4.755, 10.86)   0.9536        (0.9291, 0.9702)
5              115.8       (115.1, 139.0)   7.083       (6.926, 10.22)   0.9420        (0.9433, 0.9610)

For convenience, the results are also listed in Table 9.

Table 9 Comparison of statistics with and without censored data

Balling drum   Mean of MTTF                Mean of MTTR                Mean of Availability
               No cens.   Cens.    %       No cens.   Cens.    %       No cens.   Cens.    %
1              145.0      145.0    0       7.779      2.616    66.37   0.9487     0.9821   3.52
2              196.4      197.0    0.30    15.48      3.223    79.18   0.9265     0.9837   6.17
3              128.7      146.0    13.4    6.381      2.239    64.91   0.9525     0.9848   3.39
4              148.5      149.0    0.33    7.178      2.289    68.11   0.9536     0.9847   3.26
5              115.8      115.0    0       7.083      2.296    67.58   0.9420     0.9803   4.07


In Table 9, "%" denotes the percentage change after considering the censored data. For instance, for balling drum 1, the mean MTTF does not change, the MTTR improves by 66.37%, and the availability improves by 3.52%.

According to the results from Table 9, if 20% of the abnormal TTR data could be improved (for instance, by applying RCA activities, or more specifically, by improving maintenance resource management, including maintenance skills, spare parts, etc.), the TTR could be improved by 66.37%, 79.18%, 64.91%, 68.11%, and 67.58% for drums 1 to 5, respectively. Meanwhile, the availability would be improved by 3.52%, 6.17%, 3.39%, 3.26%, and 4.07% for drums 1 to 5, respectively.
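The percentage columns can be verified with a one-line calculation; drum 1's values from Table 9 serve as the example:

```python
def pct_change(no_cens, cens):
    """Change relative to the no-censoring value, in percent."""
    return (cens - no_cens) / no_cens * 100.0

mttr_change = pct_change(7.779, 2.616)     # negative: a reduction in MTTR
avail_change = pct_change(0.9487, 0.9821)  # positive: an availability gain
print(round(-mttr_change, 2), round(avail_change, 2))  # 66.37 3.52
```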

The improvement in the TTF is not as impressive. We treat the TTRs as right-censored under the assumption that they can be improved (censored at six hours), but the corresponding TTFs can only be marked as censored at their original values instead of at some specified value, under the assumption that the maintenance interval will not change much. This implies that if the maintenance interval (for instance, for preventive maintenance) could be improved, the TTFs could be improved (censored at a larger value), thus improving the availability.

7.2 Connection between technical and "soft" KPIs

In the studied company, Key Performance Indicators (KPIs) are divided into two groups: technical KPIs and soft KPIs. The former are related to the performance of equipment, whilst the latter focus on maintenance management.

In this case, the abnormal values of TTR are assumed to be mainly caused by lack of maintenance resources, including personnel with suitable skills, spare parts, etc. KPIs of maintenance resources are treated as “soft” KPIs in the company. Therefore, using our comparative approach, we could easily find out how the technical KPIs (TTF, availability of assets) would be influenced by improving “soft” KPIs.

7.3 Application of the threshold as a monitoring line

In this study, the threshold for abnormal TTR values in the work orders is determined by an "80-20" rule in a Pareto analysis, in which a TTR value exceeding six hours is treated as abnormally long and should be improved by RCA activities, including improving maintenance resource management.

Actually, the threshold could be determined by the company according to its business goals; for instance, it could be set at 70% or 90%, or according to other rules combined with business goals. The threshold could also be changed gradually to improve maintenance step by step, following a PDCA process. In other words, the so-called abnormal data are not really abnormal. Finally, the threshold could be treated as a monitoring line, permitting dynamic monitoring of system availability.


7.4 Further research

In this study, since the five balling drums are relatively new, gamma and normal distributions are selected as vague priors due to the lack of real prior information. This could be improved with more historical data and experience.

The system configurations could be extended to other more complex architectures (series, k-out-of-n, stand-by, multi-state, or mixed) by modifying equation (2).

The results for system availability are all larger than 0.99, with or without considering censored data. The difference is not very obvious for two reasons: first, the system configuration is parallel; second, the individual balling drums have relatively high availabilities (higher than 0.9). The difference (with or without considering censored data) will be more obvious with other system configurations and lower individual availabilities.

For the TTF data, the shape parameter of the Weibull distribution is less than 1 (see Figure 5(a)). The TTFs therefore show a decreasing failure rate (as in the early stage of the bathtub curve), which is not consistent with real-world experience of mechanical equipment. However, the TTF data include not only corrective maintenance but also preventive maintenance. The decreasing trend suggests that a possible way to improve the TTF is to improve the preventive maintenance plan.

8. Conclusions

This study proposes a parametric Bayesian approach to assess system availability in the operational stage. MCMC is adopted to take advantage of both analytical and simulation methods. Because MCMC handles the high-dimensional numerical integration, the selection of prior information and the descriptions of reliability/maintainability can be more flexible and realistic. In this method, MTTF and MTTR are treated as distributions instead of being "averaged" by point estimation. This better reflects reality; in addition, the limitations of simulation data sample size are overcome by MCMC techniques.

In the case study, TTF and TTR are determined using a Bayesian Weibull model and a Bayesian lognormal model, respectively. The results show that:

- The proposed approach can integrate analytical and simulation methods for system availability assessment and could be applied to other technical problems in asset management (e.g., other industries, other systems);
- There is a connection between technical and "soft" KPIs;
- The threshold can be treated as a monitoring line by the mining company for continuous improvement.

Acknowledgements

The motivation for the research was the project "Key Performance Indicators (KPIs) for control and management of maintenance process through eMaintenance (in Swedish: Nyckeltal för styrning och uppföljning av underhållsverksamhet m h a eUnderhåll)", which was initiated and financed by LKAB. The authors wish to thank Ramin Karim, Peter Olofsson, Mats Renfors, Sylvia Simma, Maria Rytty, Mikael From and Johan Enbak for their support for this research in the form of funding and work hours.

References 

Brender, D. M., 1968. The Bayesian Assessment of System Availability: Advanced Applications and Techniques. IEEE Transactions on Reliability, 17(3), pp. 138-147.

Brender, D. M., 1968. The Prediction and Measurement of System Availability: A Bayesian Treatment. IEEE Transactions on Reliability, 17(3), pp. 127-138.

Dekker, R. & Groenendijk, W., 1995. Availability Assessment Methods and their Application in Practice. Microelectronics Reliability, 35(9-10), pp. 1257-1274.

Faghih-Roohi, S., Xie, M., Ng, K. M. & Yam, R. C., 2014. Dynamic Availability Assessment and Optimal Component Design of Multi-state Weighted k-out-of-n Systems. Reliability Engineering and System Safety, Volume 123, pp. 57-62.

Khan, M. A. & Islam, H., 2012. Bayesian Analysis of System Availability with Half-Normal Life Time. Quality Technology and Quantitative Management, 9(2), pp. 203-209.

Kuo, W., 1985. Bayesian Availability Using Gamma Distributed Priors. IIE Transactions, 17(2), pp. 132-140.

Lin, J., 2014. An Integrated Procedure for Bayesian Reliability Inference using Markov Chain Monte Carlo Methods. Journal of Quality and Reliability Engineering, Volume 2014, pp. 1-16.

Marquez, A. C., Heguedas, A. S. & Iung, B., 2005. Monte Carlo-based Assessment of System Availability. A Case Study for Cogeneration Plants. Reliability Engineering and System Safety, Volume 88, pp. 273-289.

Marquez, A. C. & Iung, B., 2007. A Structured Approach for the Assessment of System Availability and Reliability using Monte Carlo Simulation. Journal of Quality in Maintenance Engineering, 13(2), pp. 125-136.

Ocnasu, A. B., 2007. Distribution System Availability Assessment - Monte Carlo and Antithetic Variates Method. Conference Proceeding. 19th International Conference on Electricity Distribution, 21-24 May, Vienna, Austria.

Raje, D., Olaniya, R., Wakhare, P. & Deshpande, A., 2000. Availability Assessment of a Two-unit Stand-by Pumping System. Reliability Engineering and System Safety, Volume 68, pp. 269-274.

Sharma, K. & Bhutani, R., 1993. Bayesian Analysis of System Availability. Microelectronics Reliability, 33(6), pp. 809-811.

Yasseri, S. F. & Bahai, H., 2018. Availability Assessment of Subsea Distribution Systems at the Architectural Level. Ocean Engineering, Volume 153, pp. 399-411.

Zio, E., Marella, M. & Podofillini, L., 2007. A Monte Carlo Simulation Approach to the Availability Assessment of Multi-state System with Operational Dependencies. Reliability Engineering and System Safety, Volume 92, pp. 871-882.

 
