Machine condition monitoring
Dany Robberecht – [email protected] – www.verhaert.com
VERHAERT INNOVATION DAY – OCTOBER 20th, 2006 – www.mastersininnovation.com
Commercially confidential – This presentation contains ideas and information which are proprietary to VERHAERT, Masters in Innovation®*; it is given in confidence. You are authorised to open and view the electronic copy of this document and to print a single copy. Otherwise, the material may not, in whole or in part, be copied, stored electronically or communicated to third parties without the prior agreement of VERHAERT, Masters in Innovation®*.
* VERHAERT, Masters in Innovation is a registered trade name of Verhaert Consultancies N.V.
The problem of operating potentially damaged equipment
– Lost efficiency through downtime and ineffective performance
– Cost of precautionary replacement
– Cost of repair after catastrophic failure
– Cost of precautionary spares inventory
– Cost of recovery to normal operation after failure
– Uncertain yield and quality
What maintenance mix does your organisation have?
– Is there a difference between process control and the equipment we manufacture?
– How good are the earning-generating capabilities of your mix?
Need to predict equipment failure
– The frequency of monitoring is set by the lead time to failure (P-F interval)
– The interval is variable (type of failure, lubrication, ...)
– Estimates are conservative most of the time, but why can we all easily quote many undetected failures?
Need for a holistic view of equipment condition
Need for greater accuracy in failure prediction
Need to reduce the cost of condition monitoring
Need to improve equipment and process reliability (prolong MTBF)
a) Time-frequency data analysis and time-scale data processing have been investigated
These techniques enable the detection of incipient faults
– computationally demanding
– difficult to identify data features for automated and online condition monitoring from this representation
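As a rough illustration of the time-frequency representation discussed above, the following sketch uses SciPy's `spectrogram`. The signal, sample rate, and fault frequency are all invented for illustration: a weak 1.2 kHz "bearing tone" that is buried in the raw trace becomes visible as band energy in the time-frequency plane.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000  # sample rate in Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

# Healthy baseline: 50 Hz shaft tone plus broadband noise
x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
# Incipient fault: weak 1.2 kHz tone appearing halfway through the recording
x[t >= 0.5] += 0.2 * np.sin(2 * np.pi * 1200 * t[t >= 0.5])

f, seg_times, Sxx = spectrogram(x, fs=fs, nperseg=256)

# Compare energy near 1.2 kHz before vs. after the fault onset
band = (f > 1100) & (f < 1300)
early = Sxx[band][:, seg_times < 0.5].mean()
late = Sxx[band][:, seg_times >= 0.5].mean()
print(late / early)  # ratio well above 1 reveals the incipient fault
```

As the slide notes, turning such a representation into automated features still requires expert judgement; the band limits above were chosen by hand.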
b) Higher-order statistics are a relatively new tool in the area of data processing (turbo-pump, rolling-element bearing and electric-motor faults)
– high computational overhead
– only effective for a limited range of faults
c) Independent component analysis is difficult to interpret physically
> These techniques require an expert and hardly give an early-warning indicator of wear!
Neural networks have the ability to learn, to form a model from their training data alone
– Storing information as patterns
– Using those patterns to solve problems
Used in
– Signature analysis
– Investment analysis
– Process control
Where can neural networks be used?
– Where we can’t formulate an algorithmic solution
– Where we can get lots of examples of the behaviour we require
– Where we need to pick out the structure from existing data
Neural network technology falls under the umbrella of Artificial Intelligence (AI)
AI suffered from overhype and inability to deliver
Technology developed at QinetiQ Ltd in the early 80s
– Experiments with different AI technologies
– Single-neuron-layer networks provided good results in military applications
– QQ developed the AI, with a technology-transfer opportunity, into an early-warning anomaly-detection system to predict wear and failure of equipment
– Condition of machine: failure & fault
– Severity of fault
– Time to failure
Componential coding is an innovative approach
– Unsupervised
– Adaptive
– It derives the features of the data through training (no labelling of known faults, nor pre-processing to enhance known fault characteristics)
High-fidelity, accurate modelling of the actual signal behaviour leads to a more sensitive and robust indicator: it makes it possible to detect changes in the deep structure of the signal
Automatic alignment of the model to multi-dimensional datasets (convolution techniques)
Universal and rapidly applicable to any piece of equipment
Componential coding provides benefits over more familiar techniques
Compared to Principal Component Analysis (a statistical, averaging technique that blurs the signal)
– Fine resolution (e.g. the impulsive noise of a bearing) – better for subtle changes
– Very good at detecting intermittent faults
– High fidelity, because it models the actual signal
Compared to RBF networks
– No expert required to design the signal features (no dependence on the quality of the expert)
– No bias/assumptions (typically built into expert-derived features)
– It does more than classification into healthy and non-healthy
Compared to multi-layered networks
– Less complex, able to deliver results
When is it particularly suited?
– When no machine-specific tailored techniques have been developed
– When no previous monitoring experience is available
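The "averaging blurs the signal" point about PCA can be illustrated with a NumPy-only sketch on synthetic data (everything here is invented for illustration). A low-rank PCA reconstruction smooths away an impulsive, bearing-style spike, so the spike survives only in the residual rather than in the reconstructed signal itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 windows of 64 samples: a smooth sinusoid with varying amplitude
X = np.sin(np.linspace(0, 2 * np.pi, 64)) * rng.uniform(0.8, 1.2, (200, 1))
X += 0.05 * rng.standard_normal(X.shape)
X[42, 30] += 3.0  # an impulsive, bearing-style spike in one window

# PCA via SVD, keeping only the dominant component
k = 1
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_hat = X.mean(axis=0) + (U[:, :k] * s[:k]) @ Vt[:k]

# The low-rank (averaging) reconstruction smooths the spike away...
blur = abs(X[42, 30] - X_hat[42, 30])
# ...so an impulsive fault shows up only in the residual error
err = ((X - X_hat) ** 2).mean(axis=1)
print(blur, err.argmax())
```

This supports the slide's claim that an averaging technique loses fine, impulsive detail, which is exactly the detail subtle bearing faults live in.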
Prior application: military (MOD)
– Little prior information on the characteristics of the target
Further research outlined technology-transfer opportunities
– In-house research to detect wear of mechanical equipment
– Customer funding to detect anomalies of an electric motor
The technology was benchmarked by the University of Manchester
– Outperformed other techniques
How does it work?
• It is an advanced statistical model
• Layered statistical templates
• Non-linear transformation of data
• Robust statistics to output one simple scalar
• It is not a typical rule-based system, which means
• The templates are learned
• The parameters are learned
• It works unsupervised and takes decisions itself
• It learns from daily operations (e.g. from faulty data to interpret the severity of faults)
• Compare it to a lossy data-compression algorithm
• It transforms input data into a coded form and back again
• To achieve a minimum quantity of information (a scalar)
The basic principle of the model
– The system is deliberately unable to reconstruct every data set accurately; instead it measures how inaccurately each data set is reconstructed
– This gives it the ability to differentiate between data sets having different statistical characteristics
It can reconstruct a model-based replica of any sensor type
It is designed to optimise the accuracy with which the model reconstructs the input data signal
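The lossy-compression analogy above can be sketched with a simple vector quantiser standing in for the proprietary model (all data, parameters, and the choice of k-means are invented for illustration): data resembling the healthy training set reconstructs accurately, while statistically different data does not.

```python
import numpy as np

rng = np.random.default_rng(2)

def codebook(train, k=8, iters=20):
    """Tiny k-means vector quantiser: the 'lossy compressor'."""
    codes = train[rng.choice(len(train), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((train[:, None] - codes[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (idx == j).any():
                codes[j] = train[idx == j].mean(axis=0)
    return codes

def reconstruction_error(data, codes):
    """Encode each window to its nearest code word, decode, measure MSE."""
    idx = np.argmin(((data[:, None] - codes[None]) ** 2).sum(-1), axis=1)
    return ((data - codes[idx]) ** 2).mean()

# 'Healthy' windows: low-amplitude noise around a sine template
template = np.sin(np.linspace(0, 2 * np.pi, 32))
healthy = template + 0.1 * rng.standard_normal((300, 32))
codes = codebook(healthy)

# Unseen healthy data reconstructs well; anomalous data does not
unseen = template + 0.1 * rng.standard_normal((50, 32))
faulty = template + 0.1 * rng.standard_normal((50, 32))
faulty[:, 10:14] += 1.5  # localised change in the 'deep structure'

print(reconstruction_error(unseen, codes))  # small
print(reconstruction_error(faulty, codes))  # noticeably larger
```

The size of the reconstruction error, not the reconstruction itself, is the useful output, which is exactly the principle stated above.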
There are more than 90 parameters in the model
The validation process optimises the settings of the model
Sensitive & robust anomaly detection through multiple comparisons
[Figure: schematic of the model]
– The current input data signal: a vector of several sample sequences from several sensors (axes: sensor channel vs. time into recording).
– Data model defined by the neurons’ parameters, used to non-linearly transform the input data signal into the encoding neural outputs.
– Reconstruction of an estimated replica of the input data signal: a linear transformation of the encoding neural outputs.
– Model-based reconstructed estimate of the current input data signal.
– Difference with the reconstructed model; variance of the size (shape of the movement).
The reconstruction model
– A single layer of neurons, each of which uses the same input vector
– Each neuron calculates a different neural output, which is a function of both the input vector and the neural parameters, whose values/settings differ from neuron to neuron
– The non-linear transformation of the input signals is analogous to the way Fourier analysis linearly transforms an input vector into Fourier coefficients, or principal component analysis linearly transforms inputs into a set of coefficients (projections on the basis set of the eigenvectors of the covariance matrix)
– Robust statistics for reliable output (proprietary information, not disclosed)
Patent information
– Filed: 17.02.2000
– Title: Signal processing technique
Robust statistics to output one simple scalar: the discrimination factor
The scalar can indicate
– Present general condition – anomaly identification
– Severity of fault
– Time to critical failure
– Failure
– Physical location of fault
– What caused the failure
The scalar can be normalised to customer preference
– Standard: 0 – 1
– Percentage
– Graph – diagram
From real time continuous monitoring to monitoring on request
How do you train and calibrate the model?
Training process
– Run down healthy data sets
– Known configuration of sensors
– Nominal operating conditions
– The model minimises the mean-squared error (MSE)
Validation process
– Measure how closely the model matches new ‘unseen’ healthy data (collected under the same conditions as the training data)
– The MSE of the new data set gives the natural variability of that machine
Make the system more robust with ‘faulty’ data
– Faulty data is not used to train the model, nor to look for a particular fault
– It stabilises the process – the system learns from a particular fault
– If particular failure modes are known, specific data on them can be added
Use different sensors to characterise input
Training phase
– For the current values of the model parameters, measure the mean-squared model-error for the training data-set.
– Adapt the model parameters to reduce the mean-squared model-error for the training data-set as far as possible.
– Iterate to minimise the error.
Validation phase
– For the trained values of the model parameters, measure the mean-squared model-error for the validation data-set.
Monitoring phase
– For the trained values of the model parameters, measure the mean-squared model-error for the data-set(s) to be monitored.
– For the trained values of the model parameters, calculate the discrimination index for the data-set(s) to be monitored.
– Optional: check the statistical significance of the discrimination index by repeating the previous step with different validation data-sets.
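The three phases can be sketched end-to-end. The real discrimination index is proprietary and not disclosed; here a simple z-score of the monitored error against the validation baseline stands in for it, with the template model and all signals invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in 'trained model': per-window reconstruction error vs. a fixed template
template = np.sin(np.linspace(0, 2 * np.pi, 64))

def window_errors(windows):
    return ((windows - template) ** 2).mean(axis=1)

# Validation phase: healthy data under nominal conditions gives the
# machine's natural variability
validation = template + 0.1 * rng.standard_normal((100, 64))
val_err = window_errors(validation)
mu, sigma = val_err.mean(), val_err.std()

def discrimination_index(windows):
    """Hypothetical stand-in index: z-score of the monitored mean error
    against the validation baseline (the real index is proprietary)."""
    return (window_errors(windows).mean() - mu) / sigma

# Monitoring phase: score new recordings
healthy_monitor = template + 0.1 * rng.standard_normal((20, 64))
faulty_monitor = template + 0.1 * rng.standard_normal((20, 64))
faulty_monitor += 0.2 * np.sin(np.linspace(0, 24 * np.pi, 64))  # wear tone

print(discrimination_index(healthy_monitor))  # near 0
print(discrimination_index(faulty_monitor))   # clearly above 0
```

Repeating the scoring with different validation sets, as the optional step suggests, would show how stable this index is for a given machine.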
What processing footprint would it require?
These figures are estimated from previous applications
Timings
– Scale the timings to your applicable processing platform
– Analysing the health of a new machine takes in the order of 0.14 s
– The timings are for a PC with an Intel Pentium 4 CPU 2.80 GHz processor and 512 MB of RAM. A machine of this specification is not required, but slower machines would obviously affect speed.
Memory usage
– The technology used approximately 7.5 MB of RAM during these experiments. The use of memory has not been optimised for this application.
Can you use it or make it smarter?
Predict the time-to-failure
– Physics-of-failure model – FMECA
– Data on usage
Predict the severity of an error
– Simply run several faulty data sets to understand the discrimination index
– Differentiation between different faulty data sets is the basis for severity discrimination
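The severity idea can be sketched by scoring several faulty recordings of increasing fault amplitude against a fixed healthy template (template, fault model, and severities all invented for illustration): the error-based index grows monotonically with severity, which is what makes severity discrimination possible.

```python
import numpy as np

rng = np.random.default_rng(5)
template = np.sin(np.linspace(0, 2 * np.pi, 64))

def mean_error(windows):
    """Stand-in severity index: mean reconstruction error vs. the template."""
    return ((windows - template) ** 2).mean()

# Hypothetical faulty recordings at increasing (known) fault amplitudes
severities = [0.1, 0.3, 0.6, 1.0]
indices = []
for s in severities:
    rec = template + 0.05 * rng.standard_normal((50, 64))
    rec += s * np.sin(np.linspace(0, 24 * np.pi, 64))  # fault tone, scaled
    indices.append(mean_error(rec))

print(indices)  # increases monotonically with severity
```

Running real faulty data sets through the trained model in the same way would calibrate the index against known severities.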
Integrate an array of sensors, e.g. to improve fault recognition
– Criticality of sensors
– Cost of sensors (more, but low-cost components)
– Accessibility of the component
Learn a particular failure mode (engineering)
– Array of sensors at specific locations
– Use different types of sensors
– Use different networks to isolate a failure location
Test tool
– Factory acceptance test (general condition)
– Highly accelerated life test or HALT (mutual comparison of tested systems)
– Objective test for maintenance personnel (instead of an instinctive test!)
Real-time continuous test mode of field equipment
– Test for critical faults
– On-site tests to improve the management of time intervals
– Online process monitoring