CEPA Metal Loss Inline Inspection Tool Validation Guidance Document
Table of Contents
List of Tables
List of Figures
1. Introduction
   1.1. Definition of Terms
   1.2. Revisions to this Guidance Document
   1.3. Background and Philosophy
   1.4. Harmonization with Other Industry Documents
2. Scope
   2.1. Practically Assessing ILI Performance
3. ILI Acceptance Overview
   3.1. Process Overview
   3.2. Components
4. Overall Process
   4.1. Process Description
   4.2. Process Flowchart
5. Process Verification
   5.1. Process Overview
   5.2. (Pre-Run) Tool Selection
   5.3. Inspection System
   5.4. (Pre-Run) Planning and Preparation
   5.5. (Pre-Run) Function Checks
   5.6. (Pre-Run) Mechanical Checks
   5.7. (In the Pipe) Procedure Execution
   5.8. (Post-Run) Mechanical Check
   5.9. (Post-Run) Function Check
   5.10. (Post-Run) Field Data Quality Check
   5.11. (Post-Run) Data Analysis Process Check
   5.12. (Post-Run) Cumulative Assessment
6. Validation
   6.1. Known Pipeline Features
   6.2. Comparison with Previous ILI
   6.3. Validation from Excavation Data
A1. Scorecard and Guidance Document
   A1.1. Verification Examples
A4. Validation using a Previous ILI
   A4.1. Demonstration of Concept
   A4.2. ILI Error
   A4.3. Comparison with Reference Measurements
   A4.4. Acceptance Criteria
A5. Opportunities for Future Refinement
   A5.1. Standardization of ILI Reporting
   A5.2. Documentation of Procedures
   A5.3. Refinement of Scorecard
   A5.4. Technology Specific Verification
A6. Scoring – Verification Process Scorecard Summary
5.3.2. Motivation

Whereas the emphasis of the Tool Selection check is to ensure
that the technology is capable of detecting and sizing the
anomalies, the motivation of this check is to ensure that the
inspection system is able to deliver quality data, as demonstrated
by its history of successful runs.
5.3.3. Scoring

Table 3: Inspection System Data Check Scoring

Score  Scoring Description
F      Tool is experimental and there is no established history, or it has been demonstrated to have deficiencies in addressing the threat.
C      Same model of tool with minor differences (such as diameter) has a history of successful runs to assess the threat, or the specific model of tool has a history of successful runs to assess the threat for other operators, but the results of those runs are not available.
P      Operator has firsthand knowledge of the performance capabilities of the tool and has several successful inspections using the tool.
Note: For a comprehensive Verification Process Scorecard Summary, please
refer to Table 33 in A.6.
The scoring of this parameter is expected to be relatively
straightforward. If the operator has firsthand experience with the
specific ILI vendor’s tool, and the use of that tool has reliably
resulted in successful inspections, then a “Pass” is given.
If, however, the operator does not have firsthand experience with
the specific tool, but has indirect experience or knowledge of the
tool’s performance, a “Conditional” pass is scored. For example, a
“Conditional” is given if the operator has extensive experience
with other tools in the ILI vendor’s fleet, but those tools differ
from the tool used in the current inspection in some way, such as
diameter. Since there are usually a large number of similarities
between a vendor’s 24-inch and 30-inch tool, for example, the
performance of the 30-inch tool is a good indication of the
performance of the 24-inch tool.
This parameter receives a “Fail” if the ILI tool is an
experimental prototype or if its past runs suggest a high failure
rate.
5.3.4. Options for Dealing with Untested Tools

One of the implicit assumptions of this document is that if an
inspection is conducted according to proper procedures, then the
inspection system will perform to its ability. In the case of an
untested inspection system, that ability is not known. Therefore
to accept a tool with no history of successful runs requires a
5.9.3. Scoring

Table 9: Post-Run Function Check Scoring

Score  Scoring Description
F      Significant function checks not passed.
C      Significant function checks passed but not documented; the proper functioning of the tool can be verified by other means throughout the length of the run.
P      Function checks passed and documented.
Note: For a comprehensive Verification Process Scorecard Summary, please
refer to Table 33 in A.6.
The scoring of this parameter is expected to be relatively
straightforward. For example, malfunctioning of the tool to the
point where there is significant data loss or degradation would be
deemed a “Fail”. However, if the tool experienced a functional
issue but no significant data degradation, then a “Conditional”
pass would be assigned. A “Pass” would be reported if all function
checks were passed at tool receive and documented.
5.9.4. Options for Dealing with Compromised Data Quality

The impact on data due to a tool malfunction must be addressed
on a case-by-case basis since the range of potential outcomes is
large. For example, an electronics failure at launch would require
a re-inspection. At the other end of the spectrum, an electronics
failure at receive would not be expected to be material to the
quality of the data collected.
If the function check is undocumented, the operator must
demonstrate that the tool was operating properly for the entire
length of the inspection. Documentation of this check is
important because the demonstration that the tool was operating
properly may be difficult.
The guiding principle remains, as stated above: The operator
must ensure that the dimensions of an injurious defect, for the
pipeline in question, are greater than the minimum detection and
sizing thresholds of the tool. If the operator cannot be confident
that an injurious defect would be detected, a range of options
exist depending on the location and length of the area where
data has been compromised. However, given the limited ability to
analyze tool data in the field, detailed analysis of data
degradation is unlikely to occur until data analysis in the office
environment is undertaken. As such, this check is intended to
identify major data shortfalls and degradation issues.
Should a “Fail” score be appropriate for this parameter (i.e.,
significant data degradation), the options are somewhat limited
in that some large-scale program to prove the integrity of the
pipeline must be undertaken. For example, the operator must
re-inspect the line having remedied the suspected cause of the
data degradation.

The ILI report should only be accepted once all girth weld numbers and pipe asset data
completely match the reference listing. The reported location of
all features must meet location-accuracy specifications to enable
the excavation of any reported feature.
6.1.4. Special Considerations

If the ILI report initially fails to meet the above requirements, the
operator must investigate the cause. To meet the acceptance
criteria, the operator may choose to exclude the portion of the
inspection within the launch or receive stations, where there are
many short pipe joints. Also, when replaced or rerouted segments
have been identified in the ILI report, the operator should
investigate and validate the information and then assign new
Girth Weld (GW) numbers. If the GW numbers assigned are
different from the GW reported on the ILI report, the operator
should request the ILI vendor to update the ILI report with the
new GW numbers. Finally, any unexplained features reported by
the ILI should be investigated to determine their cause.
6.2. Comparison with Previous ILI
6.2.1. Description

The comparison with a previous ILI is likely the most
comprehensive method for validating the results of an ILI
inspection. Unlike excavating the pipeline, a previous ILI enables
the operator to systematically compare all anomalies of the
current inspection to the previous reference inspection.
Validation using a previous ILI consists of two parts: detection and accuracy. By using a previous inspection, the operator can confirm both the detection capabilities and the accuracy of the current inspection.

6.2.2. Procedure

The validation procedure using previous ILI data is listed below.
The procedure is meant as a guideline rather than a rigorous set of
instructions. Deviations from the procedure may be required in
some circumstances. Furthermore, a lengthy interval (e.g. more
than 5 years) between ILI inspections or the use of very different
technologies can make matching difficult if not impossible. If
there is insufficient similarity between the inspections to make
adequate matches, then the current inspection cannot be
validated by the comparison. However, it would not necessarily
lead to the rejection of the ILI run, since the cause of the
discrepancy may be the previous ILI run.
6.2.2.1. Validation Parameters
The ILI validation criterion is based on the assumptions and
calculations in A4 (Validation using a Previous ILI). The
validation parameters for a typical run are summarized in
Table 14.
Table 14: ILI Validation Parameters

$T$: Tolerance to be validated in the current ILI.
    This value is 10% if the specified accuracy of the current ILI is ±10% NWT, 80% of the time.

$C$: Specified confidence level (or certainty) of the current ILI.
    This value is 80% if the specified accuracy of the current ILI is ±10% NWT, 80% of the time.

$T_1$: Specified tolerance of the in-the-ditch measurements.
    If the device used in the field is highly accurate, then the field measurements can be assumed to have no error. Otherwise, refer to PRCI project EC-4-2 for the depth error of commonly used NDE devices.

$C_1$: Specified confidence level (or certainty) of the in-the-ditch measurements.
    This value is assumed to be 95%.

$T_{upper}$: Upper bound of acceptance for the ILI tolerance.
    CEPA recommends that this value be $1.1 \times T$. Thus, in most cases, $T_{upper} = 11\%$.

$1 - \alpha$: Confidence level for the tolerance validation.
    $1 - \alpha$ is commonly set at 95%, which makes $\alpha = 0.05$.

$N$: Minimum comparison sample size.
    Based on the above parameters and the calculations in A4 (Validation using a Previous ILI), the minimum sample size is 134 comparisons. Note that if the NDT measurements in the field have significant error, then the procedure in A4 (Validation using a Previous ILI) should be used to calculate the minimum sample size.
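As a rough cross-check of the quoted sample size, a minimal Python sketch follows. It is our reconstruction, not the document's stated derivation: it assumes negligible field-measurement error and a one-sided 95% normal-approximation bound on the sampling error of the estimated standard deviation, which yields a figure close to, but not exactly, the 134 comparisons quoted above.

```python
import math
from statistics import NormalDist

def min_comparisons(T=0.10, T_upper=0.11, alpha=0.05):
    """Approximate the minimum number of ILI/field depth comparisons.

    Sketch only: assumes zero field-measurement error and a normal
    approximation in which the sample standard deviation has standard
    error sigma / sqrt(2 n). The specified confidence C cancels out,
    because T and T_upper scale by the same z-factor.
    """
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided quantile (1.645 for alpha = 0.05)
    r = T_upper / T - 1.0                # allowed relative overshoot of the estimate
    # Require z * sigma / sqrt(2 n) <= r * sigma  =>  n >= (z / r)^2 / 2
    return math.ceil((z / r) ** 2 / 2.0)

print(min_comparisons())  # 136 under these assumptions, close to the quoted 134
```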
6.3.2.2. Depth Difference Statistics

For each comparison, calculate the apparent difference in depth:

$$\Delta_i = d_i - d_{ri}$$

where $d_i$ is the depth of the $i$'th anomaly in the inspection; $d_{ri}$ is the corresponding in-the-ditch depth of the $i$'th anomaly; and $\Delta_i$ is the difference between the ILI reported depth and the in-the-ditch measurement of the $i$'th anomaly.

Calculate the mean, $\bar{\Delta}$, and standard deviation, $s_\Delta$, of the depth differences $\Delta_i$.
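These statistics are straightforward to compute; the short sketch below is illustrative only (the paired depth values are hypothetical).

```python
from statistics import mean, stdev

# Hypothetical paired depths, in % of nominal wall thickness (NWT):
# d[i] is the ILI reported depth, d_r[i] the in-the-ditch measurement.
d   = [22.0, 35.0, 18.0, 41.0, 27.0]
d_r = [20.0, 38.0, 15.0, 44.0, 25.0]

delta = [di - dri for di, dri in zip(d, d_r)]  # apparent depth differences
delta_bar = mean(delta)                        # mean difference (bias estimate)
s_delta = stdev(delta)                         # sample standard deviation

print(f"mean difference = {delta_bar:+.2f}% NWT, std dev = {s_delta:.2f}% NWT")
```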
Score (F/C/P)
F - Tool not capable of detection or sizing of expected anomaly type(s).
C - Tool capable of detecting anomaly types but has limited sizing ability.
P - Tool is best available technology for detecting and sizing expected anomaly type(s).
Flowchart Box 1A (Pass Check?)
Use Guidance Document SP0102-2010, specifically Table 1. “Pass” if the tool is the best available technology relative to the purpose of the inspection.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the operator can establish that the integrity of the pipeline is not jeopardized by the use of the specific tool.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems in this check are likely due to changes in the pipeline (diameter, wall thickness, etc.). Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of a more suited ILI tool would lead to a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Table 16: Guidance for Parameter #2 Pre-Run Inspection System Data

Item # 2
Parameter Inspection system check
Stage Pre-run
API Category System Results Validation
API 1163 Reference 2
Score (F/C/P)
F - Tool is experimental and there is no established history, or it has been demonstrated to have data gaps.
C - Tools of the same model with minor differences have a history of successful runs, or the tool has a history of successful runs but the data is not available to the operator.
P - Tool has a history of successful runs.
Flowchart Box 1A (Pass Check?)
“Pass” if the operator has first-hand knowledge of the performance capabilities of the tool and has several successful inspections using the tool.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the operator has first-hand knowledge of a similar model of tool on other pipelines; the experience may be with models of the tool in different diameters. “Conditional” if the tool has been successfully run on several other pipeline systems, but for other operators, and the operator has no access to the data.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems in this check are likely due to changes in the pipeline (diameter, wall thickness, etc.). Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of a more tested ILI tool would lead to a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Score (F/C/P)
F - Key elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling not conducted.
C - Majority of elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling completed but undocumented.
P - All elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling completed and documented.
Flowchart Box 1A (Pass Check?)
Use Guidance Document SP0102-2010, specifically Sections 4, 5 and 6. “Pass” if the appropriate plan was developed and executed for the expected line conditions.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if either the plan was deficient or the plan was not properly executed, but without affecting the data.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems can be caused by an event not covered by the plan. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if the lack of planning in one of the elements described in the RP contributed to compromising data collection. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Table 18: Guidance for Parameter #4 Pre-Run Function Checks

Item # 4
Parameter Function Checks
Stage Pre-run
API Category System Operational Verification
API 1163 Reference 7.3.2
Score (F/C/P)
F - Significant function checks not passed.
C - Significant function checks completed but undocumented.
P - All function checks passed and documented.
Flowchart Box 1A (Pass Check?)
“Pass” if all relevant function checks passed, including but not limited to:
- Adequate power supply available and operational;
- Sensors and data storage operating;
- Adequate data storage available;
- All tool components properly initialized.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the checks were done but not documented or if the functional integrity of the tool can be demonstrated indirectly.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems are unlikely in this check. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool with adequate pre-run checks is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Score (F/C/P)
F - Significant mechanical checks not passed.
C - Significant mechanical checks completed but undocumented.
P - All mechanical checks passed and documented.
Flowchart Box 1A (Pass Check?)
“Pass” if all relevant mechanical checks passed, including but not limited to:
- Visual inspection of tool to ensure it is mechanically sound;
- Ensuring electronics are sealed;
- Ensuring adequate integrity of cups;
- Ensuring all moving parts are functioning as expected.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the checks were done but not documented or if the mechanical integrity of the tool can be demonstrated indirectly.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems are unlikely in this check. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool with adequate pre-run checks is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Score (F/C/P)
F - Inspection not conducted as per inspection procedure with potential material impact to data quality.
C - Inspection not carried out as per inspection procedure but deviations are not material to data quality.
P - Inspection carried out as per inspection procedure.
Flowchart Box 1A (Pass Check?)
“Pass” if all relevant checks pass, including but not limited to:
- Tool run was executed as per the planned pigging procedure;
- Line condition (fluid composition, flow rate, temperature, pressure, etc.) was as planned;
- Line conditions for tool launch were as expected and the launch proceeded as planned;
- Line conditions for tool receive were as expected and the receive proceeded as planned;
- Tool speed was within the planned range for the length of the run;
- Tool tracking unfolded as planned.
If deviations did occur, they were planned or within expectations.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if deviations from the planned procedure are manageable or pose minimal risk to the pipeline.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems can be caused by short speed excursions. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool with better planning to address the problem is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Score (F/C/P)
F - Significant tool wear, damage or debris with material impact to data.
C - Tool wear, damage or debris observed with no material impact to data.
P - Tool received in good mechanical condition (no unexpected tool wear, damage or debris).
Flowchart Box 1A (Pass Check?)
“Pass” if all relevant mechanical checks passed, including but not limited to:
- Visual inspection of tool to ensure it is mechanically sound;
- Ensuring electronics are sealed;
- Ensuring adequate integrity of cups;
- Ensuring all moving parts are functioning as expected;
- The volume and nature of any debris present was within expectations and not detrimental to data collection.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the checks were done but not documented or if the mechanical integrity of the tool can be demonstrated indirectly. The demonstration of the mechanical integrity of the tool can be difficult unless it has been properly documented.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems are unlikely in this check, except when the damage to the tool occurred near the end of the run. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool after addressing the cause of the problem is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Table 22: Guidance for Parameter #8 Post-Run Function Check

Item # 8
Parameter Function Check
Stage Post run
API Category System Operational Verification
API 1163 Reference 7.5.2
Score (F/C/P)
F - Significant function checks not passed.
C - Significant function checks passed but undocumented.
P - Function checks passed and documented.
Flowchart Box 1A (Pass Check?)
“Pass” if all relevant function checks passed, including but not limited to:
- Adequate power supply available and operational;
- Sensors and data storage operating;
- Adequate data storage available;
- All tool components functioning as expected.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the checks were done but not documented or if the functional integrity of the tool can be demonstrated indirectly. The demonstration of the functional integrity of the tool can be difficult unless it has been properly documented.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems are unlikely in this check, except when the functional failure of the tool occurred near the end of the run. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool with better planning to address the problem is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Table 23: Guidance for Parameter #9 Post-Run Field Data Check

Item # 9
Parameter Field Data Check
Stage Post run
API Category System Operational Verification
API 1163 Reference 7.5.3
Score (F/C/P)
F - Tool is unable to meet stated specifications due to significant lack of data integrity.
C - Tool is unable to meet stated specifications but manageable through further analysis.
P - Tool is able to meet stated specifications for entire length of run.
Flowchart Box 1A (Pass Check?)
“Pass” if the data collection met all the basic quality and quantity checks, including but not limited to:
- Confirmation of continuous data stream for full circumference of pipe;
- Basic quality requirements have been met;
- The “amount” of data captured is in line with expectations.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the checks were done but not documented or if the full data collection by the tool can be demonstrated indirectly.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems can be due to short losses of data. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool with better planning to address the problem is likely to result in a successful run. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Table 24: Guidance for Parameter #10 Post-Run Data Analysis Processes and Quality Checks

Item # 10
Parameter Data analysis processes: quality checks
Stage Post-run
API Category System Results Validation
API 1163 Reference 8.2.2, Annex C
Score (F/C/P)
F - Results of data analysis quality checks are not acceptable.
C - Significant data quality checks passed but procedure is undocumented.
P - Data analysis procedures followed; data quality checks passed and documented; the number and severity of anomalies meets expectations.
Flowchart Box 1A (Pass Check?)
“Pass” if data analysis checks are met, including but not limited to:
- Data was recorded continuously for the full pipe circumference;
- Sensor response was within expected range(s);
- Data analysis processes were executed as per pre-defined procedures;
- Analysis was conducted by persons with qualifications as agreed;
- Automated detection and sizing parameters were used as agreed;
- Manual intervention by data analysts was conducted as agreed;
- Burst pressure calculations were conducted as agreed;
- Correct pipeline parameters (pipe diameter, wall thickness, manufacturer, and grade) were documented and used to undertake the analysis;
- The number and type of anomalies reported are consistent with expectations.
Flowchart Box 1B (Is the impact on data significant?)
“Conditional” if the analysis of the data deviated from the planned procedure but the impact on the data is deemed minimal.
Flowchart Box 1C (Is problem localized?)
Go to Box 1E if the problem is localized: localized problems are unlikely in this check or can be corrected by reanalysis of the affected areas. Go to Box 1D if the problem is widespread.
Flowchart Box 1D (Can a re-run fix the problem?)
“Fail” if a rerun of the tool is likely to result in a successful run; note, however, that rerunning the tool is unlikely to address problems in the analysis of the data. If a rerun is unlikely to be successful, then the threat must be addressed by other means.
Flowchart Box 1E (Can issue be addressed by other means?)
“Conditional” if alternative integrity management tools such as hydrotesting, direct assessment, etc. can adequately address the localized problems. Go to Box 1D if the areas are too numerous or too long to be addressed economically by other means.
Score (F/C/P)
F - Cumulative impact of "Conditional" passes deemed to materially impact ILI results.
C - Cumulative impact of "Conditional" passes is unclear and may have impacted ILI results.
P - The cumulative impact of all "Conditional" passes, if any, deemed to be tolerable.
Flowchart Box 1A (Pass Check?)
“Fail” if the number and nature of any “Conditional” passes provide cause for concern on a cumulative basis. Relevant considerations include, but are not limited to:
- Can data gaps be mitigated effectively using alternative methods?
- Can any data gaps actually be addressed through re-running the tool, or are line conditions such that similar challenges will remain?
- Are any “Conditional” pass issues cumulative in nature? Issues would be considered cumulative if one issue magnifies any pre-existing data degradation (such as a tool overspeed in a location where tool performance is already compromised as a result of debris-related sensor lift-off). Conversely, issues would not be considered cumulative if their impacts on data quality are largely independent (such as a run where the tool overspeeds at launch and experiences debris issues at receive).
Additional Comments:
F = Failing assessment of a parameter that cannot be mitigated
C = Conditional passing score that may be investigated, mitigated, documented, and accepted
During the running of the tool, a short speed excursion occurred
at the start of the run. The excursion was believed to affect sizing
capabilities of the tool, but would have minimal effect on the
detection capabilities of the tool. Since no metal-loss anomalies
were detected in the area of the speed excursion, the effect on
the data was deemed to be not significant.
In the data analysis process, it was discovered that one sensor
was lost for 100 metres. The loss of one sensor was believed to
affect detection and sizing capabilities of the tool only for metal-
loss anomalies along that sensor’s track. Only very sparse
shallow anomalies were detected in the section of the sensor
loss, and none of the anomalies were in the vicinity of the lost
sensor. The effect on the data was deemed to be not significant.
The final Cumulative Assessment examined the “Conditional”
scores. Since the area of the speed excursion and the sensor loss
were along different parts of the pipeline, all aspects of the run
were considered acceptable and none of the issues were
cumulative; the ILI run was accepted.
A1.1.2. Example 2

The second example is:
• 100 km NPS 24 run on a 1950 asphalt-coated line.
• Loss of one sensor bank for the last 1 km of the inspection.
• All pre-run and post-run tests passed and documented.
• 20 metal loss anomalies reported by the ILI tool.
• No actionable anomalies identified; no previous inline inspection.
The completed scorecard is shown in Table 27.
Table 27: Completed Scorecard Example 2

Item  Stage     Parameter                Score        Comment
1     Pre run   Tool selection           Pass
2     Pre run   Inspection system        Conditional  - Operator has experience with the 16-inch and 36-inch models of this tool.
                                                      - The 24-inch model of this tool has been successfully run for several other operating companies, but results of those runs are not available.
3     Pre run   Planning                 Pass
4     Pre run   Function check           Pass
5     Pre run   Mechanical check         Pass
6     In pipe   Procedure execution      Pass
7     Post run  Mechanical check         Pass
8     Post run  Function check           Pass
9     Post run  Field data check         Conditional  - Loss of sensor bank for last two km of inspection.
                                                      - Deemed tolerable since section under hydrotest.
10    Post run  Data analysis processes  Conditional  - Independent audit identified incorrect threshold set during data analysis process.
                                                      - Data re-analyzed and report re-issued (10,000 metal features identified).
11    Post run  Cumulative assessment    Pass         - All “Conditional” scores mitigated.
                                                      - No material cumulative impacts identified.
A3.1. Process Overview

When validating an ILI inspection using a previous inspection, there are two
ILI run datasets to be compared. Defect matching is done so that defects
from the first run can be compared to the corresponding defect in the
second run. This process involves matching up the girth weld sections,
adjusting the odometer and orientation, and matching the identified
anomalies.
A3.2. Girth Weld Matching

Girth welds are reliably detected by current ILI tools. This means that
the girth weld sections can be matched up from the two ILI runs based on
the length of the sections. Once girth weld sections are matched, the defects
on specific girth weld sections can be matched between runs.
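As an illustration, matching girth weld sections by joint length can be sketched as a sequence alignment problem. The code below is a minimal, hypothetical example (the function, thresholds, and data are ours, not from the document), assuming joint lengths from the two runs agree to within a small tolerance once aligned.

```python
from difflib import SequenceMatcher

def match_joints(lengths_a, lengths_b, resolution_m=0.1):
    """Align two sequences of girth-weld section (joint) lengths.

    Sketch only: rounds lengths to a coarse resolution so small odometer
    differences between runs do not break the alignment, then uses
    difflib's longest-matching-block alignment on the rounded values.
    """
    a = [round(x / resolution_m) for x in lengths_a]
    b = [round(x / resolution_m) for x in lengths_b]
    matcher = SequenceMatcher(None, a, b, autojunk=False)
    pairs = []
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            pairs.append((block.a + k, block.b + k))  # (joint index in run A, in run B)
    return pairs

# Hypothetical joint lengths in metres (run B has one extra joint from a repair):
run_a = [12.1, 11.9, 12.3, 6.1, 12.0, 11.8]
run_b = [12.1, 11.9, 12.3, 6.1, 4.0, 12.0, 11.8]
print(match_joints(run_a, run_b))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 5), (5, 6)]
```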
A3.3. Matching of Identified Anomalies

Once the girth weld sections are matched between the runs, the defects are
matched based on chainage, orientation and depth. Location (as in chainage
and orientation) and size are prioritized over identification (whether the
anomaly is identified as being internal/external on the pipe surface). This
means that external anomalies can be matched to internal anomalies based
on chainage and orientation. This is done because the defect location on the
pipe surface is less certain than the reported chainage and orientation
values.
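A simple way to realize this within one matched joint is greedy nearest-neighbour pairing on chainage and orientation. The sketch below is illustrative only: the thresholds, tuple layout, and names are our assumptions, and (per the discussion above) internal/external identification is deliberately not used as a matching criterion.

```python
import math

def match_anomalies(run_a, run_b, max_axial_m=0.3, max_circ_deg=20.0):
    """Greedily pair anomalies from two runs within one matched joint.

    Sketch only: each anomaly is (chainage_m, orientation_deg, depth_pct).
    Location is prioritized over surface identification, so internal and
    external calls may still be paired if their locations agree.
    """
    def circ_diff(o1, o2):
        d = abs(o1 - o2) % 360.0
        return min(d, 360.0 - d)  # shortest angular distance

    pairs, used_b = [], set()
    for i, (ch_a, or_a, _) in enumerate(run_a):
        best, best_dist = None, float("inf")
        for j, (ch_b, or_b, _) in enumerate(run_b):
            if j in used_b:
                continue
            if abs(ch_a - ch_b) > max_axial_m or circ_diff(or_a, or_b) > max_circ_deg:
                continue
            # Crude combined distance: axial metres plus scaled-down degrees.
            dist = math.hypot(ch_a - ch_b, circ_diff(or_a, or_b) / 100.0)
            if dist < best_dist:
                best, best_dist = j, dist
        if best is not None:
            used_b.add(best)
            pairs.append((i, best))
    return pairs

# Hypothetical anomalies: (chainage m, orientation deg, depth % NWT)
current  = [(1012.40, 90.0, 25.0), (1013.10, 270.0, 18.0)]
previous = [(1012.55, 95.0, 22.0), (1013.05, 265.0, 15.0)]
print(match_anomalies(current, previous))  # [(0, 0), (1, 1)]
```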
A3.4. Calculating Anomaly Depth Change

With the defects matched between the two runs, it is possible to calculate
the apparent difference in depth of each anomaly. With this information, the
systematic bias and the standard deviation of the difference can be
calculated.
Suppose that we have $n$ matched anomalies with depths of $d_1, d_2, d_3, \ldots, d_n$, as reported by the current ILI. The depths of the corresponding anomalies in the reference ILI run are $d_{r1}, d_{r2}, d_{r3}, \ldots, d_{rn}$. The apparent difference in depth is then $\Delta d_i = d_i - d_{ri}$.
A4.4.1. Systematic Bias Criterion

This relative bias between two ILI runs does not indicate anything
specifically about either of the ILI runs, but a large relative bias
could indicate that one of the ILI runs does not meet its stated
specification.
To establish how much bias is significant and how one might
adjust for it, consider the following: assume that the random
component of the error has a normal distribution; then the average
of the $\Delta d$'s is the best estimate of the systematic bias, $\varepsilon_s$.
Note, however, that the average is only an estimate of the bias:

$$\varepsilon_s = \overline{\Delta d} \pm \frac{1.96\,\sigma_{\Delta d}}{\sqrt{n}}$$

where $1.96\,\sigma_{\Delta d}/\sqrt{n}$ is the 95% confidence bound of the estimate.
Table 28 outlines the use and significance of the calculated
relative bias.
Table 28: Considerations when Dealing with Systematic Bias

Item  Description               Considerations
1     Usage                     Systematic bias can be readily considered when validating an inspection by comparing it to other data sets. Caution should be used in adjusting for bias when selecting excavations, especially if adjustments result in lower depths.
2     Size of bias              Systematic bias may be considered significant based on some constant threshold (say 5 - 6% of NWT) and could be associated with tool detection threshold(s). Alternatively, bias may be viewed in the context of defect depth: if the deepest defect is 10%, a 5% bias may be significant; however, if the deepest defect is 40%, a 5% bias may be less material.
3     Statistical significance  Systematic bias may be considered statistically significant based on the size of the confidence interval $\pm 1.96\,\sigma_{\Delta d}/\sqrt{n}$. If $|\overline{\Delta d}| > 1.96\,\sigma_{\Delta d}/\sqrt{n}$, then the calculated bias is statistically significant at the 95% confidence level. A calculated bias that is not statistically significant should not be used to adjust the ILI reported depths.
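The significance test in item 3 is simple to apply; below is a minimal sketch (the function name and sample differences are hypothetical), assuming a normally distributed random error component as stated in A4.4.1.

```python
import math
from statistics import mean, stdev

def bias_estimate(delta_d):
    """Estimate systematic bias between two runs and test its significance.

    Returns (bias, half_width, significant), where half_width is the 95%
    confidence half-width 1.96 * sigma / sqrt(n). Sketch only; assumes the
    random error component is normally distributed.
    """
    n = len(delta_d)
    bias = mean(delta_d)
    half_width = 1.96 * stdev(delta_d) / math.sqrt(n)
    return bias, half_width, abs(bias) > half_width

# Hypothetical depth differences (% NWT) from matched anomalies:
delta_d = [1.8, 2.5, 0.9, 3.1, 2.2, 1.5, 2.8, 1.1, 2.0, 2.6]
bias, hw, significant = bias_estimate(delta_d)
print(f"bias = {bias:+.2f}% NWT, 95% bound = ±{hw:.2f}% NWT, significant = {significant}")
```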
A4.4.2. Random Error Criterion

Given an ILI run, we can use the reference measurements to
validate the inspection results. We would like to prove that the
tolerance, $T$, meets the tool specification for the ILI, which in
the examples is ±10% NWT, 80% of the time. However, we
cannot statistically prove that the tolerance is exactly 10%;
the best we can do is estimate the tolerance. If the target
tolerance is within the 95% confidence bounds of the
estimate, then we have validated the ILI accuracy.
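To make the criterion concrete, the sketch below (our reconstruction, not the exact A4 procedure) estimates the tolerance from the standard deviation of the depth differences and brackets it with a 95% chi-square confidence interval, which can then be compared against $T_{upper}$. It assumes negligible in-the-ditch measurement error.

```python
import math
from statistics import NormalDist, stdev

def tolerance_ci(delta_d, confidence_spec=0.80, alpha=0.05):
    """95% confidence interval for the ILI tolerance from depth differences.

    Sketch only: converts the sample standard deviation to a tolerance
    T = z * sigma (z = 1.2816 for an 80% two-sided specification) and
    bounds it with a chi-square interval on sigma. Assumes negligible
    in-the-ditch measurement error.
    """
    n = len(delta_d)
    s = stdev(delta_d)
    z = NormalDist().inv_cdf((1 + confidence_spec) / 2)  # 1.2816 for 80%

    def chi2_quantile(p, k):
        # Wilson-Hilferty approximation to the chi-square p-quantile, k df.
        zp = NormalDist().inv_cdf(p)
        return k * (1 - 2 / (9 * k) + zp * math.sqrt(2 / (9 * k))) ** 3

    k = n - 1
    lo = z * s * math.sqrt(k / chi2_quantile(1 - alpha / 2, k))
    hi = z * s * math.sqrt(k / chi2_quantile(alpha / 2, k))
    return z * s, (lo, hi)

# Hypothetical differences (% NWT); tolerance specification to validate: 10% NWT.
delta_d = [3.2, -5.1, 7.8, -2.4, 4.9, -6.7, 1.3, 8.0, -3.8, 5.5, -7.2, 2.1]
T_hat, (lo, hi) = tolerance_ci(delta_d)
print(f"estimated tolerance = {T_hat:.1f}% NWT, 95% CI = ({lo:.1f}, {hi:.1f})")
```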
A5. Opportunities for Future Refinement

Table 30 contains a summary of the key considerations for future development of this
procedure.
Table 30: Key Items to Consider for Future Refinement

Item No.  Key Consideration                 Recommendation
1         Standardization of ILI Reporting  Reporting of the activities prior to, during, and after the inspection should be standardized. Standardization of the reporting of these activities would result in greater consistency of the verification process.
2         Documentation of Procedures       Documentation of all checks should be required. Proper documentation is also indicative of the ILI vendor’s diligence in following established Standards and Guidelines.
3         Technology Specific Verification  Separate versions of the scorecard should be developed for MFL and UT inspection tools. Also, separate versions of the scorecard should be developed for liquid and gas pipelines.
4         Refinement of Scorecard           ILI vendors and operators should provide standardized reporting, and the scorecard should be refined to yield a numeric 0-10 score.
A5.1. Standardization of ILI Reporting

The purpose of this Guidance Document is to provide a standard procedure
by which operators may verify and validate ILI runs. The procedure is
intended to be independent of any specific operator, ILI vendor, or ILI
technology. Creating a standard procedure has been hampered by the lack
of standardized reporting by the ILI vendors.
Table 31 shows the variability in the data provided to CEPA for various lines.
The documentation of pre-run cleaning, for example, was inconsistent.
In some cases it was documented in the report (indicated by “yes” in
the table). In others, pre-run cleaning was not documented (indicated
by “no” in the table). In still others, the documentation indicated
that pre-run cleaning may not have been applicable (indicated by
“Not/App?” in the table). The significance of these differences is
not readily obvious.
An area for future consideration is the potential to standardize the
reporting of activities prior to, during, and after the inspection.
Standardization of the reporting of these activities would result in
greater consistency of the verification process.
A6. Scoring – Verification Process Scorecard Summary
Table 33: Verification Process Scorecard Summary
Item  Parameter  Stage  Score

1  Tool Selection  Pre-Run
   F - Tool not capable of detection or sizing of expected anomaly type(s)
   C - Tool capable of detecting anomaly types but limited sizing or detection abilities of expected anomaly type(s)
   P - Best available technology for detecting and sizing expected anomaly type(s) identified and used

2  Inspection System Data  Pre-Run
   F - Tool is experimental and there is no established history, or it has been demonstrated to have deficiencies in addressing the threat
   C - Same model of tool with minor differences (such as diameter) has a history of successful runs to assess the threat, or the specific model of tool has a history of successful runs to assess the threat for other operators, but results of those runs are not available
   P - Operator has firsthand knowledge of the performance capabilities of the tool and has several successful inspections using the tool

3  Planning  Pre-Run
   F - Key elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling not conducted
   C - Majority of elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling completed but undocumented
   P - All elements of Pipeline ILI Compatibility Assessment and Inspection Scheduling completed and documented

4  Function Checks  Pre-Run
   F - Significant function checks not passed
   C - Significant function checks passed but checks are undocumented
   P - All function checks passed and documented

5  Mechanical Checks  Pre-Run
   F - Significant mechanical checks not passed
   C - Significant mechanical checks passed but checks are undocumented
   P - All mechanical checks passed and documented

6  Procedure Execution (e.g. tool speed, pigging procedure, etc.)  In the Pipe
   F - Inspection not carried out as per inspection procedure with potential material impact to data quality
   C - Inspection not carried out as per inspection procedure but deviations are not material to data quality
   P - Inspection carried out as per inspection procedure

7  Mechanical Checks  Post-Run
   F - Significant tool wear, damage or debris with material impact to data
   C - Tool wear, damage or debris observed with no material impact to data
   P - Tool received in good mechanical condition (with no unexpected tool wear, damage or debris)

8  Function Check  Post-Run
   F - Significant function checks not passed
   C - Significant function checks passed but not documented; the proper functioning of the tool can be verified by other means throughout the length of the run
   P - Function checks passed and documented

9  Field Data Quality Check  Post-Run
   F - Tool unable to meet stated specifications due to significant lack of data integrity
   C - Tool unable to meet stated specifications but manageable through further analysis
   P - Tool able to meet stated specifications for entire length of run

10  Data Analysis Process – Quality Check  Post-Run
   F - Results of data analysis quality checks are not acceptable
   C - Significant data quality checks passed, but quality checks initially undocumented or reanalysis was required
   P - Data quality checks passed and documented