
Stephan O. Krause, PhD
WCBP 2017, Washington, DC

Strictly Confidential

24-26 January 2017

Strategies for Analytical Method Replacements for Approved Products

Outline

I. Introduction and Method Comparison Strategies

Replacement strategy background and rationale

General Strategies for Qualitative and Quantitative Methods

II. Analytical Method Comparison (AMC)

Case Studies (qualitative and quantitative methods)

III. Comparison and Implementation Considerations

The content and views expressed by the author/presenter are not necessarily the views of the organization he represents.

3

Analytical Method Comparability (AMC)

What is AMC?

– AMC is the demonstration of comparable ("equivalent or better") test method performance of a modified/new method.

– AMC should be demonstrated for methods replacing approved methods (in-house licensed, compendial, or otherwise recognized).

Why is it important?

– Continuous, suitable test method performance must be assured for patient safety/efficacy/quality (linking to licensed specs and clinical data).

– This also assures a data/quality continuum for the sponsor.

How exactly can we demonstrate AMC?

– Follow ICH E9 and the CPMP Points to Consider guidelines (detailed in PDA TR 57).

– Demonstrate "equal or better" by testing for non-inferiority, equivalence, or superiority, depending on assay type and need (risk).

– Compare particular test method performance criteria per ICH Q2(R1) (detailed in PDA TR 57).

Krause/PDA, 2012

4

Points to Consider for AMC Studies

Krause/PDA, 2012

[Figure: product/method life-cycle diagram spanning FTIH, POC, QTPP, BLA, and Approval. It covers CQA development, specification life-cycle management, CMC and tech transfer, the process, analytical activities (qualification, validation, optimization, method change, transfer, continuous verification, PPQ support, specs), manufacturing activities (PPQ lots, Mfg transfer, Mfg process change, process verification (CPV), global supply, dose change, delivery device), development of the control strategy, and strategic or tactical changes.]

5

Points to Consider for AMC Studies

Krause/PDA, 2012

Consider:

A. Impact on DS/DP Specifications

[Same product/method life-cycle diagram as the previous slide.]

6

Points to Consider for AMC Studies

Krause/PDA, 2012

Consider:

A. Impact on DS/DP Specifications

B. Impact on CPV program (CPPs, KPPs)

[Same product/method life-cycle diagram as the previous slide.]

7

Points to Consider for AMC Studies

Krause/PDA, 2012

Consider:

A. Impact on DS/DP Specifications

B. Impact on CPV program (CPPs, KPPs)

C. Impact of ROW Lot Release Testing

[Same product/method life-cycle diagram as the previous slide.]

8

Analytical Method Comparability Studies
Suggested Performance Characteristics and Statistics

ICH Q2(R1) categories compared: Identification and/or Pass/Fail Test (Qualitative); Limit Test (Qualitative); Limit Test (Quantitative); Potency or Content (Purity or Range) (Quantitative).

Accuracy
– Identification/Pass-Fail (Qualitative): Not required
– Limit Test (Qualitative): Not required
– Limit Test (Quantitative): Paired, matched t-test (TOST); some data could be at QL and/or at OOS level
– Potency or Content (Quantitative): TOST

Intermediate Precision
– Identification/Pass-Fail (Qualitative): Not required
– Limit Test (Qualitative): Not required
– Limit Test (Quantitative): Mixed linear model; F-test statistics
– Potency or Content (Quantitative): Mixed linear model; F-test statistics

Specificity
– Identification/Pass-Fail (Qualitative): Proportions test; Chi-squared (Fisher exact test) for number of correct observations
– Limit Test (Qualitative): Proportions test; Chi-squared (Fisher exact test) for number of correct observations
– Limit Test (Quantitative): Not required
– Potency or Content (Quantitative): Not required

Detection Limit
– Identification/Pass-Fail (Qualitative): Not required
– Limit Test (Qualitative): Depends on how DL was established; proportions test calculations may be used
– Limit Test (Quantitative): Not required
– Potency or Content (Quantitative): Not required

Quantitation Limit
– Identification/Pass-Fail (Qualitative): Not required
– Limit Test (Qualitative): Not required
– Limit Test (Quantitative): Depends on how QL was established
– Potency or Content (Quantitative): Not required

Krause/PDA, 2012

9

AMC Categories (from ICH E9/PDA TR 57)

Qualitative Methods

• Non-inferiority

• Superiority

Quantitative Methods

• Equivalence

Krause/PDA, 2012

10

Demonstrating Non-Inferiority

[Figure: non-inferiority testing diagram (PDA TR 57), current method = reference. The axis shows the mean difference (new minus current); results to the right of zero ("no difference") favor the new method ("better results"), results to the left favor the current method. Non-inferiority is demonstrated when the 90% confidence interval for the mean difference lies entirely above the pre-specified non-inferiority limit (−Delta), i.e., within the desirable direction/range.]

Krause/PDA, 2012

11

Demonstrating Superiority

[Figure: superiority testing diagram (PDA TR 57), current method = reference. Superiority is demonstrated when the 90% confidence interval for the mean difference lies entirely above the superiority limit (0), i.e., entirely in the desirable direction favoring the new method.]

Krause/PDA, 2012

12

Qualitative Methods
Demonstrating Non-Inferiority

A faster and technologically advanced method for (upstream in-process) sterility testing was validated and compared to the compendial EP/USP Sterility Test.

The current proportion (hit/miss ratio) for the USP/EP method is approx. 77% (23% false-negative results, with an SD of approx. 5.0%).

Using a Proportions Test, the non-inferiority comparison at the 95% confidence level (p=0.05) was chosen with a pre-specified non-inferiority limit of –10.0%.

Justification for Non-Inferiority Model and Limit:

Non-inferiority, equivalence, and superiority are all acceptable outcomes.

The increased testing frequency (daily, n=7 per week) and faster results for the new sterility method, versus twice weekly (n=2 per week) for the EP/USP Sterility Test, significantly increase the likelihood of detecting organisms with the new method.

The −10.0% limit versus the compendial (current) method was set and justified based on the compendial method's performance (2 × EP/USP method standard deviations: 2 × 5.0% = 10.0%) for the detection of (pooled) reference microbial organisms and/or plant isolates.

Krause/PDA, 2012

13

Demonstrating Non-Inferiority
Results for the Non-Inferiority Test: Candidate Method vs. USP Sterility

Krause/PDA, 2012

Method      Positives   Total Samples (n)   Positives-to-Fail Ratio
Candidate   225         300                 0.75 (75%)
EP/USP      232         300                 0.77 (77%)

Statistical Results

Difference = p (new method) - p (EP/USP)

Estimate for difference: -0.023 (-2.3%)

95% lower confidence interval limit for difference: -0.080 (-8.0%) (Limit = -10.0%)
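As a sanity check, the sketch below reproduces the point estimate and the one-sided lower confidence bound for the difference in proportions using a normal-approximation (Wald-type) calculation. The deck does not state which interval method was actually used, so treat this as an illustrative reconstruction rather than the presenter's computation.

```python
import math

def non_inferiority_proportions(x_new, n_new, x_ref, n_ref,
                                ni_limit=-0.10, alpha=0.05):
    """One-sided non-inferiority check for a difference in proportions.

    Normal-approximation (Wald-type) lower confidence bound; the deck does
    not specify the exact interval method, so this is only an illustration.
    """
    p_new = x_new / n_new          # candidate method hit rate
    p_ref = x_ref / n_ref          # compendial (reference) hit rate
    diff = p_new - p_ref           # estimate of p(new) - p(ref)

    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    z = 1.6449                     # one-sided 95% normal quantile
    lower = diff - z * se          # one-sided lower confidence bound

    return diff, lower, lower > ni_limit

# Values from the slide: Candidate 225/300, EP/USP 232/300, limit = -10.0%
diff, lower, passed = non_inferiority_proportions(225, 300, 232, 300)
print(f"difference = {diff:+.3f}, lower 95% bound = {lower:+.3f}, "
      f"non-inferior = {passed}")
# -> difference ≈ -0.023, lower bound ≈ -0.08 (slide: -0.080), non-inferior = True
```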

14

Demonstrating Non-Inferiority

[Figure: the non-inferiority diagram shown earlier, now with the pre-specified limit of −10.0% marked (PDA TR 57, current method = reference). The 90% confidence interval for the mean difference lies entirely above −10.0% and within the desirable direction/range, so non-inferiority of the new method is demonstrated.]

Krause/PDA, 2012

15

Demonstrating Superiority

From the previous non-inferiority example: when the relative testing frequency (n=7 per week for the new method versus n=2 per week for the compendial method) is integrated into the comparison study, superiority of the new method can be demonstrated.

Krause/PDA, 2012

16

Demonstrating Superiority

Results

Candidate Method (7x) vs. EP/USP Sterility (2x):

Sample      Positives   Total   Probability   95% CI for Probability
Candidate   225         300     0.9999        0.9997 – 1.0000
EP/USP      232         300     0.947         0.921 – 0.967

Krause/PDA, 2012

Results/Conclusions:

Superiority at the 95% confidence level could be demonstrated because the new method's 95% confidence interval (0.9997–1.0000) for the positive-to-fail probability (0.9999) lies entirely to the right of the 95% confidence interval (0.921–0.967) of the compendial method's positive-to-fail probability (0.947).

The superiority test was passed with a much greater relative margin than the non-inferiority test. This is a good example of why we should always consider upfront which comparison study to select and how to defend our strategy in the regulatory submission.
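The weekly probabilities in the table follow from combining the per-test hit rate with the number of tests per week, i.e. P(at least one detection) = 1 − (1 − p)^n. The sketch below reproduces those figures and transforms a Wilson confidence interval for the per-test proportion onto the weekly scale; the deck does not say which interval method was used, so the confidence limits here only approximately match the slide.

```python
import math

Z = 1.959964  # two-sided 95% normal quantile

def wilson_ci(successes, n, z=Z):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def weekly_detection(successes, n, tests_per_week):
    """Probability of >=1 detection per week: 1 - (1 - p)^k, with 95% CI."""
    p = successes / n
    lo, hi = wilson_ci(successes, n)
    f = lambda q: 1 - (1 - q) ** tests_per_week
    return f(p), f(lo), f(hi)

# Candidate method run daily (7x/week), EP/USP run twice weekly (2x/week)
for name, x, n, k in [("Candidate", 225, 300, 7), ("EP/USP", 232, 300, 2)]:
    prob, lo, hi = weekly_detection(x, n, k)
    print(f"{name}: P(detect/week) = {prob:.4f}, 95% CI ~ {lo:.4f} - {hi:.4f}")
# Candidate: ~0.9999 (CI ~0.9998-1.0000); EP/USP: ~0.949 (CI ~0.923-0.967)
```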

17

Superiority of New Method (vs. Current/Compendial) Demonstrated

Krause/PDA, 2012

[Figure (not drawn to scale): the USP/EP method's weekly positive probability of 94.7% (95% CI 92.1%–96.7%) plotted against the candidate method's 99.99% (95% CI 99.97%–100.00%); the two confidence intervals do not overlap, and the candidate method's interval lies entirely in the "better" direction.]

18

Demonstrating Equivalence

[Figure: equivalence testing diagram (PDA TR 57), current method = reference. Equivalence limits of −0.50% and +0.50% bracket zero ("no difference"); results to the left mean the new method gives lower results, to the right higher results. Equivalence is demonstrated when the 90% confidence interval for the mean difference falls entirely within the ±0.50% limits.]

Krause/PDA, 2012

19

Demonstrating Equivalence
Introduction

It was decided to develop and validate a capillary electrophoresis (CE) method to replace a current SDS-PAGE electrophoretic method. The method performance characteristics (quantitative limit test) are compared:

- Accuracy ("matching") is directly compared through product sample testing (release and stability).

- (Intermediate) precision is directly compared through (recent) assay control comparison.

- Quantitation limits are compared historically from validation studies.

For accuracy/matching: a delta of ±0.50% was chosen for the equivalence category between the two methods' impurity levels, based on the analysis of historical data with respect to the current specifications (for SDS-PAGE). Both methods were run simultaneously (side-by-side), and the resulting n=30 paired reported results were compared by two-sided, matched-paired t-test statistics with pre-specified equivalence limits of ±0.50% (% = reported percent, not relative percent).

Krause/PDA, 2012
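A minimal sketch of the paired equivalence (TOST) evaluation described above, using hypothetical paired results in place of the study data (which are not given in the deck): two one-sided paired t-tests against the ±0.50% limits, which is equivalent to checking that the 90% confidence interval for the mean paired difference lies within ±0.50%.

```python
import numpy as np
from scipy import stats

def paired_tost(current, new, delta=0.50, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired results.

    Equivalence is concluded if both one-sided p-values are < alpha,
    i.e. the 90% CI of the mean difference sits inside [-delta, +delta].
    """
    d = np.asarray(new, float) - np.asarray(current, float)
    n, mean, se = len(d), d.mean(), d.std(ddof=1) / np.sqrt(len(d))

    # H0: mean diff <= -delta  vs  H1: mean diff > -delta
    p_lower = stats.t.sf((mean + delta) / se, df=n - 1)
    # H0: mean diff >= +delta  vs  H1: mean diff < +delta
    p_upper = stats.t.cdf((mean - delta) / se, df=n - 1)

    t_crit = stats.t.ppf(1 - alpha, df=n - 1)
    ci90 = (mean - t_crit * se, mean + t_crit * se)
    return mean, ci90, max(p_lower, p_upper) < alpha

# Hypothetical n=30 paired impurity results (%), for illustration only
rng = np.random.default_rng(0)
sds_page = rng.normal(1.20, 0.15, 30)          # current method results
ce = sds_page + rng.normal(-0.10, 0.10, 30)    # new method, small bias

mean_diff, ci90, equivalent = paired_tost(sds_page, ce)
print(f"mean difference = {mean_diff:+.3f}%, 90% CI = "
      f"({ci90[0]:+.3f}%, {ci90[1]:+.3f}%), equivalent = {equivalent}")
```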


21

Evaluating Process Capability
DP Release and Stability

[Figure: purity (%) by the current method, from 97.0% to 100.0%, plotted against time (years 1–4). Historical DP release results (n=40) are shown as a mean ±3 SDs band; pooled stability data from n=6 DP lots show a release-to-EOSL loss of 0.60%, with the predicted release−EOSL difference built from long-term assay variation plus pooled slope uncertainty (n=6 DP). The DP end-of-shelf-life (EOSL) specification is NLT 97.0%.]

22

Evaluating Process Capability
DP Release and Stability

[Figure: the same DP release/stability chart, annotated with process capability: historical DP release results (n=40, mean ±3 SDs) leave a 0.5% margin at CpK = 1.00 (−3 SDs); predicted DP stability (n=6) loses 0.6% over 4 years, leaving approximately 0.4% margin to the DP EOSL specification.]

23

Setting Risk-Based AMC Acceptance Criteria
Options/Examples

[Figure: same DP release/stability capability chart as the previous slide (DP release mean ±3 SDs, 0.50% release margin at CpK = 1.00 (−3 SDs), 0.60% 4-year loss, 0.40% EOSL margin, DP EOSL specification).]

Options/Examples for Equivalence Limit(s) for 90% CI for Mean Difference(s):

1. Symmetrical limit(s) based on current mfg/testing capability: DP Release – CpK 1.00 = ±0.50%
   - Pool release and stability data (if possible; may need to normalize data)
   - Evaluate separately (e.g., for release: ±0.50%; for EOSL: ±0.40%)
   - Accept stability comparison results if < 0.40% different (90% lower CI)

2. Asymmetrical limit(s): −0.50% (protect manufacturer) and < +0.50% (protect patient)
   - ... etc.
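To make the capability-based derivation of option 1 concrete, here is a small sketch using assumed illustrative numbers (the release mean, SD, and release specification are not stated explicitly in the deck and are hypothetical here): the allowable downward shift in reported results is the distance between the current mean − 3·SD point and the limit that must still be met, i.e., the release specification for the release margin, or the EOSL specification plus the expected 4-year stability loss for the EOSL margin.

```python
def capability_margin(mean, sd, lower_limit, expected_loss=0.0):
    """Allowable downward shift (%) before the mean - 3*SD point reaches
    the limit that must still be met (optionally after an expected loss).

    CpK = 1.00 on the lower side means (mean - lower_limit) = 3*SD, so the
    available margin is (mean - 3*SD) - (lower_limit + expected_loss).
    """
    return (mean - 3.0 * sd) - (lower_limit + expected_loss)

# Assumed illustrative values (hypothetical except where noted):
release_mean = 98.5      # historical DP release mean purity, % (assumed)
release_sd = 0.167       # historical DP release SD, % (assumed)
release_spec = 97.5      # hypothetical DP release specification, NLT %
eosl_spec = 97.0         # DP EOSL specification, NLT 97.0% (from the deck)
four_year_loss = 0.60    # pooled stability loss over shelf life, % (deck)

release_margin = capability_margin(release_mean, release_sd, release_spec)
eosl_margin = capability_margin(release_mean, release_sd, eosl_spec,
                                expected_loss=four_year_loss)
print(f"release equivalence margin ~ ±{release_margin:.2f}%")  # ~0.50%
print(f"EOSL-protecting margin     ~  {eosl_margin:.2f}%")     # ~0.40%
```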

24

Setting Risk-Based AMC Acceptance Criteria
Symmetrical Margins (for Release)

[Same capability chart and equivalence-limit options as the previous slide, focusing on the symmetrical ±0.50% release margin.]


Demonstrating Equivalence

[Figure: same equivalence-testing diagram as before (PDA TR 57, current method = reference), with ±0.50% equivalence limits; equivalence is demonstrated when the 90% confidence interval for the mean difference falls entirely within the limits.]

Equivalence Demonstrated

Krause/PDA, 2012

27

AMC Study Results
Symmetrical Limits (for Release)

[Figure: purity (%) versus time (years 1–4) for both methods, each shown as mean ±3 SDs, with N=30 DP release results per method. Current method: average of n=6 DP lots, release-to-EOSL loss = 0.60%. New method: average of n=3 DP lots, release-to-EOSL loss = 0.90%. Mean difference = −0.25% at release (R) and −0.55% for release + EOSL. DP EOSL specification NLT 97.0%.]

28

AMC Study Results
Symmetrical Limits (for Release)

[Same chart as the previous slide, with an added annotation comparing the two methods' process capabilities (one method's CpK > the other's).]

29

Equivalence of New Method Demonstrated

Krause/PDA, 2012

[Figure: equivalence diagram with limits at −Delta and +Delta around zero ("no difference"); new method "lower results" to the left, "higher results" to the right. Five example 90% confidence intervals for the mean difference illustrate the possible outcomes, depending on where each interval falls relative to zero and the equivalence limits: passes equivalence (stat. different), passes equivalence (stat. not different), passes equivalence (stat. different), equivalence unclear (stat. different), and fails equivalence (stat. different).]

30

Points to Consider for AMC Studies and New Method Implementation

• The comparison category should be justified.
  – For example, a non-inferiority test may be suitable if all outcomes (non-inferiority, equivalence, and superiority) are acceptable, and if the new method is superior in other aspects such as faster test results and/or increased sampling/testing.

• The pre-specified maximum allowable difference(s) should be justified. The difference limit(s) should strike a balance among (potentially) opposing incentives:
  – Impact on patient and/or manufacturing.
  – Passing AMC results as "comparable" when they are not, and vice versa.

Krause/PDA, 2012
