Friday, 18 May 2018

Common Mistakes in Method Validation and How to Avoid Them - Part 3: Accuracy


The validation of analytical methods is undoubtedly a difficult and complex task. Unfortunately this means that mistakes are all too common. As a trainer and consultant in this area I thought it might be useful to take a look at some common mistakes and how to avoid them. In this series of articles I will pick out some examples for discussion related to the method performance characteristics as listed in the current ICH guidance, ICH Q2(R1), namely: Specificity; Robustness; Accuracy; Precision; Linearity; Range; Quantitation limit; and Detection limit.
In previous articles I wrote about some common mistakes associated with ‘Specificity’ and 'Robustness'. This time I’ll take a look at ‘Accuracy’. The common mistakes that I have selected for discussion are:
1.       Not evaluating accuracy in the presence of the sample matrix components
2.       Performing replicate measurements instead of replicate preparations
3.       Setting inappropriate acceptance criteria
The definition of accuracy given in the ICH guideline is as follows: ‘The accuracy of an analytical procedure expresses the closeness of agreement between the value which is accepted either as a conventional true value or an accepted reference value and the value found.’ This closeness of agreement is determined in accuracy experiments and expressed as a difference, referred to as the bias of the method. The acceptance criterion for accuracy defines how large a bias you are willing to accept while still considering the method suitable for its intended purpose.
The term accuracy has also been defined by ISO to be a combination of systematic errors (bias) and random errors (precision) and there is a note about this in the USP method validation chapter, <1225>: ‘A note on terminology: The definition of accuracy in 1225 and ICH Q2 corresponds to unbiasedness only. In the International vocabulary of Metrology (VIM) and documents of the International Organization for Standardization (ISO), accuracy has a different meaning. In ISO, accuracy combines the concepts of unbiasedness (termed “trueness”) and precision.’
From the point of view of performing validation, the difference between the definitions doesn’t matter much in practice, since we usually calculate both bias and precision from the experimental data generated in accuracy experiments. Personally, I prefer the ISO definition of accuracy.
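The calculation of both quantities from the same accuracy data can be sketched in a few lines. This is a minimal illustration only; the nine measured values and the true value of 100.0 %w/w are invented for the example.

```python
import statistics

# Hypothetical accuracy data: results (%w/w) from nine separate
# preparations of a 'pseudo-sample' with a known true value.
true_value = 100.0
measured = [99.1, 100.4, 99.8, 100.9, 99.5, 100.2, 99.7, 100.6, 99.9]

mean_result = statistics.mean(measured)
# Systematic error (bias), expressed relative to the true value:
bias_pct = 100 * (mean_result - true_value) / true_value
# Random error (precision), expressed as relative standard deviation:
rsd_pct = 100 * statistics.stdev(measured) / mean_result

print(f"Mean result: {mean_result:.2f} %w/w")
print(f"Bias:        {bias_pct:+.2f} %")
print(f"RSD:         {rsd_pct:.2f} %")
```

The same nine preparations thus yield both the ICH-style accuracy figure (the bias) and a precision estimate (the RSD), which is why the ISO view of accuracy as trueness plus precision maps so naturally onto the experiment.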
Mistake 1: Not evaluating accuracy in the presence of the sample matrix components
Since the purpose of the accuracy experiments is to evaluate the bias of the method, the experiments that are performed need to include all the potential sources of that bias. This means that the samples which are prepared should be as close as possible to the real thing. If the sample matrix prepared for the accuracy experiments is not representative of the real sample matrix then a source of bias can easily be missed or underestimated.
TIP: The samples created for accuracy experiments should be made to be as close as possible to the samples which will be tested by the method. Ideally these ‘pseudo-samples’ will be identical to real samples except that the amount of the component of interest (the true value) is known. This can be very difficult for some types of sample matrix, particularly solids where the component of interest is present at low amounts (e.g., impurities determination).
For impurities analysis, it may be necessary to prepare the accuracy samples by using spiking solutions to introduce known amounts of material into the sample matrix. Although this carries the risk of ignoring the potential bias resulting from the extraction of the impurity present as a solid into a solution, there isn’t really a workable alternative.
Mistake 2: Performing replicate measurements instead of replicate preparations
Performing replicate preparations of accuracy ‘pseudo-samples’ allows a better evaluation of which differences in the data are due to the bias and which are due to the variability of the method, the precision. A minimum of 9 replicates is advised by the ICH guidance (e.g., three replicates at each of three concentration levels) and these should be separate preparations. For solids, this could be 9 separate weighings into 9 separate volumetric flasks, as per the method.
However, the preparation does depend on the nature of the sample matrix and the practicality of controlling the known value for the component of interest. As discussed above, sometimes in the case of impurities methods, solutions may be required for practical reasons even though the sample matrix exists as a solid. In this case 9 separate weighings do not offer more representative ‘pseudo-samples’ and thus a single stock solution for the impurity would probably be a better choice.
TIP: Assess the sample matrix and try to prepare separate replicates when possible so that the data produced is as representative as possible and includes typical sources of variability.
Mistake 3: Setting inappropriate acceptance criteria
As mentioned previously, the acceptance criterion for accuracy is based on how much bias you will allow in the results from the method. It is obviously better not to have any bias in a method, but there is always a certain amount of potential bias associated with the combination of the sample matrix, the level of the components of interest in the sample, and the instrumentation used for the measurement. For the method to be capable, the bias needs to be small compared with the specification for the result. For example, if a drug substance specification requires that there must be between 99 and 101 %w/w of the drug present, then a method which has a bias of 2% is not going to be acceptable.
TIP: Make sure that the acceptance criteria set for accuracy in method validation are compatible with the requirements for the method, and in particular, the specification for the test.
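The arithmetic behind the drug substance example above can be made explicit with a small sketch. The 99–101 %w/w limits and the 2% bias come from the text; the function name and the 0.5% comparison case are my own illustration.

```python
def bias_acceptable(true_value, bias_pct, spec_low, spec_high):
    """Return True if a result affected only by the stated bias
    would still fall inside the specification limits."""
    biased_result = true_value * (1 + bias_pct / 100)
    return spec_low <= biased_result <= spec_high

# Drug substance spec of 99-101 %w/w, true content 100 %w/w:
print(bias_acceptable(100.0, 2.0, 99.0, 101.0))  # 102 %w/w -> outside spec
print(bias_acceptable(100.0, 0.5, 99.0, 101.0))  # 100.5 %w/w -> inside spec
```

In practice the random error of the method eats into the available margin as well, so the tolerable bias is smaller still than this simple check suggests.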
References
1.       ICH Q2 (R1): Validation of Analytical Procedures: Text and Methodology, 2005, www.ich.org
2.       USP <1225> Validation of Compendial Methods, www.usp.org
In the next instalment, I will write about common validation mistakes for the method performance characteristic of precision. If you would like to receive the article direct to your inbox, then sign up for our eNewsletter. You will receive lots of helpful information and you can unsubscribe at any time. We never pass your information on to any third parties.
If you would like to learn more about method validation, and method transfer, then you may be interested in the 3 day course on the topic from Mourne Training Services Ltd. The course has two versions, one applied to small, traditional pharmaceutical molecules and one for large, biological/biotechnology derived molecules. Visit the MTS website for more information.

   

Wednesday, 25 April 2018

Common Mistakes in Method Validation and How to Avoid Them - Part 2: Robustness

A rather unfortunate mistake!
The validation of analytical methods is undoubtedly a difficult and complex task. Unfortunately this means that mistakes are all too common. As a trainer and consultant in this area I thought it might be useful to take a look at some common mistakes and how to avoid them. In this series of articles I will pick out some examples for discussion related to the method performance characteristics as listed in the current ICH guidance, ICH Q2(R1), namely: Specificity; Robustness; Accuracy; Precision; Linearity; Range; Quantitation limit; and Detection limit.

In the previous instalment I wrote about some common mistakes associated with ‘Specificity’. This time I’ll take a look at ‘Robustness’. The common mistakes that I have selected for discussion are:

1. Investigating robustness during method validation

2. Not investigating the right robustness factors

3. Not doing anything with the robustness results

The purpose of a robustness study is to find out as much as possible about potential issues with a new analytical method and thus how it will perform in routine use. Usually, we deliberately make changes in the method parameters to see if the method can still generate valid data. If it can, it implies that in routine use small variations will not cause problems. This definition is provided in the ICH guideline: “The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small, but deliberate variations in method parameters and provides an indication of its reliability during normal usage.”

There is another aspect to robustness that doesn’t neatly fit under this definition which applies to the performance of consumable items in the method, such as chromatography columns. The performance of the column when different batches of the same column packing are used may vary. Although column manufacturers aim for batch to batch reproducibility, most practitioners of HPLC will have come across at least one example of this problem. Another issue is the aging of the column: column performance generally decreases with age and at some stage the column will have to be discarded. Strictly speaking, these column challenges would actually come under the heading of intermediate precision, following the ICH guideline, but it makes much more sense to investigate them during method development as part of robustness.

The method validation guidelines from both ICH and FDA mention the importance of robustness and describe it as a method development activity, but they do not define whether it needs to be performed under a protocol with predefined acceptance criteria. Since the use of a protocol is a typical approach in most pharma companies, this brings me to my first common mistake associated with robustness.

Mistake 1: Investigating robustness during method validation


What I mean by this is that the robustness investigation is performed during the method validation, i.e. the outcome of the investigation is not known. I do not mean the approach where the robustness has already been fully investigated and then it is included as a section in the validation protocol for the sole purpose of generating evidence which can be included in the validation report.

If robustness is investigated during validation for the first time, the risk is that the method may not be robust. Any modifications to improve robustness may invalidate other validation experiments since they are no longer representative of the final method. It will of course depend on what modifications have to be made. As FDA suggests… “During early stages of method development, the robustness of methods should be evaluated because this characteristic can help you decide which method you will submit for approval.”

TIP: If for some reason robustness hasn’t been thoroughly evaluated in method development then investigate it prior to execution of the validation protocol using a specific robustness protocol. If any robustness issues are identified, these can be resolved prior to the validation. The nature of the robustness problems will determine whether the resolution is just a more careful use of words in the written method or if method parameters need to be updated. 

Mistake 2: Not investigating the right robustness factors


If you choose the wrong factors you may conclude that the method is robust when it isn’t. Typically what happens then is that there are a lot of unexpected problems when the method is transferred to another laboratory, and since transfer is a very common occurrence in pharma, this can be very expensive to resolve.

When choosing robustness factors it is tempting to read through the method and select all the numerical parameters associated with instrumentation. For example, when assessing HPLC methods there is a tendency to only look at the parameters of the instrument without consideration of the other parts of the method, such as the sample preparation. Unfortunately, sample preparation is an area where robustness problems often occur. Detailed knowledge of how the method works is required to identify the most probable robustness factors.

TIP: The most important factors for robustness are often those which were adjusted in method development. Review all the steps in the method to choose robustness factors and use a subject matter expert to help if necessary. 
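Once the factors are chosen, it can help to enumerate the experimental runs systematically rather than varying things ad hoc. The sketch below builds a full-factorial grid for three hypothetical HPLC factors; the factor names and levels are purely illustrative, not from any particular method.

```python
from itertools import product

# Hypothetical robustness factors for an HPLC method, each varied a
# small amount either side of its nominal value (levels illustrative).
factors = {
    "flow_mL_min":     [0.9, 1.0, 1.1],
    "column_temp_C":   [28, 30, 32],
    "mobile_phase_pH": [2.9, 3.0, 3.1],
}

# Build every combination of factor levels as one experimental run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} experimental runs")  # 3 x 3 x 3 = 27
print(runs[0])
```

A full-factorial grid grows quickly with the number of factors, which is why screening designs such as Plackett-Burman are commonly used in robustness studies to keep the run count manageable while still covering the factors of interest.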

Mistake 3: Not doing anything with the robustness results


The reason for investigating robustness is to gain knowledge about the method and to ensure that it can be kept under control during routine use. Very often robustness data is presented without any comments in the validation report and is not shared with the analysts using the method. This tick-box approach may be in compliance with regulatory guidance but it is not making the most of the scientific data available. The discussion of the method robustness in the validation report should be a very useful resource when the method needs to be transferred to another laboratory and will assist in the risk assessment for the transfer.

TIP: Review the robustness data thoroughly when it is available and ensure that there is a meaningful discussion of its significance in the validation report.

References:

1.       ICH Q2 (R1): Validation of Analytical Procedures: Text and Methodology, 2005, www.ich.org
2.       FDA Guidance for Industry: Analytical Procedures and Methods Validation for Drugs and Biologics, 2015, www.fda.gov


In the next instalment, I will write about common validation mistakes for the method performance characteristic of accuracy. If you would like to receive the article direct to your inbox, then sign up for our eNewsletter. You will receive lots of helpful information and you can unsubscribe at any time. We never pass your information on to any third parties.

If you would like to learn more about method validation, and method transfer, then you may be interested in the 3 day course on the topic from Mourne Training Services Ltd. The course has two versions, one applied to small, traditional pharmaceutical molecules and one for large, biological/biotechnology derived molecules. Visit the MTS website for more information.


   

Tuesday, 3 April 2018

MTS Recommends... What's New In MHRA's Revised Data Integrity Guidance — A Detailed Analysis


A very detailed summary of all the changes in this guidance from the original GMP guidance issued in 2015, and the draft GxP version released in 2016.

By Barbara Unger
Pharmaceutical Online, March 19, 2018

Tuesday, 20 March 2018

MTS Recommends... Method Transfer Video



In this great video from Waters, the importance of getting method transfer right, and the consequences of getting it wrong, are highlighted.

Saturday, 10 March 2018

Guidance on GxP Data Integrity from MHRA Finalised

The GxP Data Integrity guidance from MHRA has now been finalised.

From the MHRA Website:

"The guidance is intended to be a useful resource on the core elements of a compliant data governance system across all GxP sectors (good laboratory practice, good clinical practice, good manufacturing practice, good distribution practice and good pharmacovigilance practice).

It addresses fundamental failures identified by MHRA and international regulatory partners during GLP, GCP, GMP and GDP inspections; many of which have resulted in regulatory action."

If you need help on evaluating or improving data integrity in your laboratory then MTS can help. We offer auditing services to identify your data integrity issues, consultancy advice on how to assess data integrity risk, perform a gap analysis and implement remediation measures, and customised training courses to inform your staff about data integrity principles and problems so that they are fully engaged with your data governance systems.

Contact us to find out more about how MTS can help you with your data integrity requirements.

Read more about the new guidance on the MHRA website.
Click here for the finalised MHRA guidance.

 

Wednesday, 28 February 2018

Visit Dublin City for an MTS Course!


Dublin is a great location for our training courses; it allows you to experience a very special city. Our venue is located close to Dublin Airport, which makes it easy to get to, and also has great transport links into the city centre.

Read more about things to do in Dublin on VisitDublin.com, Dublin’s official tourism information website.

Monday, 26 February 2018

Common Mistakes in Method Validation and How to Avoid Them - Part 1: Specificity

(If you can't find the mistake, the answer is at the bottom of this post)

The validation of analytical methods is undoubtedly a difficult and complex task. Unfortunately this means that mistakes are all too common. As a trainer and consultant in this area I thought it might be useful to take a look at some common mistakes and how to avoid them. In this series of articles I will pick out some examples for discussion related to the method performance characteristics as listed in the current ICH guidance, ICH Q2(R1), namely: Specificity; Robustness; Accuracy; Precision; Linearity; Range; Quantitation limit; and Detection limit.

In this first instalment we will consider some mistakes associated with ‘Specificity’. This characteristic is evaluated for both qualitative and quantitative methods but the aim is different for each. For qualitative methods, the aim is to demonstrate that the method can provide the correct information, e.g., an identification method. For quantitative methods, the aim is to demonstrate that the final result generated by the method is not affected by any of the potential interferences associated with the method.

Generally, I find that mistakes relating to specificity arise from a basic lack of understanding about what is required to demonstrate that the method is satisfactory. I have selected the following three examples as being ones that I regularly encounter when advising people on method validation, during both consultancy and training courses.

1. Not setting appropriate acceptance criteria

2. Not investigating all the potential interferences

3. Not considering potential changes that could occur in the sample/method being tested

Mistake 1: Not setting appropriate acceptance criteria 

 

When the results of a validation study don’t comply with the acceptance criteria defined in the protocol, then either the method is not suitable for its intended use, or the acceptance criteria set in the protocol were inappropriate. I am often asked for help on how to explain why it’s okay that results did not meet the acceptance criteria, and not just for specificity. The usual reason for this problem is that generic acceptance criteria were used, typically predefined in an SOP, and no evaluation of their suitability to the method being validated was performed.

Example 1: An identification method by FTIR, which was based on a percentage match with reference material spectra in a database, was being validated. The validation failed because the acceptance criterion for the percentage match was set at 98% and the match in the validation study was always in the region of 97%. On investigation it was determined that the percentage match of 98% had no scientific justification; it was simply what had been used before. No investigation of the method had been performed prior to the validation.

Example 2: A chromatographic impurities method was being validated. The method validation SOP defined that impurity peaks should have a resolution of 1.5 and thus an acceptance criterion of 1.5 was set in the validation protocol. During the validation study, one of the impurity peaks had a resolution of 1.4. On review of the method development information, it was found that the resolution of this peak was always around 1.4 and the chromatography had been considered acceptable but this information had not made it into the validation protocol.
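For readers less familiar with the resolution figure in Example 2, it can be computed from retention times and baseline peak widths using the standard USP formula. The function below implements that formula; the retention times and widths are invented values chosen to give a resolution close to the 1.4 observed for the impurity pair.

```python
def usp_resolution(t1, t2, w1, w2):
    """USP resolution between two peaks: Rs = 2*(t2 - t1)/(w1 + w2),
    with retention times t and baseline peak widths w in the same units."""
    return 2 * (t2 - t1) / (w1 + w2)

# Illustrative retention times and widths (minutes) for two closely
# eluting peaks, giving a resolution like the one in Example 2:
rs = usp_resolution(t1=6.0, t2=6.7, w1=0.48, w2=0.52)
print(f"Rs = {rs:.1f}")  # prints Rs = 1.4
```

A value of 1.4 versus the conventional 1.5 threshold illustrates how close the two cases are, and why a blanket criterion copied from an SOP can fail a method that the developers had already judged acceptable.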

TIP: Review all the acceptance criteria defined in the validation protocol against what is known about the method. Assess whether the criteria are reasonable, in terms of the method capability and what is considered acceptable. The use of generic acceptance criteria can be a very useful strategy as long as they are used in a scientific manner by assessing what is known about the actual method being validated.

Mistake 2: Not investigating all the potential interferences 

 

In order to demonstrate that the final result generated by the method is not affected by potential interferences, it is essential that all the potential interferences are considered. This can sometimes be difficult for complex sample matrices so it is important to identify the constituents of the sample matrix as fully as possible. Additionally, it is easy to overlook other sources of interferences that may be introduced as part of the method such as solvents, buffers, derivatisation reagents, etc.

TIP: Carry out a thorough review of all potential interferences when designing the validation protocol, particularly if the sample matrix is complex in nature, or if the sample preparation involves the use of multiple reagents.

Mistake 3: Not considering potential changes that could occur in the sample/method being tested 

 

The potential interferences that are present in a sample matrix can change due to changes in the sample composition. The most common example of this situation is probably sample degradation. In situations where a method will be used for samples of different ages, such as in a stability programme, it is essential that this is taken into account during validation and that it is demonstrated that the method can be used for any sample which may require analysis.

This means that for some methods, particularly those which are considered to be stability indicating, the specificity section of the validation protocol should include experiments to gather evidence to prove that the method may be successfully used for stability analysis. For methods which analyse the degradation products it would be expected that forced degradation studies were performed during method development to allow the creation of a method that can separate all the components of interest. For other methods this may not have been necessary in method development but a forced degradation study may now be required as part of method validation to demonstrate that the method is stability indicating.

TIP: Consider the long term use of a method when designing the validation protocol. What samples will be tested and are there any anticipated changes that could occur to the samples that would affect the potential interferences for the method? If the method is to be used for stability testing, are there any additional requirements, such as a degradation study?


In the next instalment, I will write about common validation mistakes for the method performance characteristic of robustness. If you would like to receive the article direct to your inbox, then sign up for our eNewsletter. You will receive lots of helpful information and you can unsubscribe at any time. We never pass your information on to any third parties.

If you would like to learn more about method validation, and method transfer, then you may be interested in the 3 day course on the topic from Mourne Training Services Ltd. The course has two versions, one applied to small, traditional pharmaceutical molecules and one for large, biological/biotechnology derived molecules. Visit the MTS website for more information. We also offer a course on developing stability indicating HPLC methods that includes strategies for forced degradation studies.


Another amusing mistake! (Answer to puzzle at the top of this blog: the word 'the' is repeated in the question.)
 

Wednesday, 14 February 2018

MTS Website Revamp

The MTS website has been revamped to make it easier for our visitors to quickly find the information that they need. We've also added links to the website menu on the MTS blog. Visit the website now to find out about the training, consultancy and auditing services that we can offer.

 

Wednesday, 24 January 2018

MTS Recommends... Methods for Identifying Out-of-Trend Data in Analysis of Stability Measurements—Part II: By-Time-Point and Multivariate Control Chart


 "In Part I of this article series, the authors discussed the regression control chart method for identifying out-of-trend data in pharmaceutical stability studies. In Part II, the by-time-point method and the multivariate control chart method are investigated, and improved approaches are suggested. The method is illustrated using real data sets."

M. Mihalovits and S. Kemény, "Methods for Identifying Out-of-Trend Data in Analysis of Stability Measurements–Part II: By-Time-Point and Multivariate Control Chart," Pharmaceutical Technology 41 (12) 2017.

MTS Recommends... Methods for Identifying Out-of-Trend Data in Analysis of Stability Measurements–Part I: Regression Control Chart


"In Part I of this article series, the regression control chart method for identifying out-of-trend data in pharmaceutical stability studies is investigated, and an improved approach is suggested."

M. Mihalovits and S. Kemény, "Methods for Identifying Out-of-Trend Data in Analysis of Stability Measurements–Part I: Regression Control Chart," Pharmaceutical Technology 41 (11) 2017.

Monday, 8 January 2018

MTS Recommends... Data Integrity Metrics for Chromatography


Dec 01, 2017
By Mark E. Newton, R.D. McDowall
LCGC Europe
Volume 30, Issue 12, pg 679–685

"The authors discuss metrics for monitoring data integrity within a chromatography laboratory, from the regulatory requirements to practical implementation."