Tuesday, 17 July 2018

5 Golden Rules for Effective (and Inspection Ready) OOS Investigations

The investigation of out of specification (OOS) results is a regulatory requirement in a GMP laboratory, and these investigations are intensively scrutinised by health authority inspectors. The purpose of this article is to provide 5 ‘Golden Rules’ that will help ensure investigations are both effective and inspection ready. Rather than use the term OOS, the article will use out of expectation (OOE) throughout; this should be interpreted as a catch-all term covering all OOS, out of trend (OOT) and atypical results, since all of these should be investigated in a similar way.

OOE investigations are crucial in analytical labs for two reasons. The first is fairly obvious: if a reportable result is not as expected then it is important to find out whether it is a true result or whether the lab has made a mistake. The second is less widely appreciated, namely that how a lab approaches the investigation of OOE results is an indicator of how effective it is. A high number of lab errors may point to underlying problems, and the quality of the investigations reflects the lab’s scientific know-how and commitment to continuous improvement.

Rule #1: Do it because you want to, not because you have to
An investigation into an OOE result is required by cGMP, but it is also a great opportunity for improvement in the lab. Investigations driven by compliance rather than science are easy to spot and seldom stand up to scrutiny. Typical signs include:
  • A formulaic approach, without due consideration of the available evidence, where the root cause appears to be pre-selected rather than identified as a result of the investigation. (See Rule #2)
  • Use of imprecise language leading to claims that hypotheses are proven when in fact they are just shown to be probable. (See Rule #3)
  • The investigation has been closed too soon and the true root cause(s) have not been identified. (See Rule #4)
  • Inadequate corrective and preventative actions (CAPAs) are generated which don’t address the problem(s) and root cause(s) of the OOE result, leading to repeat occurrences of the same issues. (See Rule #5)
Ultimately, an investigation is required anyway, so it makes sense to take full advantage of the opportunity for improvement that it presents.

Rule #2: Always follow the evidence
Poor OOE investigations will not fully consider the available evidence. Often this is because the investigator is not suitably qualified and experienced to lead the investigation. They may jump to conclusions that are not supported by the evidence, or they may investigate irrelevant hypotheses. A good checklist can be extremely helpful for gathering evidence relating to an OOE, and is a very popular approach, but it should act as a useful tool, not a crutch. Unfortunately, checklists can become something of a tick-box exercise and, combined with an SOP which contains ‘examples’ of hypotheses, may lead to the same formulaic approach being used for hypothesis testing in every investigation. Scientific rigour is essential in OOE investigations and the people involved need to be competent and capable for the task.

Rule #3: Say what you mean, precisely
The language used in an OOE investigation report needs to be precise. There is a big difference between a lab error that is ‘proven’ and one that is ‘probable’. The terms chosen need to reflect the strength of the available evidence. Words like ‘eliminate’, ‘proven’, and ‘demonstrated’ should be used only if the evidence is definitive. When evidence is present but indefinite, then words such as ‘suggests’, ‘likely’ and ‘probable’ are more appropriate. Well-written reports, with precise use of words, will provide a scientific record of the investigation that is clear and understandable, even years after the event.

Rule #4: It’s not over till it’s over

Once the reason for the OOE result has been found, the temptation is to close out the investigation as quickly as possible, particularly when a lab error has been identified and the batch being tested now needs to be released. However, the reason for the OOE result is not the same as the underlying root cause, and often there is more than one root cause which needs to be fully investigated if recurrence is to be prevented. An example could be where an inaccurate assay value was found to be due to a problem with a reference standard. Although the OOE investigation has correctly determined that the reference standard caused the OOE result, the root cause has not been identified. Further investigation is needed to find out why the reference standard became compromised, whether just this standard is affected or all the reference standards are now suspect, and whether the reference standard management system is fit for purpose.

Rule #5: Don’t let it happen again

The CAPAs that come out of an OOE investigation should meaningfully address the underlying root causes of the OOE result so that appropriate corrective and preventative actions may be implemented. Retraining the analyst is rarely enough to achieve this, yet it is probably one of the most common CAPAs when a lab error is identified. Rather than telling the analyst to ‘get it right’ next time, it makes more sense to look at the initial training that was delivered, discover whether it was adequate, and consider whether other analysts who underwent the same training might make the same mistake. Repeat instances of the same problem are a clear indication that a lab has not embraced an opportunity for improvement. Good labs will trend CAPAs and assess them for effectiveness.
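
As an illustration of what trending CAPAs could look like in practice, here is a minimal Python sketch which counts investigations by root-cause category so that repeat problems stand out; the records, field names and category names are all hypothetical.

# Minimal sketch: trend CAPAs by root-cause category (hypothetical records).
from collections import Counter

capas = [
    {"id": "CAPA-001", "root_cause": "reference standard handling"},
    {"id": "CAPA-002", "root_cause": "analyst training"},
    {"id": "CAPA-003", "root_cause": "reference standard handling"},
]

trend = Counter(c["root_cause"] for c in capas)
for cause, count in trend.most_common():
    print(f"{cause}: {count}")
# Any root cause appearing more than once suggests earlier CAPAs were not effective.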


Following these five rules will ensure that OOE investigations are conducted in a scientific and meaningful manner, with each instance providing a genuine opportunity for improvement. Hypothesis testing will relate to the available evidence and be interpreted with clarity, the true underlying root causes will be identified, and appropriate CAPAs will be put in place to correct the problem and prevent recurrence.




Monday, 18 June 2018

Update of ICH Q2 'Validation of Analytical Procedures' Announced

At the ICH meeting in Kobe, Japan, in June, it was decided that the ICH guidance 'Q2(R1): Validation of Analytical Procedures: Text and Methodology' would be revised and that an additional guidance, Q14, would be prepared on the topic of Analytical Procedure Development. Since the method validation guidance dates back to 1996, it is good news that an update is in the pipeline, although it may be some time before the new version is ready. It will also be very interesting to see what the new method development guidance suggests.


Friday, 18 May 2018

Common Mistakes in Method Validation and How to Avoid Them - Part 3: Accuracy


The validation of analytical methods is undoubtedly a difficult and complex task. Unfortunately, this means that mistakes are all too common. As a trainer and consultant in this area, I thought it might be useful to take a look at some common mistakes and how to avoid them. In this series of articles I will pick out some examples for discussion related to the method performance characteristics listed in the current ICH guidance, ICH Q2(R1), namely: Specificity; Robustness; Accuracy; Precision; Linearity; Range; Quantitation limit; and Detection limit.
In previous articles I wrote about some common mistakes associated with ‘Specificity’ and ‘Robustness’. This time I’ll take a look at ‘Accuracy’. The common mistakes that I have selected for discussion are:
1. Not evaluating accuracy in the presence of the sample matrix components
2. Performing replicate measurements instead of replicate preparations
3. Setting inappropriate acceptance criteria
The definition of accuracy given in the ICH guideline is as follows: ‘The accuracy of an analytical procedure expresses the closeness of agreement between the value which is accepted either as a conventional true value or an accepted reference value and the value found.’ This closeness of agreement is determined in accuracy experiments and expressed as a difference, referred to as the bias of the method. The acceptance criterion for accuracy defines how big you are going to let the bias be and still consider the method suitable for its intended purpose.
The term accuracy has also been defined by ISO to be a combination of systematic errors (bias) and random errors (precision) and there is a note about this in the USP method validation chapter, <1225>: ‘A note on terminology: The definition of accuracy in 1225 and ICH Q2 corresponds to unbiasedness only. In the International vocabulary of Metrology (VIM) and documents of the International Organization for Standardization (ISO), accuracy has a different meaning. In ISO, accuracy combines the concepts of unbiasedness (termed “trueness”) and precision.’
From the point of view of performing validation, the distinction between the two definitions matters little in practice; we usually calculate both bias and precision from the experimental data generated in accuracy experiments. Personally, I prefer the ISO definition of accuracy.
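
To make the calculations concrete, here is a minimal Python sketch, not taken from the guidance, which computes the bias (accuracy in the ICH sense) and the precision (%RSD) from a set of replicate accuracy results; the true value and the measured values are hypothetical.

# Minimal sketch: bias and precision from replicate accuracy results.
# All numbers are hypothetical.
true_value = 100.0  # known amount in the 'pseudo-sample', e.g. %w/w
measured = [99.1, 98.7, 99.4, 98.9, 99.2, 99.0, 98.8, 99.3, 99.1]  # 9 separate preparations

n = len(measured)
mean = sum(measured) / n
bias = mean - true_value                 # systematic error (accuracy in the ICH sense)
recovery = 100.0 * mean / true_value     # often reported as % recovery
std_dev = (sum((x - mean) ** 2 for x in measured) / (n - 1)) ** 0.5
rsd = 100.0 * std_dev / mean             # random error (precision) as %RSD

print(f"mean = {mean:.2f}, bias = {bias:.2f}, recovery = {recovery:.1f}%, RSD = {rsd:.2f}%")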
Mistake 1: Not evaluating accuracy in the presence of the sample matrix components
Since the purpose of the accuracy experiments is to evaluate the bias of the method, the experiments that are performed need to include all the potential sources of that bias. This means that the samples which are prepared should be as close as possible to the real thing. If the sample matrix prepared for the accuracy experiments is not representative of the real sample matrix then a source of bias can easily be missed or underestimated.
TIP: The samples created for accuracy experiments should be made to be as close as possible to the samples which will be tested by the method. Ideally these ‘pseudo-samples’ will be identical to real samples except that the amount of the component of interest (the true value) is known. This can be very difficult for some types of sample matrix, particularly solids where the component of interest is present at low amounts (e.g., impurities determination).
For impurities analysis, it may be necessary to prepare the accuracy samples by using spiking solutions to introduce known amounts of material into the sample matrix. Although this risks overlooking any bias arising from the extraction of an impurity present as a solid into solution, there isn’t really a workable alternative.
Mistake 2: Performing replicate measurements instead of replicate preparations
Performing replicate preparations of accuracy ‘pseudo-samples’ allows a better evaluation of which differences in the data are due to the bias and which are due to the variability of the method, the precision. A minimum of 9 determinations (e.g., 3 concentration levels with 3 replicate preparations each) is advised by the ICH guidance, and these should be separate preparations. For solids, this could be 9 separate weighings into 9 separate volumetric flasks, as per the method.
However, the preparation does depend on the nature of the sample matrix and the practicality of controlling the known value for the component of interest. As discussed above, sometimes in the case of impurities methods, solutions may be required for practical reasons even though the sample matrix exists as a solid. In this case, 9 separate weighings do not produce more representative ‘pseudo-samples’, and a single stock solution for the impurity would probably be a better choice.
TIP: Assess the sample matrix and try to prepare separate replicates when possible so that the data produced is as representative as possible and includes typical sources of variability.
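
As an illustration of how such replicates might be laid out, here is a small Python sketch of a 3 levels by 3 separate preparations design for an impurity spiked around its specification level; the levels and the specification value are hypothetical.

# Minimal sketch: a 3 levels x 3 preparations accuracy design (hypothetical values).
spec_level = 0.5               # hypothetical impurity specification, %w/w
levels = [0.8, 1.0, 1.2]       # fractions of the specification level to spike at
replicates = 3                 # separate preparations per level

for level in levels:
    for prep in range(1, replicates + 1):
        spike = level * spec_level
        print(f"level {level:.0%} of spec, preparation {prep}: spike {spike:.2f} %w/w")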
Mistake 3: Setting inappropriate acceptance criteria
As mentioned previously, the acceptance criterion for accuracy is based on how much bias you will allow in the results from the method. It is obviously better not to have any bias at all, but there is always a certain amount of potential bias associated with the combination of the sample matrix, the level of the components of interest in the sample, and the instrumentation used for the measurement. For the method to be capable, the bias needs to be small compared with the specification window for the result. For example, if a drug substance specification requires that there must be between 99 and 101 %w/w of the drug present, then a method which has a bias of 2% is not going to be acceptable.
TIP: Make sure that the acceptance criteria set for accuracy in method validation are compatible with the requirements for the method, and in particular, the specification for the test.
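
To show the arithmetic behind the 99 to 101 %w/w example, here is a small Python sketch comparing a method's bias with the room available inside a two-sided specification; the numbers are hypothetical.

# Minimal sketch: is the method's bias compatible with the specification?
spec_low, spec_high = 99.0, 101.0          # specification limits, %w/w
half_window = (spec_high - spec_low) / 2   # room around nominal: +/- 1 %w/w
method_bias = 2.0                          # bias found in accuracy experiments, %w/w

if abs(method_bias) >= half_window:
    print("Not capable: the bias alone exceeds the specification window")
else:
    print(f"Bias consumes {abs(method_bias) / half_window:.0%} of the available window")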
References
1. ICH Q2(R1): Validation of Analytical Procedures: Text and Methodology, 2005, www.ich.org
2. USP <1225> Validation of Compendial Methods, www.usp.org
In the next instalment, I will write about common validation mistakes for the method performance characteristic of precision. If you would like to receive the article direct to your inbox, then sign up for our eNewsletter. You will receive lots of helpful information and you can unsubscribe at any time. We never pass your information on to any third parties.
If you would like to learn more about method validation and method transfer, then you may be interested in the 3-day course on the topic from Mourne Training Services Ltd. The course has two versions: one applied to small, traditional pharmaceutical molecules and one for large, biological/biotechnology derived molecules. Visit the MTS website for more information.
