Monday, 21 September 2015

Help on: Where do the Acceptance Criteria used in Method Validation Come From?

Do you have any problems relating to analytical chemistry for pharmaceuticals or training? Send your questions to the MTS helpdesk using our contact form.

"I know that for an assay method, typically the accuracy recoveries should be between 98 to 102% and the precision, expressed as %RSD, should be less than 2% but where do these values come from?"

"A good place to start when you want to understand the significance of method validation acceptance criteria is to consider what the acceptance criteria actually mean. It is a way of expressing the amount of error that you are prepared to accept in the result generated by the method, or to put it another way, how far from the actual value would you still consider the result to be a reasonable estimation.

So how much error will you allow? Obviously you want the error to be as small as possible but it will depend on what is practically achievable. In a modern analytical laboratory, error is minimised by the competent use of suitable equipment. Examples include: analytical balances to minimise error during weighing operations; volumetric glassware to minimise error in solution preparation; and instrument maintenance and calibration to minimise error in measurements.

Since these approaches are common to all laboratories, the practically achievable amount of error is fairly constant and leads to the example you quoted: "for an assay method, typically the accuracy recoveries should be between 98% and 102% and the precision, expressed as %RSD, should be less than 2%".
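As a rough numerical illustration, the two criteria can be checked against a set of replicate results. The sketch below uses entirely made-up recovery values and Python's standard library; it is not part of any regulatory guidance, just arithmetic:

```python
import statistics

# Hypothetical replicate recoveries (%) from an accuracy study
recoveries = [99.1, 100.4, 98.7, 101.2, 99.8, 100.1]

mean_recovery = statistics.mean(recoveries)
# %RSD = (sample standard deviation / mean) x 100
rsd = statistics.stdev(recoveries) / mean_recovery * 100

# Typical assay acceptance criteria quoted above
accuracy_ok = 98.0 <= mean_recovery <= 102.0
precision_ok = rsd < 2.0

print(f"Mean recovery: {mean_recovery:.1f}%  %RSD: {rsd:.2f}%")
print("Meets typical assay criteria:", accuracy_ok and precision_ok)
```

Note that the sample standard deviation (`statistics.stdev`, with n-1 in the denominator) is the usual choice for a small set of replicates.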

However, the achievable level of error also depends on the complexity of both the sample preparation and the measurement, since these may introduce additional sources of error. For example, when sample preparation involves a difficult extraction, such as extracting a drug from a cream or ointment, the level of error may be higher than for simply dissolving the drug, and the typical acceptance criteria may not be achievable. One way to deal with this is to accept the larger error but increase the number of replicate samples to gain higher confidence in the result. The measurement itself may also be subject to higher levels of error in some assays. For example, when using UV absorbance to measure a drug molecule with a poor chromophore, the error may be higher than for a drug molecule with a strong chromophore.
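The point about replication can be shown numerically: under the usual assumption of independent random error, the standard error of the mean of n replicates falls in proportion to the square root of n, so averaging more replicates tightens the confidence in the reported result. A minimal sketch (the 3% standard deviation is purely illustrative, not a recommended figure):

```python
import math

# Hypothetical method with higher variability: SD of a single determination, in %
single_result_sd = 3.0

# Standard error of the mean shrinks as 1/sqrt(n) with n replicates
for n in (1, 3, 6):
    sem = single_result_sd / math.sqrt(n)
    print(f"n={n}: standard error of the reported mean ~= {sem:.2f}%")
```

Going from a single determination to six replicates in this sketch cuts the standard error by a factor of about 2.4, which is the sense in which extra replication buys back confidence when the per-determination error is larger.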

In method development all sources of error should be considered and minimised where possible so that the results generated by the method are the best estimate of the true value. In method validation the practically achievable level of error is compared to values which are considered reasonable, based on experience with established analytical practices. Some flexibility in acceptance criteria is advantageous for circumstances where it is particularly difficult to control the sources of error in a method and where more generous criteria may be justified. This is why I am of the opinion that it is better not to have generic acceptance criteria in regulatory guidance documents. It is helpful to include them in in-house guidance documents, provided there is flexibility to allow different criteria where scientifically justified."

On the MTS course, Validation and Transfer of Methods for Pharmaceutical Analysis, you can learn more about acceptance criteria and their justification. The topic of analytical error is covered extensively in our new MTS course, Applying Data Integrity in the Laboratory; Minimising Analytical Error, part of our Laboratory Data Integrity programme. Visit the MTS website for more information.


Thursday, 17 September 2015

What is Meant by 'Manual Integration'?

As a chromatographer of many years, my understanding of the term 'manual integration' is that you view an individual chromatogram in the chromatography software and use the mouse to adjust the peak start and/or peak end points on screen, producing the integration you feel is most suitable for quantifying that particular peak. It is a contentious procedure, since it is very susceptible to falsification: small adjustments can make the difference between a passing and a failing result.

If this sounds familiar then you should be aware that this is not always what is meant when people, particularly regulatory inspectors, use the term. It is now commonly used to describe the process of reintegrating, i.e. changing the integration parameters from the original settings and reprocessing the chromatograms. The term may be applied even if the same integration method, and thus the same parameters, has been applied automatically by the software, and sample and standard chromatograms are all integrated in the same way.

A distinction between the terms 'manual intervention' and 'manual integration' has been suggested by R McDowall in his excellent article, 'Questions of Quality: Where Can I Draw The Line?' (a previous post under MTS Recommends...). I think it would be advisable to include this distinction both in a policy on manual integration and also in the training programme for chromatographers.

MTS are offering a new course, 'How to Improve Data Integrity in the Pharma Lab', as part of our Laboratory Data Integrity programme in which the topic of chromatographic integration, including manual integration, will be explored fully. Visit the MTS website for more information.


Thursday, 10 September 2015

MTS Recommends... How to Estimate Error in Calibrated Instrument Methods - And Why We Have Stopped Doing It!

By Tony Taylor in The LCGC Blog, Aug 18, 2015

"So when was the last time you reported your results with an estimate of the error associated with the data?"