- convergence
- parallelism
- surrogate matrix
Biomarkers have become increasingly important in the drug development process, and accordingly there has been a growing drive to establish rugged bioanalytical methods for their quantification. The most popular means of quantifying biomarkers by LC–MS is the so-called surrogate matrix approach. I am very pleased to write the foreword to this special themed issue, which focuses on the fundamental analytical consideration that lends reliability to methods adopting this approach.
Bringing surrogate matrices methodologically closer to authentic matrices
With the surrogate matrix regime, calibration standards, and often a portion of the quality control samples, are spiked and prepared in an alternative matrix or simple solution that is free of the compound of interest, or in which analyte levels are known to be far too low to interfere. Clearly, the use of a surrogate matrix with no significant level of the compound of interest is pivotal, and it comes with a caveat of great importance, the focal point of this issue. For best performance, ultimately amenable to method validation, the chosen surrogate matrix must adequately mimic the response behavior of the genuine matrix, in terms of its influence on analyte and internal standard peak area responses, within the overall selectivity of the complete LC–MS methodology. This is the essence of parallelism.
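One common way to probe parallelism is to compare the calibration slope obtained in the surrogate matrix with the slope obtained by standard addition in the authentic matrix. The sketch below illustrates the arithmetic only; the peak-area data and the 10% slope-difference acceptance limit are invented for illustration, not taken from any guideline.

```python
# Illustrative parallelism check: compare the calibration slope from standards
# in a surrogate matrix against the slope from standard addition in authentic
# matrix. All numbers, and the 10% acceptance limit, are hypothetical.

def slope(xs, ys):
    """Ordinary least-squares slope of ys versus xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Spiked concentrations (ng/mL) and analyte/IS peak-area ratios (made-up data)
conc = [1, 2, 5, 10, 20]
surrogate_resp = [0.11, 0.21, 0.52, 1.05, 2.08]  # standards in surrogate matrix
authentic_resp = [0.32, 0.42, 0.71, 1.24, 2.25]  # standard addition in plasma
# (authentic responses are offset by the endogenous analyte already present;
# parallelism concerns the slopes, not the intercepts)

s_sur = slope(conc, surrogate_resp)
s_aut = slope(conc, authentic_resp)
diff_pct = abs(s_sur - s_aut) / s_aut * 100
print(f"surrogate slope {s_sur:.4f}, authentic slope {s_aut:.4f}, "
      f"difference {diff_pct:.1f}%")
print("parallelism acceptable" if diff_pct <= 10 else "parallelism suspect")
```

With these invented data the slopes agree to within a few percent, so the surrogate matrix would pass this particular check; a larger divergence would flag the bias problem discussed below.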
The subject matter is peppered with intriguing thinking points, to say nothing of comparisons with the main competing approach, the surrogate analyte approach.
The main disadvantage of this approach is that method development times can become prohibitively long, and the issues that cause this are inherent to the technique. The composition of a biological fluid, particularly certain components or classes of components, is known to have a profound effect on peak areas in bioanalytical LC–MS methods, even after laborious sample extraction and with powerful chromatography. There is therefore a very real prospect that calibration standards prepared in a composition distinct from the authentic sample matrix will show a biased response, with clear implications for the accuracy of the resultant calculated concentration data.
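The size of the resulting bias follows directly from the mismatch in response. As a minimal sketch, assume the authentic matrix suppresses the analyte response to 80% of what the same concentration produces in the surrogate matrix; both figures are invented for illustration.

```python
# Illustrative calculation of the concentration bias introduced when study
# samples are read off a non-parallel surrogate-matrix calibration curve.
# The slope and suppression factor are hypothetical.

surrogate_slope = 0.10  # response units per ng/mL, standards in surrogate matrix
suppression = 0.80      # authentic matrix suppresses the response to 80%

true_conc = 10.0                               # ng/mL in the study sample
sample_response = true_conc * surrogate_slope * suppression
back_calc = sample_response / surrogate_slope  # read off the surrogate curve
bias_pct = (back_calc - true_conc) / true_conc * 100
print(f"back-calculated {back_calc:.1f} ng/mL, bias {bias_pct:.0f}%")
# → back-calculated 8.0 ng/mL, bias -20%
```

In this toy example the bias equals the unaccounted-for suppression, which is exactly why demonstrating parallelism is central to the reliability of the approach.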
In the interest of reducing these method development times and other resource demands, making the methodology only as good as it needs to be is a widely accepted concept, provided the method can still verify significant changes in biomarker concentration. A biomarker assay is not a PK assay, after all. However, would it not be ideal to avoid skimping on method performance if better performance were at our fingertips, perhaps thanks to fresh insight and novel methodologies?
Is it valid to suppose that the simpler and more interferent-free the makeup of the surrogate matrix, the better the method will correlate with samples in authentic matrix? To continue in this questioning vein, should solution calibration standards be considered a good default starting point for method development, or is it wiser to work with as close a mimic of the authentic matrix composition as possible? Furthermore, if additives are required in solution to negate nonspecific adsorptive effects, should they also be applied to samples in authentic matrix? How important is the material of the sample container vessels, especially when nonspecific adsorption is suspected or anticipated, and how does storage temperature influence this? I hope these questions give a decent taste of what is dealt with in this arena.