Student, Master of Pharmacy (Quality Assurance), RGS-COP, Anjaneri, Nashik
Bioanalytical method validation is the execution of all the steps necessary to show that a specific technique employed for the quantitative measurement of analytes in a given biological matrix (such as blood, plasma, serum, or urine) is reliable and consistent for the intended application. Three categories of bioanalytical validation exist: full validation, partial validation, and cross-validation. Accuracy, precision, linearity, selectivity and specificity, limit of detection, limit of quantitation, standard curve (calibration curve), recovery, stability, robustness, and ruggedness are among the important bioanalytical method characteristics evaluated in this review. The objective of validating a bioanalytical procedure is to demonstrate that it is suitable for its intended purpose. The most widely accepted guideline for method validation is ICH guideline Q2(R1), which is used in both the pharmaceutical and medical sciences. Other guidelines, which are much more detailed, require more extensive validation, and define strict limits for most of the determined parameters, are directed specifically toward bioanalysis.
The techniques used to quantify pharmaceuticals in biological fluids are becoming increasingly crucial for studies of drug bioavailability, bioequivalence (BE), and pharmacokinetics (PK), for the quantitative assessment of drug concentrations and their metabolites, for innovation in the pharmaceutical and biomedical sciences, for research on novel drugs, and for therapeutic drug monitoring, among other things.[1] Because of its great reliability and excellent selectivity, high-performance liquid chromatography (HPLC) is one of the most commonly used analytical techniques, particularly in the pharmaceutical, environmental, forensic, clinical, and food sectors. [2,3] Bioanalytical methods are the variety of techniques by which a biological matrix containing a chemical molecule is collected, processed, stored, and analysed. The procedure used to determine whether a quantitative analytical method is appropriate for use in the biological sciences is called bioanalytical method validation (BMV). Validated bioanalytical methods are used to quantify medicines and their metabolites in biological fluids, and this process is important for assessing and interpreting results from studies on bioavailability, bioequivalence, pharmacokinetics, and toxicokinetics. Validating bioanalytical methods is essential for regulatory submission as well as for guaranteeing the production of high-quality data during the drug discovery and development process. BMV guarantees that the quantification of analytes in biological fluids is accurate, dependable, and appropriate for the intended use.[4]
Method Development:
The process of creating a procedure to identify and quantify a novel or unknown component in a matrix is known as bioanalytical method development. The chemical characteristics of the analyte, its expected concentration, the sample matrix, the cost of the analytical method and instruments, the speed and duration of the analysis, whether quantitative or qualitative measurement is required, the precision needed, and the equipment available are all factors in the choice of analytical method. The process of developing a method involves sampling, sample preparation, separation, detection, evaluation of the findings, and conclusion. [5]
Need for bioanalytical method validation
Finding the concentration of medications, their metabolites, and/or endogenous chemicals in biological matrices such as blood, plasma, serum, cerebrospinal fluid, urine, and saliva is done by a process called bioanalysis. The procedure comprises gathering, preparing, storing, and analysing a drug's biological matrix. Validating bioanalytical methods entails documenting precise laboratory experiments that have been established and confirmed for the quantitative assessment of a medicinal ingredient in a particular biological matrix. All factors that determine the quality of data, including selectivity, sensitivity, calibration model, accuracy, precision, stability, lower limit of quantification (LLOQ), recovery, linearity, limit of detection, repeatability, and ruggedness, are included in the basic parameters of validation.
Quality control laboratories use results from established bioanalytical procedures to verify drug product authenticity, purity, quality, potency, and bioavailability. When a study calls for sample analysis in several laboratories, it is crucial to test the bioanalytical method(s) at each laboratory and provide appropriate validation information for the different laboratories in order to ensure inter-laboratory reliability. Existing bioanalytical procedures are frequently adjusted to meet the needs of a particular analytical process. Therefore, in order to obtain accurate data from which sound conclusions can be drawn, it is crucial to establish well-defined and validated bioanalytical procedures. The assessment and interpretation of bioavailability, bioequivalence, pharmacokinetic, and toxicokinetic study data are significantly influenced by the bioanalytical method validation used for the quantitative determination of pharmaceuticals and their metabolites in biological fluids. These findings often support regulatory filings.[10]
What gives validity to bioanalytical methods?
The goal of validating a bioanalytical procedure is to demonstrate the efficacy and dependability of the approach so that the results can be reported with confidence. Furthermore, if the outcomes are to be utilized to support the registration of a novel medicine or the reformulation of one that has previously obtained approval, then all bioanalytical methods need to be validated, as stated by Shah et al. It is crucial to remember that the initial validation is just the start of a process that needs to be continuously assessed to make sure the method is functioning as planned. In order to prove that the method's performance characteristics are reliable and appropriate for the planned analytical applications, validation involves the use of particular procedures. [11,12]
Types of Bioanalytical Method Validation
There are three categories of validation for bioanalytical methods.
A. Full validation
B. Partial validation
C. Cross validation
Full validation is the process of determining all validation parameters to be used in sample analysis for the bioanalytical method for each analyte. [4-9,11,31,29,50]
Full validation is important in the following situations:
1. In the early phases of developing and utilizing a bioanalytical technique.
2. For a completely new drug entity.
3. A comprehensive validation of the modified assay is essential if metabolites are introduced to an existing test for quantification. [21,29,49]
Partial validations are modifications of previously validated bioanalytical methods that do not necessarily require full revalidation. [9,31,50] Partial validation can range from as little as one intra-assay accuracy and precision determination to a nearly full validation. Typical bioanalytical method changes that fall into this category include, but are not limited to, method transfers between laboratories or analysts, changes in instrumentation, changes in the calibration concentration range, changes in sample processing procedures, changes in anticoagulant, and changes in matrix or species.
Cross-validation is the comparison of validation parameters when two or more bioanalytical methods are employed to generate data within the same study or across different studies. [9,31,51]
1. A scenario in which a revised bioanalytical method acts as the comparator and the original, validated bioanalytical method as the reference is an illustration of cross-validation. Comparisons should be made using both methods.
a. Cross-validation using spiked matrix standards and subject samples should be carried out at each site or laboratory to verify interlaboratory reliability when sample analyses within a single study are undertaken at multiple sites or laboratories.
b. When data from various investigations utilizing various analytical methods (e.g., LC-MS/MS vs. ELISA) are included in a regulatory submission, cross-validation should also be taken into account. [4,9,11,21]
Current Practice in Bioanalytical Method Validation
Highly sensitive and selective approaches are needed in today's drug development environment to measure pharmaceuticals in matrices including blood, plasma, serum, or urine. The most often used technology for the bioanalysis of small molecules is chromatography, and the key terms listed below relate to this kind of analytical technique. The FDA Guidance for Industry, Bioanalytical Method Validation (2001), is widely recognized as a source for contemporary validation practices. A synopsis of this guidance is provided using standard nomenclature. [17,18]
The fundamental parameters involved in bioanalytical validation
1. Accuracy
2. Precision
3. Linearity
4. Selectivity and specificity
5. Limit of detection
6. Limit of quantitation
7. Standard curve (calibration curve)
8. Recovery
9. Stability
10. Robustness
11. Ruggedness
Accuracy
Accuracy is the degree of closeness between the measured concentration and the nominal or known true concentration. [8,21,22-24] It is commonly expressed as the relative error (%RE).[25] Because accuracy reflects the overall correctness of a measurement, it is influenced by factors such as specificity and precision. [11,26] Accuracy is sometimes expressed as trueness. Accuracy is evaluated by replicate analysis of samples containing known concentrations of the analyte (QC samples).[27] Accuracy should be determined using at least five determinations for each concentration, and a minimum of three concentrations within the range of expected study sample values is recommended. The mean value should be within 15% of the nominal value, except at the LLOQ, where the deviation should not exceed 20%. Accuracy is determined by calculating the deviation of the mean from the nominal value.[8] The two most common ways to evaluate an analytical method's accuracy or bias are to compare it with a reference method and to analyse control samples that have been spiked with analyte.
The easiest way to report accuracy is as a percentage bias, which can be found using the following formula:
% Bias = [(measured value − true value) / true value] × 100
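As a rough illustration of the bias calculation and the 15%/20% acceptance criterion described above, the following Python sketch uses hypothetical QC values; neither the numbers nor the helper function come from the cited guidance.

```python
# Minimal sketch: percent bias (accuracy) for replicate QC measurements.
# The measured values and the nominal concentration below are hypothetical.

def percent_bias(measured, nominal):
    """% bias = (mean measured - nominal) / nominal * 100."""
    mean_measured = sum(measured) / len(measured)
    return (mean_measured - nominal) / nominal * 100

qc_measured = [9.6, 10.3, 9.9, 10.1, 9.7]   # five determinations (ng/mL)
nominal = 10.0                               # nominal QC concentration (ng/mL)

bias = percent_bias(qc_measured, nominal)
is_lloq = False                              # set True when evaluating the LLOQ level
limit = 20.0 if is_lloq else 15.0            # acceptance criterion from the guidance
print(f"Bias: {bias:.1f}% (limit ±{limit}%) ->",
      "pass" if abs(bias) <= limit else "fail")
```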
Precision:
The degree of agreement between several measurements obtained from repeated sampling of the same homogeneous sample under the specified conditions is known as the precision of a bioanalytical method; it is a measure of the random error. [9,24] It describes the scatter of the concentrations obtained from repeated samplings of a uniform sample and is commonly expressed as the relative standard deviation (R.S.D.) or coefficient of variation (%CV) of the replicate measurements. [25,28]
% CV = (standard deviation / mean) × 100
For each concentration, a minimum of five determinations should be used to assess precision. It is advised to use at least three concentrations within the range of anticipated concentrations. The precision determined at each concentration level should not exceed a 15% coefficient of variation (CV), except at the LLOQ, where it should not exceed 20% CV. [21,23,29,30] Three levels of precision can be distinguished: repeatability, intermediate precision, and reproducibility.
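A similarly hedged sketch of the %CV calculation and the 15% (20% at the LLOQ) criterion, again using hypothetical replicate values:

```python
# Minimal sketch: precision as %CV for replicate measurements at one QC level.
# Values are hypothetical; statistics.stdev uses the sample standard deviation.
import statistics

def percent_cv(values):
    """%CV = (standard deviation / mean) * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

replicates = [9.6, 10.3, 9.9, 10.1, 9.7]   # >= 5 determinations at one concentration
cv = percent_cv(replicates)
limit = 15.0                               # 20.0 at the LLOQ
print(f"CV: {cv:.1f}% (limit {limit}%) ->", "pass" if cv <= limit else "fail")
```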
Linearity:
Linearity is the capacity of the bioanalytical procedure to produce test results that, within the range of the standard curve, are directly proportional to the analyte concentration in the sample. [9,22,25,31] At a minimum, the concentration range of the calibration curve ought to cover the values anticipated in the study samples. Two calibration ranges can be validated if one calibration curve is unable to adequately describe the entire range. It is important to remember that extending the range beyond what is necessary will have a detrimental impact on the method's accuracy and precision at the extremes. The most common tool for assessing linearity is the correlation coefficient. Although useful for indicating a strong degree of association between response data, the correlation coefficient is of limited value when proving linearity. Thus, an acceptably high correlation coefficient alone does not ensure linearity; other tests, such as a lack-of-fit test, are required. Regardless of the stage of drug development, the method's linear range needs to be ascertained.
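To illustrate why a high correlation coefficient alone is weak evidence of linearity, the following sketch fits a hypothetical calibration data set and also inspects the back-calculated accuracy of each standard; the concentrations, responses, and the unweighted regression are illustrative assumptions, not part of the cited guidance.

```python
# Minimal sketch: a high correlation coefficient alone does not prove linearity,
# so the back-calculated accuracy of each calibration standard is also inspected.
# Concentrations and responses below are hypothetical.
import numpy as np

conc = np.array([1, 2, 5, 10, 20, 50, 100.0])                 # calibration standards (ng/mL)
resp = np.array([0.11, 0.21, 0.52, 1.05, 2.02, 5.10, 9.80])   # instrument response

slope, intercept = np.polyfit(conc, resp, 1)   # unweighted least-squares line
r = np.corrcoef(conc, resp)[0, 1]              # correlation coefficient

back_calc = (resp - intercept) / slope         # back-calculated concentrations
bias_pct = (back_calc - conc) / conc * 100     # per-standard relative error (%)

print(f"r = {r:.4f}, slope = {slope:.4f}")
for c, b in zip(conc, bias_pct):
    print(f"  {c:>6.1f} ng/mL: bias {b:+.1f}%")
```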
Selectivity and Specificity:
Selectivity is the capacity of the bioanalytical method to measure and differentiate the analytes in the presence of components that may be expected to be present, such as metabolites, impurities, degradants, and matrix components.[25] The ability of the bioanalytical procedure to distinguish the analyte from interfering components is demonstrated by examining blank samples of the relevant biological matrix. [28,32,4,41]
At least six sources of biological matrix—plasma, urine, or another matrix—should be used. It is necessary to check each blank sample for interference and to guarantee selectivity at the lower limit of quantification (LLOQ).[29] These interferences could be caused by a component of the biological matrix being studied. They might be related to the traits of the study subject, which could be an animal (age, sex, race, ethnicity, etc.) or a plant (development stage, variety, soil type, etc.), or to environmental exposure (climatic conditions such as UV light, temperature, and relative humidity).[10] To establish the selectivity of the method, the current FDA guidance for bioanalytical method validation calls for the use of at least six distinct sources of matrix. Specificity is the capacity to assess the analyte unequivocally in the presence of components that may be expected to be present. In liquid chromatography with mass spectrometric detection (LC-MS), for instance, the detector may selectively measure an analyte even if it is not fully separated from endogenous compounds. In high-performance liquid chromatography with UV detection (HPLC-UV), a traditional chromatographic method, the method is specific if the assigned peak at a given retention time belongs to only one chemical entity. Despite differences in how these terms are used, there is widespread consensus that specificity and selectivity form the essential foundation of every analytical procedure.
Limit of Detection (LOD):
The limit of detection is the minimum quantity of analyte that can be detected but not necessarily quantified.[22] The LOD calculation is prone to inconsistency because some bioanalytical laboratories measure the lowest quantity of a reference solution that can be detected, while others measure the lowest concentration that can be identified in the biological sample.[12] It is generally agreed that the LOD should represent the lowest detectable concentration or amount of the target analyte.
Limit of Quantitation (LOQ):
The lowest quantity of analyte in a sample that can be quantitatively measured with appropriate precision and accuracy is known as the quantitation limit of specific analytical methods. [22,23,33]
Standard Curve (Calibration Curve):
The relationship, within a given range, between the response (signal, such as area under the curve, peak height, or absorbance) and the concentration (amount) of the analyte in the sample is known as the standard curve for a bioanalytical method.[28] In other words, the calibration (standard) curve describes the association between the instrument response and known analyte concentrations. Ideally, this calibration curve should be represented by a simple monotonic response function, strictly increasing or decreasing, that produces accurate results, as covered in the discussion that follows.[34]
A calibration curve should be prepared in the same biological matrix as the study samples by adding known amounts of the analyte to the matrix. A calibration curve should consist of a blank sample (a matrix sample processed without internal standard), a zero sample (a matrix sample processed with internal standard), and six to eight non-zero samples that span the expected range, including the LLOQ. The lowest standard on the calibration curve should be accepted as the limit of quantification if the analyte response is identifiable, discrete, and reproducible with a precision of 20% and an accuracy of 80–120%, and if it is at least five times the blank response.[8]
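The LLOQ acceptance criteria just described (response at least five times the blank, precision within 20% CV, accuracy 80–120%) can be sketched as follows; all responses and back-calculated concentrations are hypothetical.

```python
# Minimal sketch: checking the LLOQ acceptance criteria described above.
# All responses and concentrations below are hypothetical.
import statistics

blank_response = 0.015                                   # processed blank matrix
lloq_responses = [0.105, 0.112, 0.098, 0.109, 0.101]     # replicate LLOQ standards
lloq_back_calc = [0.96, 1.08, 0.91, 1.05, 0.94]          # back-calculated conc (ng/mL)
lloq_nominal = 1.0                                       # nominal LLOQ (ng/mL)

signal_ok = min(lloq_responses) >= 5 * blank_response    # >= 5x blank response
cv = statistics.stdev(lloq_back_calc) / statistics.mean(lloq_back_calc) * 100
accuracy = statistics.mean(lloq_back_calc) / lloq_nominal * 100

print("LLOQ accepted:", signal_ok and cv <= 20 and 80 <= accuracy <= 120)
```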
Recovery:
Recovery is the percentage of the known amount of an analyte that is carried through the sample extraction and processing steps of the method; it represents the extraction efficiency of the analytical process.[34] Recovery is concerned with an analytical method's extraction efficiency within a given range of variability. While analyte recovery does not have to be 100%, the extent of recovery of both the analyte and the internal standard should be consistent, precise, and reproducible. Recovery experiments should be performed by comparing the analytical results for extracted samples at three concentrations (low, medium, and high) with unextracted standards that represent 100% recovery. [8,9,29,33,35] Recovery can also be reported as total recovery. [25,36]
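A minimal sketch of the recovery comparison described above, assuming hypothetical responses for extracted samples and for unextracted standards representing 100% recovery:

```python
# Minimal sketch: extraction recovery at one QC level, computed as the ratio of
# the mean response of extracted samples to the mean response of unextracted
# standards. All response values below are hypothetical.

def percent_recovery(extracted_responses, unextracted_responses):
    mean_ext = sum(extracted_responses) / len(extracted_responses)
    mean_unext = sum(unextracted_responses) / len(unextracted_responses)
    return mean_ext / mean_unext * 100

extracted = [0.82, 0.79, 0.85]      # responses after the full extraction procedure
unextracted = [0.98, 1.02, 1.00]    # responses representing 100% recovery

print(f"Recovery: {percent_recovery(extracted, unextracted):.1f}%")
```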
Stability:
Stability is the capacity of an analyte to remain chemically and physically unchanged, under specific conditions, for a predetermined period of time in a given matrix. The goal of a stability test is to detect any degradation of the target analytes during the entire period of sample collection, processing, storage, preparation, and analysis. The conditions under which stability is evaluated are largely determined by the kind of analyte, the biological matrix, and the expected duration of storage (prior to analysis). [4,37] The FDA guidance on bioanalytical method validation and the most recent AAPS/FDA white paper both state that analyte stability needs to be evaluated at multiple stages. Stability checks should be performed at every step of the sample preparation and analysis procedure as well as under the long-term storage conditions. They include bench-top stability (i.e., stability under sample preparation conditions), long-term stability at, for example, -20°C or -70°C (i.e., stability under sample storage conditions), and stability of processed samples in the auto-sampler. The stability of the analyte in the biological matrix is also evaluated through multiple freeze-thaw cycles. [12,29,40]
One way to compute percent stability is:
% Stability = (mean concentration of stability samples / mean concentration of freshly prepared comparison samples) × 100
The conditions of a stability experiment should be similar to those that may occur when handling and analysing samples in practice (e.g., bench-top, room temperature, freeze-thaw cycles, and long-term storage).
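A minimal sketch of this percent-stability calculation, assuming hypothetical concentrations for freeze-thaw stability samples and freshly prepared comparison samples:

```python
# Minimal sketch of the percent-stability calculation given above, comparing
# stored stability samples with freshly prepared comparison samples at the
# same nominal concentration. All concentrations below are hypothetical.

def percent_stability(stability_samples, fresh_samples):
    mean_stab = sum(stability_samples) / len(stability_samples)
    mean_fresh = sum(fresh_samples) / len(fresh_samples)
    return mean_stab / mean_fresh * 100

after_three_freeze_thaw = [9.4, 9.6, 9.2]   # ng/mL, after 3 freeze-thaw cycles
freshly_prepared = [9.9, 10.1, 10.0]        # ng/mL, analysed immediately

print(f"Stability: {percent_stability(after_three_freeze_thaw, freshly_prepared):.1f}%")
```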
1. Short-term stability:
The stability of the analyte in the biological matrix at room temperature must be evaluated. Three aliquots of each of the low and high concentrations should be kept at room temperature for 4 to 24 hours and then analysed. [9,23]
2. Long-term stability:
The stability of the analyte in the matrix should be established for a storage period that equals or exceeds the time between the date of first sample collection and the date of last sample analysis. [9,23,39]
3. Freeze and Thaw Stability:
During freeze/thaw stability testing, stability samples should be frozen and thawed in a manner consistent with the sample handling procedures that will be used for sample analysis. Stability assessment must be conducted following a minimum of three freeze-thaw cycles. [9,29]
4. Bench-Top stability:
When designing and carrying out bench-top stability tests, consideration should be given to the normal laboratory conditions under which study samples are handled. [3,29]
5. Stability of stock solutions:
It is important to assess the stability of the drug's stock solutions. When the stock solution differs from the certified reference standard in terms of state (solution vs. solid) or buffer composition (usually the case for macromolecules), stability data should be generated to support the duration of stock solution storage. [3,29]
6. Processed Sample Stability:
It's critical to determine the stability of processed samples, as well as how long it will take to complete the analysis. [3,29]
Repeatability:
Repeatability (within-assay or intra-assay precision) refers to the analytical variability over a brief period of time under the same operating conditions. It shows how closely several measurements taken over a brief period under the same operating conditions agree. Repeatability of a chromatographic technique can be assessed by injecting a single sample solution prepared at 100% of the test concentration at least six times.
Repeatability is the ability of a method to perform consistently on a given day in one laboratory and on one instrument; it is the precision measured under ideal circumstances (a short period, one analyst). Alternatively, repeatability can be assessed from a minimum of nine determinations covering the method's specified range (e.g., triplicate determinations at each of three concentration levels) or from a minimum of six determinations at 100% of the test concentration. [3,29]
Intermediate precision:
Intermediate precision reflects variation within the laboratory, such as different days, analysts, and equipment. The ISO standard uses the term "M-factor-different intermediate precision," where M denotes the number of factors (operator, equipment, or time) that change between successive determinations. The phrase "intermediate precision" describes the method's performance within a laboratory, both qualitatively and quantitatively, when it varies from instrument to instrument and from day to day. An intermediate precision test may involve two different analysts, each preparing a total of six sample preparations in accordance with the analytical method. [22,40]
Reproducibility:
Reproducibility is the precision across laboratories (collaborative or interlaboratory studies) and can be useful for standardizing analytical procedures. Reproducibility is not required for regulatory submission. It refers to the method's capacity to produce comparable concentrations, and to how well the procedure performs, in terms of both quality and quantity, from laboratory to laboratory, day to day, analyst to analyst, and instrument to instrument. [20,25,40]
Quantification Range:
The concentration range, which includes the LLOQ and ULOQ, that can be accurately and precisely measured using a concentration response relationship in a consistent and repeatable manner. [23,25,43] The lower limit of quantification (LLOQ) and upper limit of quantification (ULOQ) are defined as follows in the FDA Bioanalytical Method Validation document:
Lower limit of quantification (LLOQ):
The minimum amount of an analyte in a sample at which a quantitative determination can be made with reasonable accuracy and precision [4,29,25,28,39]
Upper limit of quantification (ULOQ):
The highest concentration of an analyte in a sample that can be determined quantitatively with adequate accuracy and precision is known as the upper limit of quantification, or ULOQ.
There are several approaches that can be used to calculate the lower limit of quantification (LLOQ). The first technique is based on the well-known signal-to-noise (S/N) ratio method. It is anticipated that a 10:1 S/N will be sufficient to distinguish the analyte from the ambient noise. The "Standard Deviation of the Response and the Slope" serves as the foundation for the other strategies. The calculation for LLOQ is as follows:
LLOQ = 10σ/S
where S is the slope of the calibration curve and σ is the standard deviation of the response.
Plotting the RSD against concentrations around the anticipated LLOQ is an additional method for estimating the LLOQ. [4,29,25,28,39]
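The two LLOQ estimation approaches described above (signal-to-noise and 10σ/S) can be sketched as follows; the sigma, slope, signal, and noise values are hypothetical.

```python
# Minimal sketch of the two LLOQ estimates described above: the signal-to-noise
# approach (S/N >= 10) and LLOQ = 10*sigma/S from the calibration-curve slope.
# Noise, signal, sigma, and slope values below are hypothetical.

def lloq_from_sd_and_slope(sigma, slope):
    """LLOQ = 10 * (standard deviation of the response) / (calibration slope)."""
    return 10 * sigma / slope

def signal_to_noise(peak_signal, baseline_noise):
    return peak_signal / baseline_noise

sigma = 0.012   # SD of the response (e.g., of the blank or low standards)
slope = 0.098   # slope of the calibration curve (response per ng/mL)
print(f"LLOQ (10*sigma/S): {lloq_from_sd_and_slope(sigma, slope):.2f} ng/mL")

peak, noise = 0.55, 0.05
print(f"S/N at candidate LLOQ: {signal_to_noise(peak, noise):.1f} (target >= 10)")
```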
Range:
The interval between the upper and lower concentrations (amounts) of analyte in the sample (including these concentrations) for which it has been shown that the analytical technique has an appropriate degree of linearity, accuracy, and precision is known as the range of the procedure. [22,40] The concentration range over which an analyte can be determined with acceptable accuracy and precision is known as the bioanalytical assay's range. [12,43]
Robustness:
As defined by the ICH, the robustness of an analytical procedure refers to its ability to withstand small but deliberate changes in method parameters, and it indicates the method's reliability during routine use. [24,25,36] A robustness test is an experimental setup used to assess a method's robustness, that is, its capacity to be reproduced under deliberately varied conditions without unexpected differences in the obtained result(s).
Ruggedness:
The degree of reproducibility of test results obtained from the analysis of the same samples under a variety of typical test conditions is known as the analytical method's ruggedness. This covers different analysts, laboratories, columns, instruments, and suppliers of chemicals, solvents, and reagents. Ruggedness is investigated by deliberately varying such experimental conditions. [45]
Specific Recommendations for Bioanalytical Method Validation
Application of Validated Method to Routine Drug Analysis:
Biological samples can usually be analysed with a single determination, without the need for duplicate or replicate analysis, provided the assay method has acceptable variability as indicated by the validation data. This is valid for procedures whose precision and accuracy variability consistently fall within accepted tolerance limits. Several recommendations apply when a validated bioanalytical method is used for routine drug analysis. [9]
CONCLUSION:
The generation of pharmacokinetic, toxicokinetic, and metabolic data through bioanalysis is essential to pharmaceutical research and development and is a crucial step in the drug discovery and development process. An effort has been made to understand and elucidate the development and validation of bioanalytical methods from the perspective of the quality assurance department. This article reports on some of the methods and validation procedures used in the various scenarios that arise during the analysis of research samples. These crucial aspects of bioanalytical method development and validation have been discussed in order to raise the standard and increase acceptability in this field of study.
REFERENCE
Sakshi B. Tarle*, Smita. S. Aher, Durga. B. Zade, Dr. Rushikesh. S. Bachhav, An Approach To Bioanalytical Method Development And Validation: A Review, Int. J. of Pharm. Sci., 2024, Vol 2, Issue 7, 382-393. https://doi.org/10.5281/zenodo.12671255