The second stage of the validation lifecycle is called process qualification; in the cleaning context, this stage is customarily referred to as cleaning validation. Usually, three consecutive successful runs are performed to qualify the process using well-characterized, well-documented, and consistent cleaning procedures.
During these studies, the practitioner cleans the equipment, collects the appropriate samples, and evaluates the data using pre-defined statistical tools. For years, it was not commonplace to use statistics to evaluate the cleaning process, and it might be a new concept for many readers. However, I strongly encourage the use of such methods; they will provide meaningful insights into the sources of cleaning process variability. Remember, a result that you did not expect, or that is hard to explain, still tells you something and thus becomes part of the story. And I believe that each process tells a story.
Inter- and Intra-run Variability
So, what statistical methods should we employ during Stage 2? We should first take into consideration what types of variability exist in a process. The types of variability are:
Variability within each individual cleaning run, which is known as intra-run variability
Variability between cleaning runs, which is known as inter-run variability
The examination of within-run (intra-run) variability often reveals parts of the equipment train, or individual pieces of equipment, that are harder to clean or sample, possibly causing variable or aberrant results. However, if the cleaning process is consistent, the results of the validation studies should illustrate this consistency.
The first step in the review of the data is to check its normality. Don’t be discouraged if your data set is non-normal. This is a very typical outcome of a cleaning validation study, since the point of a cleaning process is to completely remove manufacturing and cleaning process residues. Therefore, many samples and tests yield results of zero or close to zero, meaning they are below the detection or quantitation limit, which does not truly mean zero. You should determine, in advance, a procedure for the treatment of such censored data. Once you have established whether your data are normal, you should calculate confidence intervals around the sample result data sets.
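As an illustration, the pre-defined treatment of censored (below-quantitation-limit) results might be a simple substitution rule. The sketch below uses hypothetical swab data and an assumed LOQ of 0.5 µg/swab, replacing each censored result with half the LOQ before summary statistics are computed; the actual rule must be fixed in your protocol before the study begins.

```python
from statistics import mean, median

# Hypothetical swab results in µg/swab; "<LOQ" marks censored values
# below the assumed limit of quantitation (LOQ = 0.5 µg/swab).
LOQ = 0.5
raw = [0.8, "<LOQ", 1.2, "<LOQ", 0.9, 0.7, "<LOQ", 1.1]

def impute_censored(results, loq, factor=0.5):
    """Replace below-LOQ results with factor * LOQ (one common,
    pre-defined substitution rule; LOQ/2 is an assumption here)."""
    return [factor * loq if r == "<LOQ" else float(r) for r in results]

data = impute_censored(raw, LOQ)
print(round(mean(data), 3))    # → 0.681 (mean after substitution)
print(round(median(data), 3))  # → 0.75 (median is robust to the censored tail)
```

Because the substitution rule directly shifts the mean and variance, it must be justified and documented before the qualification runs, not chosen after seeing the data.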
Confidence Intervals for Process Capability Values
A confidence interval gives an estimated range of values that is likely to include an unknown population parameter, the range being calculated from a given set of sample data. If independent samples are taken repeatedly from the same population (a consistent cleaning process should produce the same results), and a confidence interval is calculated for each sample, then a certain percentage (the confidence level) of the intervals will include the unknown population parameter. Confidence intervals are typically calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter. We can then examine tolerance intervals and perform early process capability analyses using, depending on the normality of the data sets, normal or non-normal capability analyses. When data is found to be non-normal, find the best-fitting distribution model. [i]
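For normally distributed data, a two-sided 95% confidence interval for the mean can be computed from the sample mean, the sample standard deviation, and a t critical value. A minimal sketch with hypothetical rinse-sample results follows; the t value for 9 degrees of freedom is hard-coded since the Python standard library does not provide the t-distribution.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical rinse-sample results (µg/mL) from one qualification run.
data = [0.42, 0.51, 0.38, 0.47, 0.44, 0.49, 0.40, 0.46, 0.43, 0.50]

n = len(data)
x_bar = mean(data)
s = stdev(data)

# Two-sided 95% t critical value for n - 1 = 9 degrees of freedom.
t_crit = 2.262

half_width = t_crit * s / sqrt(n)
ci = (x_bar - half_width, x_bar + half_width)
print(tuple(round(v, 4) for v in ci))  # → (0.4189, 0.4811)
```

If the interval (and, more stringently, the tolerance interval) sits comfortably below the acceptance limit, that is quantitative evidence of a capable, consistent cleaning process.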
The following analyses of inter-run variability can be utilized to examine the cleaning process qualification study:
Individual value plot
Two-sample t-test
Two one-sided t-tests (TOST, equivalence testing)
Non-parametric tests (e.g., Mann-Whitney U test)
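As a sketch of the non-parametric option above, a Mann-Whitney U comparison of two runs can be approximated with only the Python standard library (in practice you would use validated statistical software). The data below are hypothetical, and the implementation assumes no tied values and uses the large-sample normal approximation for the p-value.

```python
from statistics import NormalDist
from math import sqrt

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the large-sample normal
    approximation (no tie correction) -- a sketch, not a validated tool."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    # Rank sum of group a (ranks start at 1; assumes no tied values).
    r1 = sum(pooled.index(v) + 1 for v in a)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u1, p

# Hypothetical swab results (µg/swab) from two qualification runs.
run1 = [0.41, 0.38, 0.45, 0.52, 0.36, 0.47]
run2 = [0.43, 0.39, 0.50, 0.44, 0.37, 0.48]
u, p = mann_whitney_u(run1, run2)
print(u, round(p, 3))  # a p-value above 0.05 suggests the runs are comparable
```

A non-significant result here supports the claim that the runs come from the same population, i.e., that inter-run variability is acceptably low.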
To collect data for these subsequent analyses, one needs a reliable, sustainable digital system that safeguards data integrity, such as ValGenesis. ValGenesis is designed to manage the entire validation lifecycle digitally and remove the inefficiencies that plague paper-based processes. The system reduces the cost of validation by allowing you to optimize and standardize your existing cleaning processes and procedures and eliminate the pain points associated with traditional, paper-based cleaning validation.
Editor's note: This blog post is part two of a three-part series examining the three stages of cleaning validation. Links to the other posts are listed below.
[i] ISO 16269-6, Statistical interpretation of data – Part 6: Determination of statistical tolerance intervals
Industry expert Igor Gorsky has been a pharmaceutical industry professional for over 30 years. He has held multiple positions of increasing responsibility at Alpharma, Wyeth, and Shire, working in production, quality assurance, technical services, and validation, including as Associate Director of Global Pharmaceutical Technology at Shire Pharmaceuticals. He currently holds the position of Senior Consultant at ConcordiaValsource, LLC. His accomplishments over the years include validation of all aspects of pharmaceutical and biotechnology production and quality management, technical support of multi-billion-dollar drug product lines, and the introduction of new products to the market. He has published articles and white papers in pharmaceutical professional magazines and textbooks, and he has been a presenter at Interphex. He is also very active with PDA, participating in several task force groups and authoring PDA Technical Reports 29 (Points to Consider for Cleaning Validation) and 60 (Process Validation). He leads the PDA Water Interest Group. He holds a BS degree in Mechanical and Electrical Engineering Technology from Rochester Institute of Technology.