Cleaning Validation Stage 2: Performance Qualification
The second stage of the validation lifecycle is Process Qualification, which is customarily referred to as Cleaning Validation. Typically, three consecutive successful runs are performed to qualify the process using well-characterized, well-documented, and consistent cleaning procedures. During these studies one cleans the equipment, collects appropriate samples, and evaluates the data using pre-defined statistical tools. We should mention here that for years it was not customary to use statistics to evaluate the cleaning process, so it may be a new concept for many readers. We strongly encourage the use of such methods, as they provide meaningful insight into the sources of cleaning process variability. Remember that a result you did not expect, or one that is hard to explain, still tells you "something" and thus becomes part of the "story"; and we believe that each process tells a "story". So, what statistical methods should we employ during Stage 2, Process Qualification? We should first consider the types of variability that exist in a process. The types of variability are:
- Variability within each individual cleaning run, also called "intra-run" variability, and
- Variability between cleaning runs, which is known as "inter-run" variability.
The examination of within-run variability often reveals those parts of the equipment train, or individual pieces of equipment, that are harder to clean or sample and thus may cause variable or aberrant results. If the cleaning process is consistent, however, the results of the validation studies should illustrate this consistency. The first step in the review of the data is checking its normality. Do not be discouraged if your data set is non-normal; it is a very typical outcome of a cleaning validation study, since the point of a cleaning process is to completely remove manufacturing and cleaning process residues. As a result, many samples and tests yield results of "0" or close to "0" (meaning that they are below the detection or quantitation limit, which does not actually mean "0"). One should define a procedure for the treatment of such censored data before proceeding. Once the normality of the data has been assessed, confidence intervals can be calculated around the sample result data sets.
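As a minimal sketch of the normality check and one common policy for censored results, the Python example below uses SciPy's Shapiro-Wilk test. The swab results, the limit of detection (LOD), and the LOD/2 substitution rule are all hypothetical assumptions for illustration; your own SOP should define the treatment of censored data.

```python
import numpy as np
from scipy import stats

# Hypothetical swab results in ppm; "<LOD" marks censored values.
# Substituting LOD/2 is one common (assumed) policy for censored data;
# follow your own procedure in practice.
lod = 0.10
raw = [0.25, "<LOD", 0.31, 0.18, "<LOD", 0.22, 0.27, "<LOD", 0.30, 0.19]
data = np.array([lod / 2 if x == "<LOD" else x for x in raw], dtype=float)

# Shapiro-Wilk test: the null hypothesis is that the data are normal.
stat, p = stats.shapiro(data)
print(f"Shapiro-Wilk W={stat:.3f}, p={p:.3f}")
if p < 0.05:
    print("Reject normality -- consider a non-normal capability model.")
else:
    print("No evidence against normality at the 5% level.")
```

A low p-value here is exactly the non-normal outcome the text warns about, and it signals that a non-normal distribution model should be fitted before capability analysis.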
A confidence interval gives an estimated range of values that is likely to include an unknown population parameter, the range being calculated from a given set of sample data. If independent samples are taken repeatedly from the same population (a consistent cleaning process should produce similar results), and a confidence interval is calculated for each sample, then a certain percentage (the confidence level) of those intervals will include the unknown population parameter. Confidence intervals are typically calculated so that this percentage is 95%, but 90%, 99%, 99.9%, or other confidence intervals can also be produced. We can then examine tolerance intervals[i] and perform early process capability analyses using, depending on the normality of the data sets, normal or non-normal capability analyses. When the data is found to be non-normal, find the best-fitting distribution model.
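The confidence interval calculation described above can be sketched in a few lines of Python. The rinse-sample values below are hypothetical, and the helper function name is our own; the t-based interval shown assumes approximately normal data, per the normality check discussed earlier.

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(sample, confidence=0.95):
    """Two-sided t-based confidence interval for the population mean."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    half = sem * stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean - half, mean + half

# Hypothetical rinse-sample results (ppm) from one qualification run.
run = [0.21, 0.25, 0.19, 0.23, 0.27, 0.22, 0.24, 0.20]
for conf in (0.90, 0.95, 0.99):
    lo, hi = mean_confidence_interval(run, conf)
    print(f"{conf:.0%} CI for the mean: ({lo:.3f}, {hi:.3f})")
```

Note that raising the confidence level widens the interval: a 99% interval covers the unknown mean more reliably than a 90% interval, at the cost of a less precise estimate.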
The following inter-run variability analyses can be utilized in the examination of the cleaning process qualification study:
- Individual value plots
- Two-sample t-test
- Two one-sided t-tests (TOST)
- Non-parametric tests (for instance, the Mann-Whitney test)
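The run comparisons listed above can be sketched with SciPy as follows. The data for both runs is hypothetical, the equivalence margin for the TOST is an assumed value, and the TOST itself is a simplified pooled-error sketch rather than a validated implementation; your protocol should define the margin and method.

```python
import numpy as np
from scipy import stats

# Hypothetical swab results (ppm) from two consecutive cleaning runs.
run1 = [0.21, 0.25, 0.19, 0.23, 0.27, 0.22, 0.24, 0.20]
run2 = [0.22, 0.26, 0.20, 0.24, 0.25, 0.23, 0.21, 0.25]

# Two-sample t-test (Welch's form, not assuming equal variances):
t_stat, t_p = stats.ttest_ind(run1, run2, equal_var=False)
print(f"Welch t-test: t={t_stat:.3f}, p={t_p:.3f}")

# Non-parametric alternative when normality is doubtful:
u_stat, u_p = stats.mannwhitneyu(run1, run2, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p={u_p:.3f}")

# Two one-sided tests (TOST) for equivalence within an assumed margin:
delta = 0.05  # hypothetical equivalence margin (ppm)
diff = np.mean(run1) - np.mean(run2)
se = np.sqrt(np.var(run1, ddof=1) / len(run1)
             + np.var(run2, ddof=1) / len(run2))
df = len(run1) + len(run2) - 2  # simple df approximation
p_lower = 1 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
p_upper = stats.t.cdf((diff - delta) / se, df)      # H0: diff >= +delta
p_tost = max(p_lower, p_upper)
print(f"TOST p={p_tost:.3f} (p < 0.05 supports equivalence within ±{delta})")
```

The t-test and Mann-Whitney test ask whether the runs differ, while the TOST asks the more useful qualification question of whether the runs are equivalent within a pre-defined margin.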
To collect data for subsequent analyses, one needs a reliable, data-integrity-proof, and sustainable electronic system. ValGenesis is such a system, designed to manage the entire validation lifecycle electronically and remove the inefficiencies that plague paper-based processes. ValGenesis reduces the cost of validation by enabling the optimization and stabilization of existing processes and by adding electronic management of validation processes and procedures, which has been shown to eliminate much of the paper-based documentation and process/procedure approvals.
[i] ISO 16269-6, Statistical interpretation of data – Part 6: Determination of statistical tolerance intervals.
About the Author
Igor Gorsky has been a pharmaceutical industry professional for over 30 years. He has held multiple positions of increasing responsibility at Alpharma, Wyeth, and Shire, working in Production, Quality Assurance, Technical Services, and Validation, including as Associate Director of Global Pharmaceutical Technology at Shire Pharmaceuticals. He currently holds the position of Senior Consultant at Concordia Valsource, LLC. Over the years his accomplishments have included validation of all aspects of pharmaceutical and biotechnology production and quality management, technical support of multi-billion dollar drug product lines, and the introduction of new products to the market. He has published articles and white papers in pharmaceutical professional magazines and textbooks, and has been a presenter at Interphex. He is also very active with PDA, participating in several task force groups authoring PDA Technical Reports 29 (Points to Consider for Cleaning Validation) and 60 (Process Validation), and he leads the PDA Water Interest Group. He holds a BS degree in Mechanical and Electrical Engineering Technology from Rochester Institute of Technology.