Why Spreadsheet-Based QbD Fails to Scale Across the Product Lifecycle
Summary
Spreadsheets often become the default for QbD because they’re accessible and flexible. But treating QbD as a set of files freezes knowledge, weakens continuity across handoffs, and makes decisions harder to defend over time.
As products move through tech transfer, validation, and commercial operations, spreadsheet-driven QbD can lead to version sprawl, traceability gaps, and repeated work. A more durable approach treats QbD as governed knowledge that stays usable across the lifecycle.
Key Takeaways
- Spreadsheet-based QbD often loses decision rationale, forcing teams to reconstruct “why” months later.
- Multiple spreadsheet versions create competing sources of truth and slow alignment across functions.
- QbD scales better when treated as governed, reusable lifecycle knowledge with traceable change history.
Who Is This For
- CMC development scientists and technical leads
- Process development engineers
- Tech transfer leaders and MSAT teams
- Quality risk management / QA professionals
- Validation and continued process verification (CPV) leads
- Regulatory affairs and CMC submission authors
- Manufacturing and operations leaders
Spreadsheets have become the default tool for Quality by Design (QbD). They are familiar, easy to access, and flexible enough to support a wide range of development activities. Teams use them to evaluate risk, assess the criticality of process parameters based on the potential impact to critical quality attributes (CQAs), summarize experimental results, and document design space assumptions.
For many organizations, this feels reasonable. Spreadsheets are readily available, and QbD work often begins under tight timelines.
That comfort has shaped a common mental model: QbD is a collection of analyses and assessments, and spreadsheets are a convenient container for them.
The problem is not the spreadsheet itself. The problem is what gets lost when QbD knowledge is treated as a set of files rather than as an evolving system of scientific understanding.
Regulatory guidance makes it clear that QbD is intended to build and retain knowledge across the product lifecycle, not simply generate development documentation (International Council for Harmonisation [ICH], 2009). When spreadsheets become the primary mechanism for managing that knowledge, hidden costs begin to surface later, often when teams can least afford them.
QbD Is Not a Set of Files
At its core, QbD is about understanding how product and process variables interact and using that understanding to manage risk. That product and process knowledge base should strengthen over time as new data becomes available.
Spreadsheets tend to freeze knowledge in place. They capture what was known at a specific moment, based on a specific dataset, owned by a specific group. As products move from development into tech transfer, validation, and commercial operations, that snapshot approach starts to break down.
When QbD knowledge cannot move forward intact, teams lose confidence in the decisions built on that knowledge. The impact is rarely immediate. It shows up later, in subtle but persistent ways: teams struggle to defend earlier decisions, manage competing versions of the truth, maintain traceability, and apply prior learning at scale.
These challenges tend to surface through five hidden costs that spreadsheet-based QbD introduces over time.
Hidden cost #1: Loss of defensible decision rationale
Spreadsheets capture conclusions well, but they struggle to preserve the context behind them.
A risk score may be marked as “high.” A parameter’s specified range may be listed as acceptable. Months later, when a change is proposed or a regulator asks for justification, the team must explain how those decisions were made and which evidence supported them.
When scientific rationale is scattered across spreadsheet tabs, emails, and meeting notes, teams often end up reconstructing decisions after the fact. This becomes more difficult when subject matter experts rotate or projects change hands. The result is rework, delay, and inconsistent decision-making, not because teams lack expertise, but because the knowledge transfer is incomplete.
This gap directly conflicts with regulatory expectations. ICH guidance emphasizes that QbD relies on scientific understanding and quality risk management, not just documented outcomes, and that this understanding should support decision-making throughout the lifecycle (ICH, 2009). Over time, decisions that should remain clear and defensible become harder to explain, justify, and trust.
Hidden cost #2: Version sprawl undermines knowledge integrity
Spreadsheet-based QbD rarely exists as a single, shared artifact. Instead, files multiply as work moves across functions and stages. Development teams maintain one version, tech transfer adapts another, and validation often creates a third. Each reflects a different moment in time and, often, a slightly different interpretation of the data.
As these versions diverge, teams begin operating from competing sources of truth. Parameter ranges, risk rankings, and control assumptions no longer align cleanly across groups. What started as a practical way to manage work locally becomes a source of confusion when decisions must be coordinated across development, manufacturing, and quality.
This version sprawl also strains communication and collaboration. Without a shared, current view of QbD knowledge, teams rely on meetings, emails, and manual comparisons to stay aligned. The result is duplicated effort, inconsistent updates, and delays as groups work to reconcile differences that should never have existed in the first place.
Over time, a growing share of effort is spent sorting out which information is correct rather than using that information to advance process understanding or make strategic decisions. What teams lose is not just efficiency, but confidence in the knowledge itself.
Hidden cost #3: Data integrity and traceability gaps
Regulatory expectations for data integrity continue to evolve, with increasing emphasis on the ability to trace how data was generated, modified, and used to support quality decisions. Guidance from the Medicines and Healthcare products Regulatory Agency (MHRA) outlines core principles that data should be attributable, legible, contemporaneous, original, and accurate, with appropriate controls to ensure trustworthiness over time (MHRA, 2018).
Spreadsheets are not designed to consistently support these expectations. While they can store data, they make it difficult to reliably capture authorship, change history, and context for updates, particularly when files are shared, copied, or modified outside of controlled systems. Manual data entry, copy-and-paste workflows, and formula changes introduce opportunities for error that are not always visible after the fact.
To compensate, organizations often layer procedural controls on top of spreadsheets: additional reviews, verification steps, and sign-offs. These add significant operational burden without fully closing the traceability gaps.
The result is a fragile approach to data integrity. Teams spend effort managing documentation mechanics rather than strengthening confidence in the underlying data. In practice, the burden shifts from improving process understanding to managing the limitations of the format itself.
Hidden cost #4: QbD knowledge does not scale across the lifecycle
Quality by Design is intended to support decisions throughout the product lifecycle, not just during early development. Regulatory guidance frames process understanding as something that should deepen over time, informing scale-up, technology transfer, continued process verification (CPV), and post-approval change management.
FDA guidance on process validation describes validation as a lifecycle activity, spanning process design, process qualification, and continued process verification (U.S. Food and Drug Administration [FDA], 2011).
Spreadsheet-based QbD makes that difficult in practice. Spreadsheets tend to capture point-in-time assessments rather than evolving understanding. When new data becomes available, teams often create new files or revise summaries instead of extending existing knowledge in a structured way. Earlier rationale and assumptions become reference material rather than active inputs.
As a result, QbD knowledge does not scale cleanly as products advance and processes change. During scale-up, site transfer, or post-approval changes, teams frequently recreate risk assessments and justifications that already exist in some form. What should be reusable knowledge turns into repeated work, limiting the value of QbD across the lifecycle.
Hidden cost #5: 'Unofficial systems' still carry system risk
Many organizations treat spreadsheets as documents rather than systems. In practice, spreadsheets often function as systems when they are used to support GMP-relevant decisions such as risk ranking, control strategy definition, and validation justification.
As QbD activities expand, spreadsheets accumulate logic and decision rules. They become embedded in workflows and relied upon to generate outputs that influence quality decisions. At that point, the distinction between a document and a system becomes less meaningful than the role the spreadsheet plays.
Regulators focus on use, not labels. EU GMP Annex 11 makes clear that computerized systems used in GMP activities must be appropriately governed, validated, and controlled in proportion to risk (European Commission, 2011). When spreadsheets perform system-like functions without corresponding controls, organizations face a mismatch between how critical the tool is and how lightly it is managed.
This mismatch tends to surface during inspections, when teams struggle to explain how calculations are controlled, changes are managed, or decisions are traced. The risk is not the presence of spreadsheets, but the absence of governance aligned to how they are actually used.
The Biggest Cost: Slower Organizational Learning
The most significant cost of spreadsheet-driven QbD is not operational. It is strategic.
When knowledge is fragmented, teams struggle to trend data, identify recurring risks, and apply learning across products and sites. Instead of compounding understanding over time, QbD becomes repetitive work, recreated with each new program or transfer.
PIC/S guidance reinforces that data integrity reflects organizational maturity and management control, not just technical compliance (Pharmaceutical Inspection Co-operation Scheme [PIC/S], 2021). The same principle applies to QbD knowledge. How organizations manage and reuse what they learn determines whether experience translates into better decisions.
Organizations that cannot reliably build on prior understanding move more slowly over time, even as experience increases. This is where the hidden costs converge, and where the limits of a spreadsheet-driven approach become most visible.
A Catalyst for Change: Rethinking How QbD Knowledge Is Managed
A more durable approach starts by treating QbD as governed knowledge rather than a collection of files. That shift changes how teams think about execution, regardless of the tools they use.
In practice, this means applying a few core principles that support decision-making across the lifecycle, not just documentation at a single point in time:
- Single source of truth: QbD decisions are easier to apply and defend when scientific rationale, risk assessments, and control strategies are maintained in one shared context, reducing reconciliation effort across teams.
- Traceable change history: As data evolves, teams need visibility into what changed, why it changed, and which evidence supported the update to preserve confidence in decisions over time and across handoffs.
- Lifecycle knowledge reuse: Insights developed during early process design should remain accessible and relevant during scale-up, technology transfer, and post-approval change, allowing learning to compound rather than reset.
- Risk-based governance: Not every QbD activity requires the same level of control, but critical decisions should be managed with appropriate oversight, visibility, and accountability without introducing unnecessary process burden.
- Synchronous collaboration across teams and sites: Effective QbD execution depends on how teams work together. When knowledge is shared in real time and communication remains efficient across functions and sites, alignment improves and decisions happen faster.
Fragmented files make collaboration harder, while governed knowledge makes it easier. Taken together, these principles shift QbD from a documentation task to a capability. They focus attention on how knowledge is created, maintained, and applied over time, rather than on the format in which it is stored.
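One way to picture “traceable change history” in practice is an append-only log: every update to a QbD element records who changed it, when, why, and on what evidence, and earlier states are never overwritten. The following is a minimal sketch under that assumption; all names are illustrative, not drawn from any particular product:

```python
from datetime import datetime, timezone

class GovernedRange:
    """Hypothetical sketch: a parameter range whose history is append-only.

    Each update is logged with author, timestamp, rationale, and evidence,
    so the current value can always be traced back through its changes.
    """
    def __init__(self, parameter, low, high, author, rationale):
        self.parameter = parameter
        self.history = []
        self._record(low, high, author, rationale, evidence=None)

    def _record(self, low, high, author, rationale, evidence):
        self.history.append({
            "low": low, "high": high,
            "author": author,                  # attributable
            "at": datetime.now(timezone.utc),  # contemporaneous
            "rationale": rationale,            # why it changed
            "evidence": evidence,              # what supported it
        })

    def update(self, low, high, author, rationale, evidence):
        # Append a new state; prior entries are never modified or deleted.
        self._record(low, high, author, rationale, evidence)

    @property
    def current(self):
        return self.history[-1]

# Example: a range widened after scale-up data becomes available.
rng = GovernedRange("drying temperature (C)", 55, 60,
                    author="A. Chen",
                    rationale="Initial design-space study DS-03")
rng.update(55, 65, author="A. Chen",
           rationale="Scale-up runs confirmed no CQA impact up to 65 C",
           evidence=["TT-2024-07 report"])
```

A spreadsheet cell holds only the latest value; a governed record holds the value and the path that led to it.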
The Costs of Staying With the Current Approach
Pharmaceutical development continues to move faster, with more complex modalities, tighter timelines, and fewer resources available to absorb rework. At the same time, regulatory expectations around lifecycle management, data integrity, and sustained process understanding continue to sharpen.
In that environment, the cost of “good enough” approaches compounds quietly. What feels manageable in early development can become difficult to sustain during scale-up, technology transfer, and post-approval change, when decisions must remain clear, traceable, and defensible.
Spreadsheets will always have a place in scientific work. The challenge is relying on them as the foundation for QbD knowledge. When QbD is treated as a collection of files, organizations spend more time recreating understanding than building on it.
The more durable path is to treat QbD as governed knowledge that can support decisions across the lifecycle. That shift does not start with a tool. It starts with recognizing what QbD is meant to deliver: confidence in process understanding, continuity in decision-making, and learning that carries forward.
Are you still doing QbD the hard way? Watch the video to learn more about the value of digital QbD.
References
European Commission. (2011). EudraLex, Volume 4: EU guidelines for good manufacturing practice for medicinal products for human and veterinary use, Annex 11: Computerised systems. https://health.ec.europa.eu/medicinal-products/eudralex/eudralex-volume-4_en (accessed 12 January 2026).
International Council for Harmonisation. (2009). ICH Q8(R2): Pharmaceutical development. https://www.ema.europa.eu/en/ich-q8-r2-pharmaceutical-development-scientific-guideline (accessed 23 January 2026).
Medicines and Healthcare products Regulatory Agency. (2018). GxP data integrity guidance and definitions. https://www.gov.uk/government/news/mhra-gxp-data-integrity-definitions-and-guidance-for-industry (accessed 30 January 2026).
Pharmaceutical Inspection Co-operation Scheme. (2021). PIC/S guidance on good practices for data management and integrity in regulated GMP/GDP environments. https://picscheme.org/en/publications (accessed 11 February 2026).
U.S. Food and Drug Administration. (2011). Process validation: General principles and practices. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/process-validation-general-principles-and-practices (accessed 18 February 2026).
The opinions, information and conclusions contained within this blog should not be construed as conclusive fact, ValGenesis offering advice, nor as an indication of future results.