The time and cost of operational definitions in RWE Design
Andrew Ecob. Head of Growth.
3 minute read
A great deal of time and resource is currently spent by research teams on the development of operational definitions as part of the overall process of study design and protocol creation in Real World Evidence research. This article aims to outline the challenge to study teams, quantify the time and cost impact, and recommend steps that RWE organisations can take to address it.
The Burden of Operational Definitions.
For each study protocol, and for each study design element within it (inclusion/exclusion criteria, endpoints, outcome measures, exposures, sub-groups or covariates), there needs to be a clearly aligned operational definition.
Additionally, when exploring the application of real world data sources to a particular study protocol, the associated code lists and value sets must be created before any given real world data source can be evaluated.
Both the definition of the research question through operational definitions and the associated value sets need to be available before any assessment can be made of whether a Real World Data source is fit for purpose.
The process of creating and selecting operational definitions, and their associated code lists and value sets, creates a significant burden for RWE research organisations. This burden falls specifically in the following areas:
- Literature Search - Comprehensive review and analysis of the scientific literature to identify which relevant operational definitions have been used in previous research in a given indication, and to evaluate them across the entire published landscape.
- Robust Clinical Review - Alignment of study elements to the conceptual and operational definitions to ensure that they reflect the reality of standard of care and clinical practice.
- Data Science Review - Creation of the relevant code lists and value sets for each study element and its conceptual and operational definitions, ensuring that what is envisioned in the protocol is represented in a computable format so that fit-for-purpose data feasibility assessments can be performed.
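To make "computable format" concrete, the minimal sketch below shows one way an operational definition and its value set could be represented and checked against patient-level codes. The concept, ICD-10 codes and helper class are illustrative assumptions, not a description of any specific organisation's format.

```python
from dataclasses import dataclass, field

@dataclass
class OperationalDefinition:
    """A study element expressed in computable form: a concept plus the
    value set (code list) that identifies it in a real world data source."""
    concept: str                 # e.g. an endpoint or inclusion criterion
    code_system: str             # vocabulary the codes are drawn from
    value_set: frozenset = field(default_factory=frozenset)

    def matches(self, patient_codes):
        """True if any of a patient's recorded codes fall in the value set."""
        return not self.value_set.isdisjoint(patient_codes)

# Hypothetical example: an inclusion criterion for type 2 diabetes,
# operationalised as a small ICD-10 code list.
t2dm = OperationalDefinition(
    concept="Type 2 diabetes mellitus (inclusion criterion)",
    code_system="ICD-10",
    value_set=frozenset({"E11.9", "E11.65", "E11.22"}),
)

print(t2dm.matches({"E11.9", "I10"}))  # patient has a qualifying code -> True
print(t2dm.matches({"I10"}))           # no qualifying code -> False
```

Once study elements exist in this kind of structure, the same objects can drive both protocol documentation and automated feasibility queries against candidate data sources.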
Reinventing the Wheel.
Managing operational definition and value set creation across multiple stakeholders (Medical, HEOR, Biostatistics, Data Science, Data Partners) falls on EACH study team lead and is repeated for EACH study design.
To cushion this impact, teams look for knowledge resources and pre-existing study designs in an effort to re-use definitions and code lists/value sets. Whilst this seems a logical solution, the information is usually dispersed across the sponsor organisation, causing inefficiency. In some cases study teams have to reinvent the wheel, or apply knowledge and information that carries a high risk of being "out of date", simply to be expedient and move the project forward.
"The risk is we go back to old definitions and value sets every time we look at a study. This means we are not moving forward, just looking back all the time."
Group Director, Data Science. Top 5 Pharma.
A survey of our customers at Navidence estimated a timeline of 4-6 weeks to create operational definitions, at a resource cost of around 200 hours or $50,000...Every Study.
Multiply this across all study protocols in your organisation, and the cost impact can run into the millions.
We need Industry Standards, but who will define them?
Pre-curation of operational definitions and computable phenotypes has been attempted in our industry, mainly with a view to establishing an industry standard in a similar way to our approach to standardising data models. This has, however, had limited success, mainly due to the burden of aligning industry, academic and regulatory stakeholders, ensuring that an industry standard is managed across all indications of research, and ensuring that these standards remain current as the scientific landscape shifts.
In the absence of industry standards, the burden falls back on the research organisation to address this issue internally by creating its own knowledge base that study teams can draw on. Again, this requires a significant time and resource commitment, which must be maintained over time and externally validated.
The challenge of creating internal resources, combined with the current lack of external ones, has created an environment where in many cases study teams are forced to reinvent the wheel on every study.
Regulatory Demand to show Fit-for-Purpose.
Compounding the challenge of creating a knowledge base of operational definitions is the fact that the FDA has provided clear guidance on the need to show conceptual and operational definitions, as well as the associated mapped data elements and computable phenotypes, in all study protocols.
With regulators expecting this information as part of protocol submission, there is a need to create outputs that enable review of each protocol, not only at the operational definition level, but also showing how the associated code lists and value sets align with each study element and with the sources of real world data being considered for the study.
This is never more relevant than when responding to safety requirements. Timely responses to regulators on how safety endpoints will be defined and collected are critical; when time is constrained, teams need to be able to turn to a trusted library of definitions, one where the work of aligning operational definitions and data definitions has already been done.
The demand to show how you define your study and perform fit-for-purpose data assessment is now clearly articulated by regulators. Research teams now carry the additional burden of meeting these requirements.
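As a hedged illustration of the fit-for-purpose idea, the sketch below scores how much of each study value set a candidate data source can actually supply. The value sets, codes and coverage metric are hypothetical, chosen only to show the shape of such an assessment, not any regulator's or vendor's method.

```python
def value_set_coverage(value_set, source_codes):
    """Fraction of a value set's codes that a data source can supply."""
    if not value_set:
        return 0.0
    return len(value_set & source_codes) / len(value_set)

# Hypothetical study value sets and the codes observed in a candidate source.
study_value_sets = {
    "T2DM inclusion": {"E11.9", "E11.65", "E11.22"},
    "MI endpoint": {"I21.0", "I21.4"},
}
source_codes = {"E11.9", "E11.65", "I21.4", "I10"}

for element, codes in study_value_sets.items():
    cov = value_set_coverage(codes, source_codes)
    print(f"{element}: {cov:.0%} of codes representable in source")
```

A real assessment would weigh far more than code overlap (completeness, follow-up, provenance), but even a simple per-element coverage report of this kind gives reviewers a traceable link from protocol definitions to the data source under consideration.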
Do the Hard Work to make it Simple.
Given the significant cost and time impact on study teams, and the pressure arising from external stakeholder demands, there is a strong financial and strategic argument for investing in a robust library of operational definitions and associated computable phenotypes.
Pre-curated libraries of CODefs (Computable Operational Definitions) can drive a range of significant benefits for an RWE organisation, not only in the ability to perform RWE design, but also in applying these definitions to RWD assessment and to use cases from External Control Arms to Trial Tokenisation, where specificity of definitions, precision of data selection, and the need to provide robust justifications to internal and external stakeholders are paramount.