Digital Data Flow Solution Framework and Conceptual Design

Version 1.0, Dated 1 November 2019

TransCelerate DDF Project Team

Page 2: Digital Data Flow Solution Framework and Conceptual Designtransceleratebiopharmainc.com/wp-content/uploads/2019/11/... · 2019-11-01 · DIGITAL DATA FLOW SOLUTION FRAMEWORK AND CONCEPTUAL

DIGITAL DATA FLOW SOLUTION FRAMEWORK AND CONCEPTUAL DESIGN

Page 2 of 46

Table of Contents

Overview of TransCelerate
1 Introduction
   Purpose
   Background
   Problem Statement
   Objectives and Benefits
2 Solution Framework
   Scope
   High-Level System Roadmap
   Design Principles for the Proposed DDF Solution
      Support for Workflows
      Built-in Automation
      Leverage Best Practices and Industry Standards
      Remain System Agnostic
      Expose Content and Transactions via APIs
      Remain Modular
      Provide Change Notifications
      Provide Transaction Support
      Leverage Existing Authentication and Authorization
      Built-in Interrogative User Interface
3 Conceptual Design
   User Narratives
   Impact to Current Practice
   Key Features
   Conceptual System Architecture
   Common Data Model (CDM)
   Adoption and Evolution
4 Interfaces to Collaborative and External Initiatives
   CDISC 360 & TransCelerate DDF Work Stream
   ICH M11 (Clinical Electronic Structured Harmonized Protocol/CeSHarP)
   TransCelerate Initiatives
      Common Protocol Template
      eSource
      Patient Technology
      Placebo Standard of Care
      Investigator Registry
      Shared Investigator Platform
Appendix A: User Process Supported by Study Builder and Data Flow Schematics
Appendix B: Current State Protocol Creation & Study Start-up Systems Configuration
Appendix C: Key Features for Study Builder
Appendix D: Maturity Model


Overview of TransCelerate

TransCelerate is a nonprofit organization dedicated to improving the health of people around the

world by accelerating and simplifying the research and development (R&D) of innovative new

therapies. TransCelerate’s mission is to collaborate across the global biopharmaceutical R&D

community to identify, design, and facilitate implementation of solutions designed to drive the

efficient, effective, and high-quality delivery of new medicines.

To achieve this mission, TransCelerate has many Workstreams, each working on initiatives to

improve some aspect(s) of clinical development. The workstreams are composed of individuals

from TransCelerate Member Companies (pharmaceutical and biotechnology companies that

chose to join TransCelerate). The TransCelerate Leadership Team oversees all the workstreams.

Many workstreams are developing solutions that involve technology systems. Currently

TransCelerate is launching an initiative around the concept of Digital Data Flow (DDF) to optimize

study start-up (SSU). The intent is to share with vendors, potential providers, and others what

TransCelerate believes would be the optimal platform. TransCelerate welcomes outreach and

collaboration with vendors of all shapes and sizes and any other interested parties.

1 Introduction

The aim of TransCelerate’s DDF initiative is to optimize study start-up processes and automate

system configuration and readiness. The current state typically involves disconnected study

design services and assets, and transcription or re-entry of the same information into many

systems across Sponsors, Contract Research Organizations, and systems vendors. This

inefficiency results in systems configuration falling onto the critical path for study start-up and

adds risks for transcription errors and unnecessary delays.

The solution we advocate would enable interoperability across multiple systems in a clinical

study, improve efficiency and data quality, and reduce cycle times. That solution should capture

protocol elements and present them in standardized formats to enable automated configuration of

downstream systems and efficient consumption of protocol information across the study

ecosystem.

Purpose

This document describes a potential Solution Framework and Conceptual Design that would

optimize processes and automate asset development from study concept finalization through

study start-up. This package articulates the overall vision for the solution and will help technology

providers to create an innovative, central protocol platform that will be accessible, open, flexible

and vendor-agnostic toward downstream clinical applications.

The platform of solutions should automate the workflow for configuration of all systems that

support both data collection and study operations. To accomplish this, a Study Builder to capture

structured elements of a study protocol including Study Objectives, Endpoints and Schedule of

Activities (SoA) will be an essential foundation and core technology. The Study Builder should be

linked to a Study Design Repository that will store therapeutic area (TA)-specific standard design

templates and other elements in a Common Data Model. The platform also should enable the use

of simulation and machine learning to support protocol designers by leveraging operational data,

real world data, historical clinical study data, or other knowledge bases from internal or external

data sources.

Early releases of the described platform should prioritize study-specific configuration of the

Electronic Data Capture (EDC) system. Other systems in scope for consideration, but probably of


lower priority, include other clinical data collection (for non-Case Report Form (CRF) or laboratory

data), Clinical Trial Management System (CTMS), Interactive Response Technology (IRT),

electronic Trial Master File (eTMF) and project- or resource management systems (Portfolio

Project Management).

Background

TransCelerate’s Common Protocol Template (CPT) has been widely adopted across the industry,

providing significant value by improving consistency and ease of authoring by sponsors, and

review by sites. The CPT includes a common structure, proposed model text, and regulator-

accepted endpoint definitions that may be used across protocols with little to no editing at the

discretion of the user. The first CPT was released in 2015 and is a foundational element in the

longer-term movement towards an electronic, machine-readable protocol and overall end-to-end

efficiencies.

The technology-enabled edition of the CPT provides automated links to a Common Statistical

Analysis Plan (SAP) and Common Clinical Study Report (CSR) template and automates the

population of many elements across these assets. Though the SAP and CSR are not

components of study start-up, many elements of these assets flow directly from the Protocol and

other SSU assets. The DDF project will build on this approach and link those and other common

data points, further eliminating manual transcription from one document or system to the next.

The common structure provided by CPT reduces health authority and ethics committee review

time. CPT improves protocol quality by ensuring correlation of objectives, endpoints, and

analyses. Use of a consistent template also reduces the frequency of avoidable Protocol

Amendments due to administrative errors and facilitates disclosure readiness and consistency.

Efficient adoption and maintenance of a standard protocol template across the industry requires

sponsors to adopt consistent data standards. Implementation of data standards, including CDISC,

is variable across the industry, with many sponsors using company-specific versions of data

standards. This inconsistency means that achieving automation at scale will be arduous and

perhaps cost-prohibitive for both sponsors and systems providers.

Finally, the biopharma industry lacks a common data model to enable consistent automation and

to permit disconnected systems to communicate with one another. Customization also requires

vendors to perform significant sponsor-specific configuration or programming and impedes their

ability to proactively automate general workflows at scale.

Problem Statement

SSU system configuration workflow and asset creation are currently not automated, which makes them

inefficient and increases the risk of error. Current workflows also include a number of redundant,

manual activities. Sponsors are not able to utilize resources efficiently due to the siloed,

document-based environment.


Objectives and Benefits

Objectives:

Facilitate the development of a platform composed of major building blocks that technology

vendors will use to automate the configuration of upstream and downstream SSU assets and

create efficiency, reduce manual and redundant processes, and shorten cycle time.

Benefits:

By automating workflow, the platform will help move sponsors away from managing discrete

documents with redundant content, and enable more efficient content reuse throughout the study

lifecycle, both for automation of system configuration and for information flow to support the

generation of key documents (including the SAP and CSR).

• Simultaneous configuration of assets: Enable the development of start-up assets simultaneously through system interfaces that expedite the completion of each asset in real time and can be extended to include additional systems over time.

• Eliminate manual data entry: Eliminate the need for manual transcription and reconciliation of information across system interfaces, ensuring consistency across study start-up assets. Reduce the time to identify and resolve issues.

• Reduced compliance risk: Improve compliance and reduce risk associated with inconsistent or incomplete documentation. Applications access the same data standards or metadata, resulting in consistency.

• Enforced use of standards: Enforce re-use of common elements (e.g., study design elements and metadata associated with Inclusion/Exclusion Criteria, Study Objectives, and assessments) across studies, development programs, or compounds through the creation of knowledge and standards databases.

• Efficient processing of amendments: Reduce the impact and cost associated with study amendments by automatically updating relevant downstream systems while maintaining adequate version control.


2 Solution Framework

Scope

The TransCelerate DDF project aims to facilitate the development of a core technology platform

(Study Builder, Study Design Repository, and connections from the builder to the design and

other metadata repositories). It is our intention that this platform will form the center of a broader

ecosystem of applications which will include upstream data sources to inform study design and

modeling, and downstream systems to manage studies, collect data and report study results. As

noted above, any initial, principal release of a DDF platform should support automated

configuration of the study EDC. Subsequent releases should expand this scope to include a small

number of additional systems (IRT, CTMS, eTMF and Project Portfolio Management (PPM)) but

the platform architecture must support extension to a broader spectrum of data sources and client

systems, and a broad range of use cases (See Figure 1: Use Cases). Yellow stars on the

diagram indicate focus areas of ongoing or completed TransCelerate workstreams.

The DDF project also creates an incentive for sponsors to rigorously adopt existing data

standards, consistent terminology and design templates. It will encourage development and

maintenance of a Common Data Model that can be leveraged across sponsor companies.

Figure 1: Use Cases

The technology platform would be accessible to any sponsor company, including TransCelerate member

companies as well as non-members. Platform outputs would also be accessible to any clinical system

vendor.


High-Level System Roadmap

As DDF stands to deliver significant efficiencies and benefits to the industry and consumers, timely

engagement by the vendor community is paramount. Initial releases of a solution are not

expected to cover all study execution, analysis and reporting. Study start-up, which currently

consumes multiple months, is a desirable initial target. Hence, early releases should be

sufficiently comprehensive to facilitate automated configuration through application programming interfaces (APIs) consumed by downstream systems, supporting configuration of EDC, IRT, CTMS, eTMF and project management systems.

Eventually, the core platform and the associated standards will facilitate the automated

configuration of systems involved in external data transfers, data analysis, and data reporting. In

addition, the platform will also use simulation and machine learning to support protocol designers

by leveraging operational data, real-world data, historical clinical study data, or other

knowledge bases from internal or external data sources. As a result, the platform (and the overall

ecosystem) has potential to automate the end-to-end clinical value chain from program design to

dossier filing.

Design Principles for the Proposed DDF Solution

The Design Principles listed are intended to inform the way the platform should be implemented.

These principles will provide consistency across the architecture and ensure alignment with

business and technology objectives. Ideally the platform will incorporate each of the following

principles.

Support for Workflows

The technology solution should consist of a workflow component that interfaces with various

systems throughout the clinical study lifecycle, encapsulating the defined business process.

Workflows are a key aspect of accelerating and standardizing a business process. The workflow

should provide the flexibility for event-driven and reactive components. It should execute not only

on control flow dependencies but also on data-driven dependencies.
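As a purely illustrative aside (not part of the TransCelerate design), the minimal Python sketch below shows one way a workflow step could be gated on both control-flow dependencies (predecessor steps) and data-driven dependencies (required inputs becoming available); the step names and data shapes are invented.

```python
# Hypothetical sketch: steps fire only when their predecessor steps are done
# AND their required data inputs have arrived (event-driven, data-driven workflow).
class Step:
    def __init__(self, name, depends_on=None, requires_data=None, action=None):
        self.name = name
        self.depends_on = set(depends_on or [])        # control-flow dependencies
        self.requires_data = set(requires_data or [])  # data-driven dependencies
        self.action = action or (lambda data: None)

class Workflow:
    def __init__(self, steps):
        self.steps = {step.name: step for step in steps}
        self.done = set()
        self.data = {}                                 # shared study context

    def publish(self, key, value):
        """Record a data item (e.g., an approved SoA) and re-evaluate waiting steps."""
        self.data[key] = value
        self.run_ready()

    def run_ready(self):
        progressed = True
        while progressed:
            progressed = False
            for step in self.steps.values():
                ready = (step.name not in self.done
                         and step.depends_on <= self.done
                         and step.requires_data <= set(self.data))
                if ready:
                    step.action(self.data)
                    self.done.add(step.name)
                    progressed = True

# Usage: EDC configuration waits for both design approval and the SoA data item.
wf = Workflow([
    Step("approve_design", action=lambda data: print("design approved")),
    Step("configure_edc", depends_on={"approve_design"}, requires_data={"soa"},
         action=lambda data: print("EDC configured from", data["soa"])),
])
wf.run_ready()                                          # approve_design runs immediately
wf.publish("soa", {"visits": ["Baseline", "Week 4"]})   # data arrival triggers configure_edc
```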

Built-in Automation

Intelligent automation is critical in gaining efficiencies and eliminating redundancies in process.

The technology solution should consider the various options available and be able to implement

the best, fit-for-purpose option. The focus should be on automating processes and workflows and

growing into intelligent automation that will utilize artificial intelligence to learn and make

decisions as it goes.

Leverage Best Practices and Industry Standards

The technology solution should rely on existing standards wherever possible, limiting the need for

Standard Setting Organizations (SSOs) to develop new ones. Where gaps exist, the solution will

extend using best practices consistent with existing standards and conventions.

Remain System Agnostic

The solution must not be dependent on any particular vendor’s product and therefore will be

system agnostic. This will be enabled by leveraging a Common Data Model, a library of common

study procedures, and a catalog of services. Customers would have the freedom to choose

which vendor products to use, and any vendor that wants to contribute solution components

would be able to participate. The Common Data Model and the catalog of services will enable

greater automation between different types of solutions and from different solution vendors.


Expose Content and Transactions via APIs

The technology solution should expose all its content and transactions via well‑defined,

documented, and open APIs. Exposing content and functionality via platform-neutral APIs would

enable interoperability, automation, and cross-solution orchestration. Accessing functionality via

APIs introduces a layer of abstraction which insulates the calling systems from changes both in

underlying system structure and database schema. Solutions must not assume or require a

particular development environment or computing platform nor any specific interface technology

in order to interoperate with other systems.
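Purely to make this principle concrete, here is a minimal sketch (using Flask, with hypothetical resource paths and payload fields that are not defined by this document) of how a Study Builder might expose a study design, a write transaction, and its change history as open REST resources.

```python
# Hypothetical sketch: study-design content and transactions exposed via open, documented APIs.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-ins for the Study Design Repository and its audit trail.
STUDIES = {"STUDY-001": {"protocolId": "STUDY-001", "version": 3, "objectives": []}}
CHANGES = {"STUDY-001": [{"version": 3, "changedBy": "designer1", "field": "objectives"}]}

@app.route("/api/v1/studies/<study_id>", methods=["GET"])
def get_study(study_id):
    """Return the machine-readable study design (CDM-shaped JSON)."""
    study = STUDIES.get(study_id)
    return (jsonify(study), 200) if study else (jsonify({"error": "not found"}), 404)

@app.route("/api/v1/studies/<study_id>/objectives", methods=["POST"])
def add_objective(study_id):
    """A write transaction, exposed through the same open API."""
    objective = request.get_json()
    STUDIES[study_id]["objectives"].append(objective)
    STUDIES[study_id]["version"] += 1
    CHANGES[study_id].append({"version": STUDIES[study_id]["version"], "field": "objectives"})
    return jsonify(STUDIES[study_id]), 201

@app.route("/api/v1/studies/<study_id>/changes", methods=["GET"])
def get_changes(study_id):
    """Change history, so consumers never have to diff documents themselves."""
    return jsonify(CHANGES.get(study_id, []))

if __name__ == "__main__":
    app.run(port=8080)
```

Because the calling systems see only these platform-neutral resources, the underlying storage schema can change without breaking them, which is the abstraction benefit described above.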

Remain Modular

A technology solution should offer the option to exclude modules of overlapping functionality and

leverage a business enterprise’s existing data/technology platforms as much as possible. The

solution’s modular design should provide the flexibility to leverage a sponsor’s existing technical

platform where possible. The architecture would thereby avoid conflicts with overlapping or

redundant component capabilities.

Provide Change Notifications

The technology solution should provide and store information about changes to data controlled by

the solution at the time of the change. It is easy to identify changes at the point of occurrence, but

it is very costly to compute this information at the point of consumption. Availability of change

information would allow consuming systems to efficiently process updated data and react in near

real time. Integrating solutions based on change notifications (publish/subscribe pattern) supports

loose coupling of solutions and removes direct interdependencies.
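The publish/subscribe pattern described here could look roughly like the following sketch; the event shape and topic name are illustrative assumptions, and a real deployment would typically use a message broker rather than in-process callbacks.

```python
# Hypothetical sketch: change notifications captured at the point of change
# and pushed to subscribed downstream systems (publish/subscribe pattern).
from datetime import datetime, timezone

class ChangeNotifier:
    def __init__(self):
        self.subscribers = {}                     # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers.get(topic, []):
            callback(event)

notifier = ChangeNotifier()

# A downstream EDC integration reacts in near real time instead of polling and diffing.
notifier.subscribe("study.design.changed",
                   lambda e: print(f"EDC: rebuilding forms for {e['studyId']} v{e['version']}"))

def update_schedule_of_activities(study_id, new_soa, version):
    """The Study Builder records what changed at the moment the change happens."""
    event = {
        "studyId": study_id,
        "version": version,
        "changedElement": "ScheduleOfActivities",
        "changedAt": datetime.now(timezone.utc).isoformat(),
        "newValue": new_soa,
    }
    notifier.publish("study.design.changed", event)

update_schedule_of_activities("STUDY-001", {"visits": ["Baseline", "Week 4"]}, version=4)
```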

Provide Transaction Support

The technology solution should provide mechanisms to securely commit information to storage

and should not assume that transactions are processed independently from transactions in other

solutions. In a well-integrated system landscape, individual solutions potentially participate in

business process automation scenarios where consistent changes need to be made across

individual solutions. Each solution should support the two-phase commit protocol.
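The two-phase commit requirement could be sketched as follows; this is a toy, in-process coordinator for illustration only, and production systems would rely on an established transaction manager or an equivalent compensation approach.

```python
# Toy sketch of a two-phase commit across several study start-up systems.
class Participant:
    def __init__(self, name, can_apply=True):
        self.name, self.can_apply, self.staged = name, can_apply, None

    def prepare(self, change):
        """Phase 1: stage the change and vote yes/no."""
        self.staged = change
        return self.can_apply

    def commit(self):
        """Phase 2a: make the staged change durable."""
        print(f"{self.name}: committed {self.staged}")

    def rollback(self):
        """Phase 2b: discard the staged change."""
        print(f"{self.name}: rolled back")
        self.staged = None

def two_phase_commit(participants, change):
    votes = [p.prepare(change) for p in participants]
    if all(votes):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

# A protocol amendment must land in EDC, IRT and CTMS consistently, or not at all.
systems = [Participant("EDC"), Participant("IRT"), Participant("CTMS", can_apply=False)]
two_phase_commit(systems, {"amendment": 2, "addVisit": "Week 8"})
```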

Leverage Existing Authentication and Authorization

The technology solution should be open to authenticate users with different identity providers

(federated identity) and support a flexible authorization model. Customers and sponsors would

not want to unnecessarily replicate user identity information across numerous systems. Utilizing

existing role definitions and authorization policies would ensure consistent information security

across disparate solutions. The proposed solutions should be able to leverage an organization’s

existing authentication and authorization services (single sign-on [SSO], OAuth, JSON Web

Tokens, Kerberos, etc.) whenever possible to avoid segmented, siloed security implementations.
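For example, federated authentication could be reused roughly as in the sketch below, which assumes the PyJWT library and an OAuth 2.0 bearer token issued by an existing identity provider; the claim names, roles and audience value are illustrative assumptions, not part of this framework.

```python
# Hypothetical sketch: accept tokens from an existing identity provider (SSO/OAuth)
# instead of maintaining a separate user store, then map existing roles to actions.
import jwt  # PyJWT

ROLE_PERMISSIONS = {                      # reuse the organization's existing role definitions
    "study_designer": {"read_design", "edit_design"},
    "data_manager": {"read_design"},
}

def authorize(bearer_token, idp_public_key, required_permission):
    """Validate the externally issued token, then check that an existing role grants the action."""
    claims = jwt.decode(
        bearer_token,
        idp_public_key,
        algorithms=["RS256"],
        audience="study-builder",         # illustrative audience value
    )
    roles = claims.get("roles", [])
    allowed = any(required_permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)
    if not allowed:
        raise PermissionError(f"user {claims.get('sub')} may not {required_permission}")
    return claims

# Usage (assumes a token and the identity provider's public key are available):
# claims = authorize(token, idp_public_key, "edit_design")
```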

Built-in Interrogative User Interface

The User Interface should guide users through a stepwise process, specifying incremental detail

about the study, its objectives, specific assessments and data standards as the user progresses

through study creation, as demonstrated on the left side of the diagram below. This could be based

on an interrogative model using menus of options or an optional “wizard” function that could be

dismissed by the user. This would be analogous to many proprietary tax preparation software

packages which lead the user through a series of questions, convert received information into an

XML file, and then automatically configure the required documentation accordingly, as

demonstrated on the right side of the diagram. See Figure 2: Interrogative User Interface, which


compares the functions of a typical, easy-to-use tax preparation software package to illustrate the

described functionality of a possible user interface.

Figure 2: Interrogative User Interface
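As a purely illustrative analogue of the tax-software style of guidance, the sketch below steps a user through a handful of prompts and emits a structured study definition (JSON here rather than XML); the questions, allowed menus and field names are invented for illustration.

```python
# Hypothetical sketch of an interrogative, stepwise study-definition "wizard".
import json

QUESTIONS = [
    ("protocolId", "Protocol ID", None),
    ("therapeuticArea", "Therapeutic area", ["Oncology", "Respiratory", "Cardiovascular"]),
    ("studyPhase", "Study phase", ["Phase 1", "Phase 2", "Phase 3", "Phase 4"]),
    ("blinded", "Is the study blinded? (y/n)", ["y", "n"]),
]

def run_wizard(answer_fn=input):
    """Ask each question in turn, restricting answers to the allowed menu where one exists."""
    study = {}
    for field, prompt, options in QUESTIONS:
        suffix = f" {options}" if options else ""
        answer = answer_fn(f"{prompt}{suffix}: ").strip()
        while options and answer not in options:
            answer = answer_fn(f"Please choose one of {options}: ").strip()
        study[field] = answer
    return study

if __name__ == "__main__":
    design = run_wizard()
    # Downstream systems would consume this structured output instead of a document.
    print(json.dumps(design, indent=2))
```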


3 Conceptual Design

User Narratives

In order to facilitate automated study-specific systems configuration, a Study Builder platform

would need to assemble structured study elements from existing standards repositories according

to the study design. The Study Builder would constitute a central capability of the platform. The

Study Builder Narrative and Data Flow Schematics (Appendix A) describes the user workflow that

we anticipate the platform would need to support (See Section 2.1 and Figure 1: Use Cases).

To provide greater context regarding the current workflows to support configuration of core

clinical systems, the Current State Protocol Creation and Study Start-up Systems Configuration

narrative describes user processes for Protocol Author, Data Collection and Operational Systems

(Appendix B).

Impact to Current Practice

A Study Builder would require a significant change to the current practices of most sponsor

companies. In general, studies are designed offline and outside of core systems; only once key design elements have been determined are those elements captured in a Protocol document, which becomes the central record of standard and study-specific requirements.

A Study Builder likely would enable or require teams to begin working in the platform much

earlier, during the study design phase. Because it largely would be “assembling” pre-specified

standard elements based on the study design, it would enable more immediate feedback

regarding the impact that design choices have on downstream assets and processes.

The primary study design roles (clinical science, biostatisticians and medical writing) engaging

with the tool early in the design phase would constitute a shift in the way studies are currently

built and configured. Data managers and systems specialists would continue to play an important

role, but they may no longer have primary accountability for the bulk of systems configuration. In

addition, the efficiencies that would be enabled by a Study Builder platform would constitute an

incentive for sponsors to create and maintain libraries of pre-specified study design elements and

templates, or to utilize libraries created by others.

Key Features

The Key Features for the Study Builder (Appendix C) lists the functionality that we believe

should be prioritized in the development of such a platform. The requirements are not intended to

be comprehensive, in that they do not describe all anticipated Use Cases which could be

implemented in subsequent releases.


Conceptual System Architecture

The following Figure describes the potential future state conceptual system architecture to

support automated study start-up in clinical systems. Major solution components and concepts

are noted on the diagram and described in text following the diagram.

Figure 3: Future State Conceptual System Architecture

1. The CDISC Library uses linked data and a REST API to deliver CDISC standards metadata

to software applications that automate standards-based processes. CDISC Library

provides access to new relationships between standards as well as a substantially

increased number of versioned CDISC standards and controlled terminology packages.

2. Sponsor Global Libraries: It is assumed that the global libraries would be systems in

the sponsor’s ecosystem. The global libraries would be used by sponsors to define global

and disease-specific clinical standards for use by the Study Builder application, as well as standard descriptors to complement the use of clinical standards. The combined output from all standards, such as those defined by CDISC, would provide the data elements, controlled terminology, allowed values, etc. that ensure consistency across the sponsor's study designs.

3. The Study Builder would be the central component of the DDF solution vision, providing

study authors an easy-to-use, guided (e.g., interrogative, wizard-based) application to

design and assemble a “machine readable” instance of a clinical study. The Study Builder

should consist of a modular architecture to allow functional components to be “plugged

in” as needed to support sponsor-specific study design requirements. Functionality could

consist of, but not be limited to, the following:


• Study Build Components that make up the sections of a traditional protocol, such as Objectives, Endpoints, Schedule of Activities, etc.

• Analysis Components, such as simulation and machine learning, to help automate study design work by tapping into historical and external study data.

• Core Application Functionality such as versioning, workflow, and security.

4. The Study Design Repository (SDR) would be a versioned repository of all study

designs, using a Common Data Model (CDM) as its native storage and exchange format.

The SDR is an integral part of the Study Builder and would likely be implemented as a

component of the Study Builder. It is identified in this conceptual design to emphasize the

importance of providing external access using CDM format, to support open operability

with systems provided by other suppliers. In addition to required study structures as

specified in the CDM, the SDR will also contain additional metadata to enable the system

to maintain an audit trail of user and system updates, and full version history to be able to

support editing workflows and allow prior versions to be accessed from any point in time.

5. Study Design Data Sources: Any supporting data that would be used in future releases

of the Study Builder to help inform study design. These sources would be used by

simulation and machine learning algorithms to perform advanced functionality such as

suggesting assessments that would be needed in a study, estimating study costs, etc.

6. The Deployment Engine would provide translation between Study Builder APIs and

vendor-specific system formats/interfaces where needed for existing/legacy systems. This capability would be needed for systems that are not compatible with the

platform and unable to access its APIs. Ideally, this would be a temporary capability

needed until consumer systems are able to update their native interfaces to access the

platform APIs directly. This should be considered a conceptual component at this time; it may be provided as a DDF technology component, or left to sponsor companies or

system integrators to implement using their own preferred integration technology (ESB,

iPaaS, etc.).

7. “DDF Compatible” Systems would have the ability to directly interface with Study

Design APIs, using the Common Data Model for data exchange. Note that in the diagram

the arrow indicates data flow, not the direction of connectivity. DDF-compatible systems that follow these open design principles would connect to the DDF APIs to directly access and retrieve study designs (a minimal client sketch appears after this list). The API designs would be defined

and developed as part of future DDF activity. The APIs would be expected to comply with

modern API best practices (e.g., REST, JSON, OAuth, etc.).

8. Common Protocol Template: The platform should be able to generate published

documents based on the machine-readable study design. This component would use a

template-based approach to auto generate output document formats for each study

design version. This includes the protocol as well as documents which include identical

content, such as the SAP and CSR.

9. Multiple vendors and systems would make up the ecosystem supported by the

platform. The project objective is to support multiple vendors, whether compatible with

CDM APIs or not. This does not indicate that a given sponsor would be expected to have

multiple systems, but each may use systems from different vendors. The primary design

principle of DDF is to encourage and support an open ecosystem, with no preference

given to any particular vendor.
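To make item 7 concrete, the sketch below shows roughly how a DDF-compatible downstream system might pull a study design over such an API; the endpoint path, token flow and payload fields are assumptions made for illustration, since the actual API designs would be defined in future DDF activity.

```python
# Hypothetical sketch: a DDF-compatible consumer pulling a study design via REST + OAuth.
import requests

API_BASE = "https://studybuilder.example.com/api/v1"   # illustrative URL

def fetch_study_design(study_id, access_token):
    """Retrieve the CDM-formatted study design for a given study."""
    response = requests.get(
        f"{API_BASE}/studies/{study_id}",
        headers={"Authorization": f"Bearer {access_token}",
                 "Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def visits_from_design(design):
    """Pick out the Schedule of Activities to drive, e.g., EDC visit configuration."""
    return [visit["name"] for visit in design.get("scheduleOfActivities", {}).get("visits", [])]

# Usage (assuming a token obtained from the sponsor's existing OAuth provider):
# design = fetch_study_design("STUDY-001", token)
# print(visits_from_design(design))
```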


Common Data Model (CDM)

The CDM defines a set of data elements, controlled terminology and relationships based on

existing industry standards which would be leveraged to automate the transfer and re-use of the

same data by different systems. As a conceptual design element, the CDM constitutes a key

component of the platform, enabling the platform to be utilized by different study sponsor

organizations and a broad range of vendor types and vendor systems. The greater the alignment

that can be achieved on the CDM, the greater would be the efficiency of the platform.

Foundational industry standards (CDISC (SDTM, ODM-XML, CTR-XML, etc.), CPT libraries,

ClinicalTrials.gov, EudraCT, HL7/FHIR) would need to be leveraged to define the data elements

of the CDM. The selections made in the Study Builder will utilize the controlled terminology (e.g.

CDISC, LOINC, SNOMED, MedDRA, ICD9/10, IDMP) and relationships to identify additional

components from other libraries (CDISC Library, Sponsor Global Libraries) which would identify

the data to be received during defined study time periods; this information would be stored in additional data elements of the CDM. Utilizing controlled terminology will be critical in order to facilitate

downstream automation.

The data elements of CDM and their relationships are represented in Figure 4 below. The yellow

notes in the diagram explain various components in the CDM, and the relationships among them.

The Table that follows provides examples of some principal elements of the CDM. Note that

many of the data elements included in the Data Flow Schematics diagram in Appendix A either

correspond to these elements or would be derived from these elements.


Figure 4: CDM Architecture


Table 1: Example of CDM Data Elements (Data Element | Definition | Expected Terminology | Required?)

ProtocolID | Any unique identifier assigned to the protocol by the sponsor. | — | Required

ProtocolTitle | The title of the clinical study, corresponding to the title of the protocol. | — | Required

TherapeuticArea | The therapeutic area the study is investigating. | — | Required

Indication | The indication(s) the study is investigating. | SNOMED CT | Required

StudyPhase | For a clinical study of a drug product (including a biological product), the numerical phase of such clinical study, consistent with terminology in 21 CFR 312.21 and in 21 CFR 312.85 for phase 4 studies. Select only one. | N/A, Early Phase 1, Phase 1, Phase 1/Phase 2, Phase 2, Phase 2/Phase 3, Phase 3, Phase 4 | Required

ObjectiveFullText | Full text of the objective from the objective library. | — | Required

ObjectiveLevel | Level of the objective for the study. | Primary, Secondary, Tertiary, Exploratory | Required

ObjectiveBiomedicalConcept | Linked biomedical concept(s) of the objective from the objective library. | — | Conditionally Required (if the Objective does not have one, the Endpoint must have a Biomedical Concept)

ObjectiveAnalysisType | Linked analysis type(s) of the objective from the objective library. | Change, Occurrence, Time to Event | Conditionally Required (if the Objective does not have one, the Endpoint must have an analysis type)

ObjectivePrimaryTimepoint | Primary timepoint of the objective selected for the study. Note: for analyses that assess change over time, a primary and at least one secondary timepoint must be defined. | — | Required

ObjectiveSecondaryTimepoint | Secondary timepoint(s) of the objective selected for the study. | — | Conditionally Required (if analysis involves more than one timepoint)
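To show how elements like these might be represented programmatically, here is a small, hypothetical Python rendering of a few of the CDM elements from Table 1; the field names follow the table, but the enumerations and optionality are simplifications for illustration, not a normative model.

```python
# Hypothetical, simplified rendering of a few CDM elements from Table 1.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class StudyPhase(Enum):
    NA = "N/A"
    EARLY_PHASE_1 = "Early Phase 1"
    PHASE_1 = "Phase 1"
    PHASE_1_2 = "Phase 1/Phase 2"
    PHASE_2 = "Phase 2"
    PHASE_2_3 = "Phase 2/Phase 3"
    PHASE_3 = "Phase 3"
    PHASE_4 = "Phase 4"

class ObjectiveLevel(Enum):
    PRIMARY = "Primary"
    SECONDARY = "Secondary"
    TERTIARY = "Tertiary"
    EXPLORATORY = "Exploratory"

@dataclass
class Objective:
    full_text: str                                   # ObjectiveFullText (required)
    level: ObjectiveLevel                            # ObjectiveLevel (required)
    primary_timepoint: str                           # ObjectivePrimaryTimepoint (required)
    biomedical_concepts: List[str] = field(default_factory=list)  # conditionally required
    analysis_type: Optional[str] = None              # "Change", "Occurrence", "Time to Event"
    secondary_timepoints: List[str] = field(default_factory=list)

@dataclass
class StudyDesign:
    protocol_id: str                                 # ProtocolID (required)
    protocol_title: str                              # ProtocolTitle (required)
    therapeutic_area: str                            # TherapeuticArea (required)
    indication: str                                  # Indication (SNOMED CT term, required)
    study_phase: StudyPhase                          # StudyPhase (required)
    objectives: List[Objective] = field(default_factory=list)

# Usage:
design = StudyDesign(
    protocol_id="STUDY-001",
    protocol_title="A Phase 3 Study of Drug X in Severe Asthma",
    therapeutic_area="Respiratory",
    indication="Severe asthma",
    study_phase=StudyPhase.PHASE_3,
    objectives=[Objective(
        full_text="To evaluate the effect of drug X in participants with severe asthma",
        level=ObjectiveLevel.PRIMARY,
        primary_timepoint="Baseline",
        analysis_type="Change",
        secondary_timepoints=["Week 12"],
    )],
)
```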


Adoption and Evolution

TransCelerate member companies and study sponsors are not currently working in ways that would be enabled by this platform. We anticipate that efficiency gains and full realization of benefits will require an evolution of system adoption, process development and adoption of data standards. Appendix D provides a Maturity Model that describes how an organization could theoretically progress through various stages along the path to full adoption and realization.

4 Interfaces to Collaborative and External Initiatives

The TransCelerate DDF initiative’s goal of improved automation and data flow across the clinical

development process is shared by many, as demonstrated by numerous related projects in this

ecosystem. An important pillar of the initiative’s success is recognizing the project linkages and

collaborating with both TransCelerate internal initiatives and ongoing projects that are external to

the organization. Examples include:

CDISC 360 & TransCelerate DDF Work Stream

Both CDISC and TransCelerate are running complementary initiatives intended to facilitate automation and improved data flow across the clinical development process. CDISC 360 and

TransCelerate DDF work stream points of contact have monthly meetings to discuss progress

and to highlight accomplishments and open issues.

CDISC 360 is focused on:

1) A conceptual model that provides all the semantics for creating a standard analysis

output associated with a study endpoint, including all the upstream data artifacts needed

to create the analysis output.

2) An automated workflow that will use this model to auto-create a study specification and

then auto-create the study artifacts (analysis output, analysis data, tabulation data,

collected data and collection instrument configuration).

The CDISC 360 Project and TransCelerate DDF initiatives share the goals of:

1) Digitizing study design, leveraging existing CDISC standards.

2) Sharing learnings and identifying enhancements in the use of standards to enable automation.

ICH M11 (Clinical Electronic Structured Harmonized Protocol/CeSHarP)

The new ICH M11 objectives are to develop:

1) A template to include identification of headers, common text, data fields, and a set of data

fields and terminologies which will be used as the basis for efficiencies in data exchange.

2) A technical specification using an open, nonproprietary standard to enable electronic

exchange of clinical protocol information.

Draft guidance is anticipated to be released for public comment in June 2020.


TransCelerate Initiatives

The following TransCelerate initiatives have a shared goal of reducing cycle times and burden by

enabling technical solutions:

Common Protocol Template:

The Common Protocol Template Initiative is working with industry stakeholders and regulators to

create a model clinical study protocol template containing a common structure and model

language to improve accuracy in data recording and speed study start-up.

Key Connection: Assess the future roadmap, e.g., CPT as the user interface and positioning of DDF/Study Builder (work in progress). Managed transfer service (with logging, audit trail, version control) for document-to-document transfer (e.g., Protocol to SAP and CSR).

eSource

The eSource Initiative seeks to assist TransCelerate Member Companies, and ultimately other study sponsors, in overcoming real and perceived challenges in order to drive more efficient data-gathering practices that benefit patients, sites and sponsors.

Key Connection: Standards Methodology and Collaboration with HL7; automated configuration for Electronic Health Record (EHR) data acquisition.

Patient Technology

The Patient Technology (PT) Initiative seeks to facilitate and accelerate the industry’s progression

towards a future where patients have access to innovative technologies that enhance the patient

experience and reduce patient burden in clinical studies.

Key Connection: Automated search from Study Builder based on matches to study design

(objectives, endpoints, patient demographics) returns applicable technology from a technology

knowledgebase to study designers and protocol authors. A method for feeding data from the Study Builder into such a PT search would be needed.

Placebo Standard of Care

The Placebo and Standard of Care (PSoC) Initiative was established to enable data sharing and

maximize the value of clinical data collected historically in the placebo and standard of care

control arms of clinical studies.

Key Connection: Companies participating in PSoC could search historical data to find matches

to studies with potential historical control data.

Investigator Registry

The Investigator Registry Initiative enhances TransCelerate’s Shared Investigator Platform, and

accelerates identification and recruitment of investigators, which will avoid duplicate creation of

investigator documentation, thereby reducing cost and study length.

Key Connection: CTMS data sharing tool for sites and consenting investigators.

Potential downstream system connections.


Shared Investigator Platform

The Shared Investigator Platform (SIP) will facilitate interaction between investigators and

multiple clinical study sponsors, enabling study planning, study start-up and study conduct

activities while reducing the administrative burden on site staff.

Key Connection: Study Builder would be able to support aspects of study-specific configuration.


Appendix A: User Process Supported by Study Builder and Data Flow Schematics

Future State Narrative

Purpose

This appendix describes a technology-enabled business process for defining a clinical study

design supported by a Study Builder platform. This narrative lays out the activities and

functionality the tool should support. The final section of this appendix consists of schematics

which explain the data flows implied by the business process.

The Study Builder will constitute the central component of a broader application ecosystem.

Study Builder will articulate the study design by accessing centrally stored definitions from a

Study Design Repository; it will enable the automated use/reuse of design elements in

downstream applications by leveraging clinical data standards and study design metadata. The

use of industry standards (e.g., CDISC, HL7), or company-specific standards and metadata, will

enable structured protocol elements to be “prepared” for presentation to Application

Programming Interfaces (APIs) that will in turn automate configuration of those downstream

systems. The system configuration APIs themselves are out of scope for this document.

A Study Builder would be expected to support these two main capabilities:

1. Import, storage and accessibility of study design templates and associated standards.

2. Sharing or presentation of information that would support configuration of downstream

systems.

Initially, the platform should prioritize information supporting configuration of electronic data

capture (EDC) systems. Subsequently, the platform also should facilitate configuration of IRT,

CTMS, eTMF and Portfolio Project Management systems. Eventually, other downstream

applications could include clinical study registries, e-Consent and e-Source applications.

Architecture of the tool should also support data inputs from other “upstream” sources of

information such as site performance benchmarking data, study cost estimators, and other data

sources that support study designers in determining patient populations and optimizing trial

design.

The tool’s user interface should be based on structured prompts (e.g., selection menus, drop

down lists, interrogative prompts, or a “wizard”) that will guide users through the workflow and

design process.

In addition, the tool should support multiple output formats, including readable document

formats (to populate the Protocol document itself) and digital formats according to the technical

requirements of downstream systems. Each of those outputs should allow for the addition of

supplementary data or specifications, or formatting for documentation. Any output data that

requires correction (rather than supplementary information) should be highlighted back to the

user, to prompt investigation of the associated standard, or to ensure links are correct.

A detailed listing of Key Functionality is provided (See Appendix C).


Supported Roles

The primary roles involved in the study design and build process include clinical science,

biostatistics and medical writing; other functions such as data managers or system

programmers may also have input to specific study design aspects. The narrative is written from

a process or activity perspective to focus on the functionality the tool will support, rather than the

specific job titles responsible for individual tasks. Because the process involves several roles,

the tool should support collaboration among multiple users and should provide permission and

access controls to avoid unintended changes.

Study Build Workflow

The workflow described below outlines how we envision different portions of the study being

completed as “modules” that represent different aspects of the study design process.

Information will be defined iteratively, likely by different authors, until all design elements have

been defined.

The anticipated study design workflow is:

• Study Level Content

• Fixed/Common Study Design Elements

• Protocol Level Content

• Randomization Schema

• Visit Matrix

• Data Collection Elements

Start of Activity in Study Builder

Each company has a clear trigger that activates study design activities. In most cases, this is

related to executive approval or budget approval. At this stage, high-level information

(compound, therapeutic area, indication, protocol ID, phase and study type) may already be

available in company systems (e.g., CTMS or Master Data).

Study Level Content

Once study design activities are triggered, study build starts by entering protocol ID (study

unique identifier), primary study treatment (compound), therapeutic area (TA), indication, study

phase and study type (i.e., interventional, randomized, blinding, etc.). This information is most

often available in source systems within the study sponsor (e.g., CTMS, Portfolio Project

Management systems, or Clinical Development Plans) and the builder should be able to

automatically upload the elements to be surfaced to the user upon prompting. If these data

elements are not available in source systems, the user should be able to enter them into the

Study Builder directly (See Figure 6).

Specifying TA, indication and compound would ensure subsequent steps are linked to the

appropriate libraries of objectives, endpoints, assessments and forms. Subsequent choices will


be directed toward the appropriate sets of standards by dynamic prompting to the appropriate

sections of those libraries. Similarly, other information such as study phase, study type, study

participant population, dose or study geography may further refine the appropriate libraries to be

presented as study build progresses. For example, if study attributes indicate a phase 3,

placebo-blinded oncology study of a particular compound versus standard of care in breast

cancer, the library elements of objectives, endpoints and associated EDC and data collection

elements for oncology/ breast cancer would automatically become prioritized options to the

user.

For instances where a required standard or template does not exist (e.g., a new indication), the

Study Builder would support creation of a new term and a workflow and approval process for

that new term to be considered and incorporated as an additional standard in the Study Design

Repository.

Figure 5: Study Level Specification

Fixed Study Concepts and Study Operational Requirements

All studies share specific elements that are dependent on the TA or indication (objectives,

endpoints, eligibility criteria and assessments) and many that are not dependent on the TA. The

latter include baseline demographics, medical history, treatment preparation and administration,

concomitant medications and adverse events (AE). The tool would ensure that all components

are included in each complete study design. These non-TA dependent structural elements tend

to be consistent across all studies, though specific versions of forms may depend on the

therapeutic area (e.g., oncology-specific AE forms). Information regarding study comparator

arms, randomization schema, study blinding, and treatment administration would also be

entered in the tool.

Protocol Level Content

The protocol title, protocol short title and protocol acronym would be confirmed (if uploaded from

a source system) or added. Each of these would be a redundant unique identifier linked to the

protocol ID. Based on the TA, indication, compound, phase and other attributes, the tool will


direct the user to choose from the most appropriate reference libraries of pre-defined objectives,

endpoints and assessments.

Study Objectives would be selected/assigned from the relevant participant-, therapeutic area- or

compound-specific content library, and categorized as primary, secondary, tertiary or

exploratory objectives. Each objective would be linked to a TA-specific set of study endpoints

which in turn is associated with biomedical concepts (clinical procedures, instrument or

laboratory measurements) and definitions. Note that, in cases where the study being designed

is identical or similar to another study (e.g., a prior study or a “sister study”) the tool should be

able to access and pre-populate that information, with confirmation from the user after any

required modification or adjustment.


Figure 6: Protocol Level Specification

For example, using the TA Library for Asthma, a study in severe asthma could have as its

Primary Objective “To evaluate the effect of drug x in participants with severe asthma.” The

primary endpoints linked to this objective are limited to “absolute change in percent of predicted

FEV1 from baseline to [Week X]” OR “increase [magnitude of change] in FEV1 from baseline to

[Week X].” This also implies that the FEV1 biomedical concept will require spirometry

assessments to be scheduled at baseline (CDM: primary timepoint) and week X visits (CDM:

secondary timepoint), and that FEV1 measurements will need to be captured in the study

database, either by EDC or via data transfer. Further, options for Secondary Objectives include

FVC or FEV1/FVC ratio (spirometry), reduction in symptoms (questionnaire data) or fewer

Clinical Exacerbations (medical history or diary data) or reduction in the use of rescue

medication (diary, dosing device or medication count data). As each objective is chosen, the

appropriate choice of linked assessments and measures would also be assembled in the tool

using the latest available standards for that assessment.
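To illustrate the kind of linkage described in this asthma example, the following sketch models a tiny, invented slice of a TA library in which selecting an objective pulls in its allowed endpoints, biomedical concepts and scheduled assessments; the identifiers and library contents are hypothetical.

```python
# Hypothetical sketch: a tiny therapeutic-area library where selecting an objective
# brings along its linked endpoints, biomedical concepts and required assessments.
ASTHMA_TA_LIBRARY = {
    "OBJ-SEV-ASTHMA-EFFECT": {
        "text": "To evaluate the effect of drug X in participants with severe asthma",
        "endpoints": [
            "Absolute change in percent of predicted FEV1 from baseline to Week X",
            "Increase in FEV1 from baseline to Week X",
        ],
        "biomedicalConcepts": ["FEV1"],
        "assessments": {"Spirometry": ["Baseline", "Week X"]},  # CDM primary/secondary timepoints
    },
}

def assemble_objective(objective_id, level, library=ASTHMA_TA_LIBRARY):
    """Resolve an objective selection into the linked design elements the builder would assemble."""
    entry = library[objective_id]
    return {
        "objective": entry["text"],
        "level": level,
        "endpoints": entry["endpoints"],
        "biomedicalConcepts": entry["biomedicalConcepts"],
        "scheduledAssessments": entry["assessments"],
    }

print(assemble_objective("OBJ-SEV-ASTHMA-EFFECT", "Primary"))
```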

Randomization Schema and Study Design Concepts

The tool would prompt the user to specify the randomization schema and other elements of the

protocol design. It would provide suggestions of study design elements based on the user’s

previous choices of indication, compound and other details.

The tool would support creation of new design elements as they are required, along with a workflow and approval process for each new element to be considered and incorporated as an additional standard in the Study Design Repository, for use in designing future studies. This intelligence is

key to the vision for the tool.
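
A minimal sketch of the kind of approval workflow described here, assuming a simple status lifecycle for a newly authored design element before it is promoted to a reusable standard; the states and transitions shown are illustrative assumptions, not a defined DDF workflow.

```python
# Illustrative only: a newly authored design element moves through review before
# being promoted to the Study Design Repository as a reusable standard.
from enum import Enum

class ElementStatus(Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED_AS_STANDARD = "approved_as_standard"
    REJECTED = "rejected"

ALLOWED_TRANSITIONS = {
    ElementStatus.DRAFT: {ElementStatus.UNDER_REVIEW},
    ElementStatus.UNDER_REVIEW: {ElementStatus.APPROVED_AS_STANDARD, ElementStatus.REJECTED},
}

def transition(current: ElementStatus, target: ElementStatus) -> ElementStatus:
    """Move a new design element to the next lifecycle state, rejecting invalid jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target

status = ElementStatus.DRAFT
status = transition(status, ElementStatus.UNDER_REVIEW)
status = transition(status, ElementStatus.APPROVED_AS_STANDARD)
print(status.name)  # APPROVED_AS_STANDARD: element now available for future studies
```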

Figure 7: Specification of Objectives and Endpoints

Visit Matrix Specification

The study visit matrix (or Schedule of Assessments, SoA) normally appears in table format, with

rows representing the assessments to be completed and columns indicating the timing of the

study visits or interventions. This representation serves as a central reference for the

configuration of most study-specific tools and systems. The Study Builder should be capable of deriving the study visit matrix by combining previously specified assessments with their required timing, based on design templates associated with the therapeutic area, indication and study type or phase. The visit schedule should also be logically consistent with the study objectives and primary efficacy or safety endpoints (e.g., if the primary endpoint is based on an efficacy assessment at Week X, a Week X visit must be scheduled) and may be tied to pre-determined schedules (e.g., oncology studies tied to treatment cycles). If standards or source data are not available in the Common Data Model repository, the builder will allow manual configuration.
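
To illustrate the derivation described above, here is a small, hypothetical sketch that combines previously specified assessments and their required timing into a visit-by-assessment grid; the visits, assessments and function names are assumptions for illustration only.

```python
# Hypothetical sketch: build a Schedule of Assessments grid from assessment/timing pairs.
visits = ["Screening", "Baseline", "Week 4", "Week 12"]

# Each assessment lists the visits at which it is required (would come from templates/endpoints).
assessment_timing = {
    "Informed Consent": ["Screening"],
    "Spirometry (FEV1)": ["Baseline", "Week 12"],
    "Adverse Events": ["Baseline", "Week 4", "Week 12"],
}

def derive_soa(visits, assessment_timing):
    """Return a dict of {assessment: {visit: bool}} representing the SoA table."""
    return {
        assessment: {visit: (visit in scheduled) for visit in visits}
        for assessment, scheduled in assessment_timing.items()
    }

soa = derive_soa(visits, assessment_timing)
for assessment, row in soa.items():
    marks = ["X" if row[v] else " " for v in visits]
    print(f"{assessment:<20} " + " ".join(marks))
```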

Figure 8: Specification of Timing and Analytical details for Assessments

Figure 9: Selection (or Auto-Selection) of Standard Library Elements

To support configuration of downstream systems, specific details would be required for each

visit, including, for example, limits for deviations from expected visit schedule (“visit windows” or

expected study day for a visit, plus minimum and maximum allowable), and whether or not

treatment is scheduled to be administered on that visit. In addition, planned analyses could also

be associated with specific assessments.
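
A brief, illustrative sketch of the visit-level detail described above; the field names are assumptions rather than a defined schema.

```python
# Illustrative structure only; field names are not part of any defined DDF schema.
from dataclasses import dataclass

@dataclass
class VisitDefinition:
    name: str
    planned_study_day: int          # expected study day for the visit
    window_minus: int               # earliest allowable deviation, in days
    window_plus: int                # latest allowable deviation, in days
    treatment_administered: bool
    planned_analyses: tuple = ()    # analyses associated with assessments at this visit

week4 = VisitDefinition(
    name="Week 4",
    planned_study_day=28,
    window_minus=3,
    window_plus=3,
    treatment_administered=True,
    planned_analyses=("interim efficacy summary",),
)
print(f"{week4.name}: day {week4.planned_study_day} "
      f"(window {week4.planned_study_day - week4.window_minus}"
      f"-{week4.planned_study_day + week4.window_plus})")
```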

Data Collection

Outputs of the Study Builder should link specified assessments to a current version of data

collection standards. This would include standard EDC forms or file transfer specifications for

non-CRF data.

The tool would allow specified assessments to be linked with relevant source systems or libraries (i.e., repositories of design templates by TA or phase, data standards or CDM elements) that map those assessments to standards and forms (such as EDC forms) and include any associated logic-check specifications and validation scripts. These would also be tied to data standards that automatically annotate fields according to the applicable standard data model (e.g., CDISC ODM for systems configuration).
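
As a minimal sketch of the idea of emitting a machine-readable data-collection specification, the snippet below generates a small XML fragment for a form and its fields. The element and attribute names are generic placeholders chosen for illustration; an actual DDF output would follow the applicable standard data model (e.g., CDISC ODM) rather than this simplified structure.

```python
# Minimal sketch of generating a machine-readable data-collection specification.
# Element and attribute names here are generic placeholders, not CDISC ODM elements.
import xml.etree.ElementTree as ET

def form_specification(form_name, fields):
    """Build a small XML fragment describing a form, its fields and logic checks."""
    form = ET.Element("Form", name=form_name)
    for field_name, data_type, check in fields:
        ET.SubElement(form, "Field", name=field_name, dataType=data_type, logicCheck=check)
    return ET.tostring(form, encoding="unicode")

# Hypothetical spirometry form tied to the FEV1 assessment specified earlier.
print(form_specification("Spirometry", [
    ("FEV1", "float", "value > 0"),
    ("FEV1_PCT_PREDICTED", "float", "0 < value <= 150"),
]))
```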

Version Control: Support for Protocol and Protocol Amendments

Once all details of study design are complete, and outputs that allow configuration of

downstream systems have been created, the tool must allow outputs to be saved as an intact

“version” of all structured elements for that study.

When the Study Protocol is subsequently amended, the tool should allow for specific sections or

technical details of the design to be updated. Updates would be automatically linked to

standards repositories. Outputs would be similar to those of the initial version, including both text and digital file outputs to update downstream documents and systems configuration, respectively. Once completed, the updated design would be saved as a new, comprehensive version.
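
A minimal sketch of the versioning behavior described above, in which each saved study version is an immutable snapshot of the structured design elements; the class and method names are illustrative assumptions.

```python
# Illustrative sketch: each saved study version is an immutable snapshot of the
# structured design elements, so amendments produce a new version rather than
# mutating an approved one.
import copy

class StudyDesignVersions:
    def __init__(self):
        self._versions = []           # list of (version_number, snapshot) tuples

    def save_version(self, design_elements: dict) -> int:
        """Store a deep copy of the current structured elements as the next version."""
        version_number = len(self._versions) + 1
        self._versions.append((version_number, copy.deepcopy(design_elements)))
        return version_number

    def get(self, version_number: int) -> dict:
        return copy.deepcopy(self._versions[version_number - 1][1])

history = StudyDesignVersions()
design = {"objectives": ["primary objective"], "visits": ["Baseline", "Week 12"]}
v1 = history.save_version(design)

# A protocol amendment adds a visit; saving creates version 2, version 1 is unchanged.
design["visits"].append("Week 24")
v2 = history.save_version(design)
print(v1, history.get(v1)["visits"])   # ['Baseline', 'Week 12']
print(v2, history.get(v2)["visits"])   # ['Baseline', 'Week 12', 'Week 24']
```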

User Experience: “Real Time” Synchronization versus Discrete Updates to

Downstream Outputs

As a protocol is created in the tool, it should be able to generate outputs in text and digital

formats. This could be designed as discrete generation steps, where a user command causes the system to generate or render its outputs. Alternatively, the system could be configured to

automatically create outputs continuously, in real time. The latter would be particularly beneficial

for text outputs, as it would allow users early visualization of their work and facilitate early

detection of errors or inconsistencies.
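
One way to picture the two modes is sketched below: outputs regenerated only on an explicit user command (discrete mode) versus automatically on every change to the design (an observer-style, real-time mode). Everything in the sketch is illustrative and not a prescribed design.

```python
# Illustrative comparison of on-demand versus continuous output generation.
class StudyDesign:
    def __init__(self):
        self.elements = {}
        self._listeners = []            # callbacks invoked on every change ("real time" mode)

    def on_change(self, callback):
        self._listeners.append(callback)

    def update(self, key, value):
        self.elements[key] = value
        for callback in self._listeners:
            callback(self.elements)     # continuous mode: regenerate outputs immediately

def render_protocol_text(elements):
    print("Rendered protocol draft with sections:", ", ".join(sorted(elements)))

design = StudyDesign()

# Discrete mode: the user explicitly asks for outputs.
design.update("objectives", ["primary objective"])
render_protocol_text(design.elements)

# Real-time mode: register a listener so every change re-renders the draft.
design.on_change(render_protocol_text)
design.update("visit_matrix", ["Baseline", "Week 12"])
```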

Data Flow Schematics

The following diagrams show the decision tree that would collect data essential for designing a

study and feeding downstream systems. The sequence of figures demonstrates the logical flow

described in the narrative, as the user moves from providing high level study descriptions

(Protocol ID, TA, Compound) to more specific details about the study design (Item 1 below).

The tool should prompt the study author to include “fixed” study concepts that are not study-

specific but are included in every study (Item 2). The tool should also be able to derive the

Schedule of Assessments from the structured information provided (Item 3). Items 4, 5 and 6

further describe how those data elements would flow through the process.

Diagram 1

1. General Flow, Overall Data Elements: The Study Level Content or Asset Level

Content would represent the highest level of data required to plan a study or asset

strategy. This information is typically stored at a strategic decision level – leading to a

Go/No Go decision on an asset or a study. The data maintained here defines the

essential study level data points that can be used to limit choices or libraries used to

create study-level systems or plans. The Protocol Level Content represents the details that would be required to begin drafting a specific protocol. By collecting this information as individual data points, the tool should not only be able to drive protocol authoring, but the data would also be usable by other downstream systems, such as EDC.

2. General Flow Protocol, Fixed Study Concepts: These are items that are minimally

required to be included in every study. There may be some variations of particular items

based on the study level data.

a. Study Concepts include items that are minimally required to be included in every study, with the specific design driven by the study level data selections. For example, if “oncology” was selected as the Therapeutic Area, specific requirements for oncology studies (e.g., specific adverse event or treatment administration forms) would be prioritized for selection.

b. Assessments and Measures are driven by the selected Objectives and Endpoints.

Each endpoint and objective contains three pieces of data that can be used for decisions:

Biomedical Concept being analyzed (e.g., lab value); Timing (e.g., baseline vs 12

weeks); Analysis Requirements (e.g., collected as a % value).

c. Study Operational Requirements are decisions that are required to support

operational needs of a study. For example, an endpoint may not require dosing information to be captured, but the analysis and study design will still depend on this information.

d. Orphan Assessments are assessments that have been manually added to a study

without being associated directly with a specific Objective or Endpoint, are not

required for Study Operations and do not fit into the TA level requirements. These

assessments would require some business approval to be added to the final study

design.

3. General Flow Protocol, Schedule of Assessments: One of the key outputs from the information collected would be a digitized version of the SoA. Each point of required data

would have been defined in the earlier steps.

Diagram 2

4. Fixed Study Concepts: These represent the “minimally required” information to run a study.

They are common to all studies. For example, Patient Demographic information is required in all studies with only minor variations, but the information collected in Study

Level content can drive the available variants that are presented.

Diagram 3

5. TA Driven Concepts: Beginning at a “Study Level” or “Program Level”, the tool would prompt the user with high-level questions that limit the number of available libraries or data sets to select from in future steps. For example, specifying a TA will allow selections from any library (or standards repository) designated for that area. The Trial Concepts represent required material for the selected Study Level Content.

Diagram 4

6. Assessments and Measures: The Objectives and Endpoints would be selected from

available libraries, limited by prior selections at the Study Level Content, and each

endpoint will contain information that can be applied to three main areas: data collection,

analysis requirements and timing in the study.
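
The three decision-driving pieces of data noted in items 2b and 6 could be captured as a small record per endpoint; the sketch below is illustrative only, and the field names are assumptions.

```python
# Illustrative record for the three decision-driving attributes of an endpoint.
from typing import NamedTuple

class EndpointData(NamedTuple):
    biomedical_concept: str     # what is analyzed, e.g., a laboratory value
    timing: str                 # when it is measured, e.g., "baseline vs week 12"
    analysis_requirement: str   # how it is analyzed/collected, e.g., "percent change"

ep = EndpointData("FEV1", "baseline vs week 12", "collected as % of predicted value")
print(ep.biomedical_concept, "|", ep.timing, "|", ep.analysis_requirement)
```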

Appendix B: Current State Protocol Creation & Study Start-up Systems

Configuration

Current State Narrative

Purpose

This appendix describes current, typical workflows for study build and configuration activities

within sponsor companies (and Contract Research Organizations). This provides context for the

challenges to be addressed by the DDF solution platform and illustrates the data flows and

redundancies of current approaches to these business processes.

The three categories of system configuration addressed are the creation of the Study Protocol

Document (Section I), the configuration of Data Collection Systems such as electronic data

capture and non-EDC data collection (Section II), and the configuration of systems used for

operational management of clinical studies, such as Clinical Trial Management (Section III).

I. Protocol Authoring at Study Design and Study Start

This section describes the typical workflow and tasks undertaken as a Study Protocol is

authored. Because the Protocol defines study requirements for all other systems, it is important

to understand how the document is created, from retrieval of the overall document template, to

assembly of language elements for standard procedures, and to creation of new language for

unique study requirements.

The primary roles involved in this process include clinical science, biostatistics and medical

writing or medical editing. Drug Safety may also have a role in authoring specific sections. The

involvement of each of these roles varies across different companies.

Timeframe

The focus is on creation of the Protocol document, once the initial study concept has been

developed. This starts when the protocol author begins the formal writing process in a

structured document template. The timeframe ends when the Protocol is complete and is ready

for internal approval.

Protocol Template

The primary protocol author begins the formal writing process once agreements about the study

concept are in place and often long before formal internal approvals have occurred. There may

be no specific trigger event and it varies by sponsor and circumstance.

The appropriate template is taken from the Document Management System (or shared template

repository). Templates can be employed company-wide or can be adapted to a specific therapy area or molecule. The alternative to using a template would be to start with a recent Protocol

within the same indication. The use of prior Protocols as a starting point is discouraged by most

sponsors to avoid inconsistencies, omissions and the propagation of past errors.

Templates are normally pre-populated with background information on the disease and details

of the study treatment. The author may be required to update this content based on more recent information.

Study-Specific Content

The clinical scientist will develop the rationale and study objectives based on literature research and

assumptions regarding the study population, including primary and secondary response criteria

and expected response rates. The biostatistician would use these assumptions to develop the

sample size calculation, treatment allocation plan and statistical sections related to analysis of

efficacy and safety.

Eligibility Criteria fall into three categories: standard or general criteria (e.g., age, reproductive

status and contraception), disease-specific (diagnostic criteria related to baseline disease

characteristics), and those that are study-specific. Only the study-specific criteria would be

written by the protocol author. The other categories would exist in the template.

Drug Safety may have specific requirements to incorporate in the Protocol and may directly

contribute to writing within the document management system. Requirements could include

safety “run-in” procedures or specific assessments that need to be included. In addition, Drug

Safety will provide input to any Adverse Events of Special Interest which may be specific to the

therapeutic area, study population, disease area, therapeutic class or the particular molecule.

Study Assessments and Procedures

The clinical scientist populates the Schedule of Assessments in the template, in tabular format

and in full text. The template normally includes the names of each of the protocol-specified

procedures. Details are usually expressed in footnotes to the table and are normally part of the

template, except for study-specific details.

Study Interventions and Procedures

The author provides detail to describe the main treatment being studied, as well as comparator

or any “background” treatment to be administered. The details of treatment administration will

also address any precautions required to maintain the study blind, including treatment

packaging. Storage, handling, preparation, dosage and administration of the treatment(s) would

also be described.

Final Review and Internal Approval

The editor reviews the document for consistency, typographical errors and adherence to template requirements.

II. Data Collection Configuration in Key Systems Set up

This section describes the current, typical workflow and tasks undertaken to configure systems

at a global or study level for collection of primary clinical data. Primary clinical data include

direct measurement of study participant characteristics that will be analyzed to assess eligibility

and baseline participant characteristics and responses during study conduct.

The systems considerations addressed in this section are study-level configuration of EDC

(electronic Case Report Form data capture) and preparation for acquisition of “non-EDC” data

from laboratory and other sources, usually via electronic data transfer.

The activities described in this narrative are typically led by roles such as the study data

manager with specialist involvement from database programmer, EDC builder and clinical

programmer or data acquisition specialist.

Study operational data required to enable, oversee and document the conduct of the study, held in systems such as CTMS, IxRS and eTMF, are considered in Section III (Operational Systems Configuration).

Timeframe

Specification for EDC configuration and other data collection begins when Protocol details are

final. This is triggered at the date of Protocol approval or when the Protocol Synopsis is

approved, for sponsors who recognize the Synopsis as a valid study specification document.

Primary data collection systems (eCRF/EDC) must be activated before the first patient starts the

study (e.g., first patient screened). Data collection for non-EDC data must be configured for any

study-specific requirements before first patient assessments, but finalization and testing of data

transfer specifications may occur later.

Overview

Preparing for primary data collection requires information to inform system configuration and

programming at four levels:

• High level study information related to Therapeutic Area and overall design

• Overall logic regarding study design, treatment allocation and visit schedule

• Detailed information regarding assessments, forms, data fields, and associated content

to be collected at each study visit (including CRF and non-CRF data sources)

• Information regarding logic checks and validation associated with different data types

and data fields

High level study parameters are taken from the Protocol (Therapeutic Area, Specific Indication,

Sub-Indication, study phase, overall sample size, study duration, control arms, blinding and

randomization schema) and documented in an overall Data Management Plan. The Protocol is

referenced either in .pdf format or from within a document management system and the

parameters would be tabulated in Excel or Word. Key planned study milestone dates (FPI, LPI, LPO, DBL, etc.) would also be captured.

• Study parameters are used to identify TA- or Indication-specific standards libraries that

will be used to configure EDC, non-EDC data capture and associated logic-check or

validation programs

• Primary product and comparator, primary indication, study phase and study duration

• Number of treatment or control arms and randomization details (e.g., study blinding)

Map Data Sources

When study measurements are finalized (upon Protocol Approval), all data sources should be

categorized according to type of data capture (EDC or non-EDC) and how data will be

transmitted to the study database. Details are tabulated in Excel or Word and include the

following types of categorization:

• Data entered directly to the eCRF (e.g., Clinical measurements, transcription from

participant diaries, or conclusions from diagnostic assessments including image reading

centers)

• Data captured directly in a service provider database that will be batch-transferred to the

study database or computing environment (e.g., PK-PD, biomarker or specialty

laboratory data)

• Data captured directly in a service provider database and automatically transferred to

study database or computing environment (e.g., safety laboratory or IxRS data)

• Data transmitted from devices to a service provider database (e.g., e-PRO)

Some non-EDC data have existing data standards at the company or therapeutic area level and

established data transfer mechanisms (e.g., central laboratory data), while some may be less standardized (e.g., certain ePRO) and others may be study-specific and require de novo

configuration (e.g., novel biomarkers from specialty laboratories).

Electronic Data Capture (EDC)

Visit Logic, Detailed Assessments and Data Forms

Detailed requirements regarding the study’s visit structure and logic are taken from the Protocol

to populate a study specification template (Excel):

• Number of visits, visit timing and assessments to be completed at each study visit during

each study stage (screening, randomization visit, on-treatment visits, number and timing

of follow-up visits, allowance for unscheduled study visits)

• Visit “windows” or limits for acceptable deviations from protocol-specified visit schedule

(this could be taken from company data standards, pre-populated to the template or from

study-specific requirements stipulated in the study Protocol)

• Specify the visits and folder naming conventions where the forms will be located

Information from the Protocol (Inclusion Criteria, Exclusion Criteria, Study Endpoints and

Schedule of Assessments) is used to determine detailed structure of each study visit:

• Forms, fields and associated content required to be collected at each visit

• Standard and non-standard eCRF forms to be used or created for the study (consult

EDC global standards library to identify standard forms to be used as well as existing validation checks associated with standard forms)

• Data forms or fields for Targeted Source Document Verification according to the Critical

Variables List (located in the Trial Monitoring Plan in Word, or in an Excel template associated

with the TMP)

• Specification of additional logic checks that need to be programmed

This information is captured in an EDC specification template (Excel) that would be used to

configure and program EDC.

Non-CRF Data and Data Transfer Specifications

Non-CRF data can include laboratory or biomarker data, ECG data, data and diagnostic

narratives from imaging data or other assessments (e.g., EEG, PET or fMRI), Clinical Outcome

Assessments (COA), electronic Patient Reported Outcomes (e-PRO), subject diaries or subject

questionnaires captured on separate electronic devices, as well as specialty laboratory

information.

These data types are captured in service provider systems and transferred to the study

database or computing environment in batches or via regular (e.g., daily) data interfaces.

Programmers (clinical programmer or database developer) will collect details of these data

sources, file format specifications and data transfer specifications. Details of these requirements

and specifications are captured in tabular format (Word or Excel) including file format

specifications, validation checks and data transfer requirements.

Specify Standard Listings and Configure Study-Specific Listings

Data listings include company- or therapeutic area-standard listings which need to be confirmed

and configured for the study. Study-specific (custom) listings may also require specification and programming.

Site Information

Detailed site contact information containing all relevant roles and addresses (Investigators, sub-

Investigators, Study Coordinators, Pharmacy, associated IRB/REB) is entered and maintained

in a separate repository.

After the study is configured in EDC, sites are given access to the study-specific URL after

confirmation by a Study Manager that all necessary approvals and training have been completed.

III. Operational Systems Configuration in Key Systems Set up

Purpose

This section describes the current, typical workflow and tasks undertaken by team members

with responsibility for general study operational management, either at a global- or country level.

While site-level activities are also addressed (e.g., for CTMS and IxRS), the focus is on study-

level configuration of key systems used in the execution of multi-center (and often multi-country)

clinical studies.

Four major, data-driven areas of activity are managed in the course of conducting studies.

These include project management (resources, finance and timelines), investigator site

management and study monitoring, medication and supplies management, and management of

the primary study data.

The key systems considered are CTMS, IxRS (IRT), eTMF, Project Portfolio Management

(PPM) and EDC. EDC and other systems for primary data collection are considered in a

separate narrative (Data Collection). Other systems may be added to this scope, such as

Protocol Deviation Management Systems, Safety Document Distribution, Site Payment or

Budget Management, etc., but the current focus is on the five primary systems which enable clinical

operations, maintenance of study documentation and data collection.

The activities described in this narrative are led by roles which vary among study sponsor

companies. Many of the activities are initiated or coordinated by a global study manager role

with accountability for overall study execution, but different systems or tasks will be undertaken by specialist roles or separate system owners.

Timeframe

The activities described start at study executive approval (e.g., Study “Go” by a portfolio

committee). It is assumed that all key systems will be activated before the first patient starts the

study (e.g., first patient randomized or “FPI”). Some activities may build on information captured

in systems or documents during early planning activities; an effort is made to identify such

assumptions where relevant.

Project Portfolio Management (PPM)

The PPM system is configured manually (by the study operational lead or project manager). To support

budget discussion and study executive approval, studies may be entered to PPM system when

only sparse “placeholder” information is available, and the information modified as details are

determined.

Study details may be entered into the PPM system from other source systems (e.g., Master Data Management) and fed to PPM via an electronic interface.

Key study information is maintained in PPM (Protocol Number, Therapeutic Area, Specific

Indication, Sub-Indication, study phase, overall sample size, study duration, control arms,

blinding, initial assumed number of sites and country list) and key study milestone planned

dates (FPI, LPI, LPO, DBL, etc.).

The PPM system calculates required study resources at the study, country and site level, based on an algorithm driven by parameters derived from the Study Protocol and the assumed number of sites and countries.

Clinical Trial Management System (CTMS)

High level study parameters are initially entered into portfolio tracking or other systems (e.g.,

“Master Data Management” or Portfolio Management systems) and fed into CTMS via electronic

interface:

• Therapeutic Area, Specific Indication, sub-Indication, study phase, study control or

comparator, study blinding

• Overall sample size, study duration, assumed number of sites and assumed country list

• Key study milestone planned dates (FPI, LPI, LPO, DBL, etc.)

The unique Protocol Number is generated by this system.

In general, CTMS study configuration per se begins when protocol details are final or after

Protocol approval. This is triggered at the date of Final Protocol approval or when the Protocol

Synopsis is approved, for sponsors who recognize the Synopsis as a valid, final study

specification document.

Study details are specified in a template (Excel) with the following information from the Study

Protocol. The Protocol is referenced either in .pdf format or from within a document

management system.

• Final study sample size, treatment duration, number of treatment or control arms and

randomization allocation, study blinding

• Anticipated timing of study visits in each of the overall study stages (screening visits,

randomization visit, number of on-treatment visits, number and timing of follow-up visits,

any allowance for unscheduled study visits) and “Names” of study visits (company

standard or therapeutic area-specific naming)

• Visit “windows” (limits for deviations from protocol-specified visit schedule) – this could

be taken from company standards, pre-populated to the template or from study-specific

requirements stipulated in the study Protocol

CTMS templates may be populated specifically for CTMS or adapted from a more detailed

matrix created to specify EDC requirements (see Data Collection Narrative). CTMS study

specification template (.xls) is uploaded to CTMS or used to configure CTMS according to study

requirements.

Major study-level Planned milestone dates (FPI, LPI, etc.) and assumed list of participating

countries are also entered in CTMS, either via direct feed (from the Master Data Management

or Project Portfolio Management system) or via the CTMS Excel template.

Once the study has been configured, the following information is also assigned or associated to

the study in CTMS:

• Names and roles of study team members and the start and end dates of their study

assignment

• Countries planned to participate in the study

• List of Investigative sites planned to participate

Team member names and detailed site contact information containing all roles and addresses

(Investigators, sub-Investigators, Study Coordinators, Pharmacy, associated IRB/REB) are

entered and maintained separately.

Interactive Response Technology (IRT) or Interactive Voice- or Web-Response

Systems (IxRS)

Detailed information about study design, blinding and treatment allocation are recorded in an

IRT specification template (Word or Excel) using information from the Study Protocol. Sponsors

may work with more than one IRT vendor and different vendors will have different templates and

requirements to capture IRT specifications. The Protocol is referenced either in .pdf format or

from within a document management system.

The following detailed information is required:

• Protocol Number, primary study treatment(s), indication and sub-indication

• Study Treatments (Investigational Medicinal Product and non-IMP), dosage form(s) and

detailed information about treatment allocation

• Study overall sample size, treatment duration, number of treatment or control arms and

study blinding

• Detailed information regarding the number, anticipated timing and Protocol “names” of

study visits in each of the study stages (screening visits, randomization visit, number of

on-treatment visits, number and timing of follow-up visits, allowance for unscheduled

visits)

• Detailed information from the Study Protocol or Statistical Plan (or Protocol Appendix) on

the Randomization Plan for the study, including the randomization scheme (e.g. number

and name of treatments, strata, block size) and how randomization will be carried out

• Details from the Study Protocol on the Randomization Procedure, to define the detailed

requirements or steps to randomize subjects

• Detailed Inclusion or Exclusion criteria or other randomization requirements (e.g.,

specific laboratory test or other diagnostic requirements)

• Procedures for Interim Unblinding may be required for IxRS programming or

configuration

The Randomization Programming or Master Randomization List must also be provided to the IRT vendor or technical lead (as a separate Excel file). This information may originate in the Statistics or Biometrics department and is available in electronic format (Excel or csv).

A list of IMP and non-IMP kit numbers is also provided (as a separate Excel file). This information may originate in the Clinical Supplies or labeling department.

The Study Manager will provide details for required Validation or User Acceptance Testing.

After the IRT is configured, the initial list of Investigative sites planned to participate is assigned

or associated to the study.

Detailed site contact information containing all relevant roles and addresses (Investigators, sub-

Investigators, Study Coordinators, Pharmacy) is maintained in a separate repository which

accommodates contact information (drug shipment addresses, specified in format required for

drug distribution systems).

Electronic Trial Master File (eTMF)

Basic study design parameters and assumptions are provided to begin configuration of eTMF.

This includes Protocol Number, assumed country list and high-level design specifications (e.g.,

number of study sites, treatment arms, whether a Data Safety Monitoring Board or endpoint adjudication is required, etc.). This information is extracted from the Study Protocol and

captured in Excel format.

Configuration of eTMF is based on study-specific attributes and sponsor-specific eTMF

“Reference Model” which stipulates the anticipated folder structure at Study-, Country- and Site

level. Country-specific documentation requirements (for approval from regulatory, institutional or

other research oversight authorities) are also mapped into eTMF configuration.

Appendix C: Key Features for Study Builder

The following table provides a summary of the key functionality that the Study Builder platform

should provide. It is not intended to be a complete list of User Requirements or Functional

Requirements, but those will be derived from the features listed.

Table 2: Key Features

ID | Epic | Feature | Description

1 | Core System Functionality | Audit Trail | The system should record an audit trail on all transactions of who, when, and what action was taken (add, edit, delete), including the reason for the change.

2 | Version Control | Study Design Versioning | The platform (Study Builder and the Study Design Repository) must have versioning capabilities at both a granular item level and an aggregate level, such as a collection of design elements that constitutes a study definition. The system should provide the ability to version a Protocol after it has reached the status of APPROVED in its lifecycle; upon versioning the Protocol, a new copy of that protocol would start the status lifecycle and be assigned the next sequential version. The new version would initially be pre-populated with all associated data from the prior version. Any changes made to the protocol would only be reflected in that version, not in any prior versions.

3 | Security | SSO | The system should allow authentication through various Single Sign-On options.

4 | Security | Systems Access | The system should have role-based user access and permissions.

5 | Security | Functional Roles | Read, Author, Review, Approve and Delete functionality must be available based on functional role definitions.

6 | Data Access | Reporting/Analytics | The system should allow any reporting, data visualization or analytics technology to pull and utilize the data.

7 | Reference Libraries | Metadata Management | Study Builder must enable the ability to centrally define study design data elements.

8 | Reference Libraries | Metadata Management | Must be able to define and maintain relationships between elements within the Design Repository.

9 | Reference Libraries | Reference Libraries | Must be able to access study design elements and the relationships between elements stored in the Design Repository. Provide “Standard Library Content and Relationship Management” for the following key elements: 1. Therapeutic Area, 2. Indication, 3. Study Phase, 4. Study Type, 5. Study Objectives, 6. Endpoints, 7. Assessments, 8. Forms (data collection), 9. Countries. Each element should capture key attributes for: Therapeutic Area, Indication, Study Phase, Subject Population, Dose, Geography, Study Type, and Required (Flag).

10 | Reference Libraries | Import | The system should provide users the ability to import/integrate data into the defined element libraries.

11 | Study Build | Study Design | The system should provide end users the ability to launch/create a new “Protocol” entity by entering a new unique study number and selecting other key attributes from standard library values for: Therapeutic Area, Indication, Study Phase, and Study Type.

12 | Study Build | Study Design | Study Builder must be able to visualize study design elements and their relationships on screen to facilitate the study design definition process.

13 | Study Build | Study Design | Selection/entry of study design elements within a design module will identify the most relevant design elements that Study Builder will present within and across design modules, to facilitate the definition and use/reuse of study design elements.

14 | Study Build | Study Design | Must have the ability to manage the study design definitions available within the Design Repository. For example, the user should be able to designate an element defined at the study level as a standard-level element available for use/reuse. [Enable the ability to add new elements defined for a study to the standard.]

15 | Study Build | Study Design | The study design definition process is facilitated by presenting a series of design modules that collectively constitute the design elements to be defined within a study: 1. Study Level Content, 2. Protocol Level Content, 3. Randomization Schema, 4. Fixed Study Design Elements, 5. Visit Matrix, 6. Data Collection.

16 | Study Build | Study Design | Study Builder must enable the selection of study design elements from the Design Repository and/or manual entry via the Study Builder user interface. The system should provide users the ability to capture key attributes on the protocol record for: 1. Study Long Name, 2. Short Name, 3. Acronym, 4. Study Blinding, 5. Treatment Administration, 6. Comparator Arms, 7. other defined fields.

17 | Study Build | Workflow | A study definition is initiated by selecting/entering Study Level Content design elements. Once available, all other design modules are available for completion by the appropriate user role.

18 | Study Build | Fixed Elements | Concept of Fixed (Common) Data Elements: Specified design elements are common to all studies and should be included in each study design. For example, fixed protocol-level elements are Objectives and Endpoints.

19 | Study Build | User Guidance | The system should provide the user with guidance text on each key element that will help them make the appropriate selection for their study design.

20 | Study Build | Workflows | The system should provide the user the ability to review the pre-loaded entities on the protocol: Study Objectives, Endpoints, Assessments, and Forms. Upon review, the user should have the ability to approve or reject those entities that are not flagged as required from being applied to the protocol record. Upon rejection of an auto-populated entity, the system should record a reason why the end user does not think it applies to their Protocol design.

21 | Study Build | Workflows | The system should provide status lifecycle workflow management during the creation of a protocol record that tracks and manages the record from DRAFT through to APPROVED.

22 | Study Build | Workflows | The system should provide the user the ability, post approval of auto-populated entities, to edit those now-applied values on their protocol record. These changes should only carry forward on the applied protocol and not be reflected on the standard library (parent) records they were initially auto-populated from. These changes should, once created, go through a review and approval workflow with defined roles.

23 | Study Build | Intelligent Design | The system should auto-generate the initial protocol design entities (Study Objectives, Endpoints, Assessments, Forms) based on the selected key attributes on the protocol record (Therapeutic Area, Indication, Study Phase, and Study Type). These selections are based on a combination of: 1. Are these elements related to the selected protocol entities? 2. Have previous protocols applied these elements, and does the new protocol have the same core attributes? (See the illustrative sketch following this table.)

24 | Study Build | Visit Matrix | Study Builder must enable the selection/definition of the desired assessments and measures to be collected within a study design.

25 | Study Build | Visit Matrix | Study Builder must be able to derive the visit matrix based on previously defined study design elements and study design templates.

26 | Study Build | Schedule of Assessments | The system should generate a schedule of assessments/visit schedule based on the applied protocol's initial library elements of assessments and the associated schedule of those assessments. It should allow the end user to manually create additional visits and associated assessments. For those manual entries, the system should track the lifecycle of the approval status of those new attributes.

27 | Study Build | Schedule of Assessments | The assessments and measures defined as part of the study design elements will be used to drive downstream Electronic Data Capture (EDC) configuration.

28 | Study Build | Visual Preview | The system should provide the ability at any point in the Protocol creation or post-Approval workflow stage to preview a rendered document version of that protocol based on a predefined template.

29 | Study Design | Import | Study Builder must be able to import study design elements and related metadata from external data sources such as CSV, ODM, RDF, etc.

30 | Study Design | Export | Must be able to export study design attributes and their relationships to other elements from Study Builder in various formats such as CSV, ODM, RDF, etc.

31 | Study Design | Export | Study design elements may be shared (imported/exported) bi-directionally between the Design Repository and Sponsor source applications.

32 | Study Design | Export | The system should provide the ability to export the digital rendered document version of the Protocol based on the predefined template in a read-only PDF format.

33 | Reference Libraries | Metadata | Study Builder and the Design Repository must leverage the data definitions as defined in the TransCelerate Common Data Model, which leverages clinical data standards defined by Standards Setting Organizations (e.g., CDISC, HL7, etc.).
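
As referenced in Feature 23, the sketch below illustrates the second selection criterion: reusing design entities from prior protocols that share the new protocol's core attributes. The data structures, attribute names and example values are assumptions for illustration, not a defined matching algorithm.

```python
# Illustrative sketch of Feature 23's second criterion: reuse design entities from prior
# protocols that share the new protocol's core attributes. Names and structures are assumed.
CORE_ATTRIBUTES = ("therapeutic_area", "indication", "study_phase", "study_type")

prior_protocols = [
    {"therapeutic_area": "Respiratory", "indication": "Asthma", "study_phase": "3",
     "study_type": "Interventional", "objectives": {"OBJ-FEV1-CHANGE"}, "forms": {"SPIROMETRY"}},
    {"therapeutic_area": "Oncology", "indication": "NSCLC", "study_phase": "2",
     "study_type": "Interventional", "objectives": {"OBJ-ORR"}, "forms": {"RECIST"}},
]

def suggested_entities(new_protocol, prior_protocols):
    """Collect objectives and forms applied by prior protocols with matching core attributes."""
    suggestions = {"objectives": set(), "forms": set()}
    for prior in prior_protocols:
        if all(prior[a] == new_protocol[a] for a in CORE_ATTRIBUTES):
            suggestions["objectives"] |= prior["objectives"]
            suggestions["forms"] |= prior["forms"]
    return suggestions

new_protocol = {"therapeutic_area": "Respiratory", "indication": "Asthma",
                "study_phase": "3", "study_type": "Interventional"}
print(suggested_entities(new_protocol, prior_protocols))
```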

Appendix D: Maturity Model

Maturity comes with time and experience. This model explains how an organization could advance its levels of technical and functional maturity through different levels of implementation of comprehensive workflow automation.

Figure 11: Maturity Model Defined

The model also provides additional context to the current state in the industry. As of 2019, we

estimate that the majority of companies are at the lowest levels of maturity (level 1 or 2) with

several sponsors reporting extensive experimentation with various aspects of automation.

Level 0 Maturity: Lack of Standards. Standards are not present in the standard operational processes or in the systems that support those processes. The clinical trial design-to-operations processes are siloed.

Level 1 Maturity: Adoption of Standards. Adoption of Data Standards lays the foundation for

point to point integration between key systems. Early stages show limited information re-use,

disconnected security models and repetitive, manual data entry. Unclear ownership of standards,

data governance and limited process harmonization are often observed.

Level 2 Maturity: Standards Governance and Systems Integration. Growth in integration of

internal and external systems, limited process automation, with some growth in operational and

clinical systems integration. Talent competencies aligned with advancing process and technology.

In addition, foundations for governance of data standards and security are put in place, allowing for additional system exchanges.

Level 3 Maturity: Efficient Integration and Process Automation. Continued growth in the

capability for exchange across systems in the growing ecosystem and the ability to provide

consistent real-time data collection, data ingestion, processing and transformation, storage and

access. Services enable a robust interactive foundation for analytics and machine learning. This

growth and efficient integration and interaction of data across systems within the ecosystem are

the foundational elements needed to introduce machine learning and robotic process automation.

Level 4 Maturity: End to End, bi-Directional Automated Ecosystem. Fully mature ecosystem

would include end-to-end, bi-directional automation with the ability to “plug and play” new

systems. Maturity of Artificial Intelligence has moved into deep learning which provides the ability

to apply learnings from completed studies to subsequent new studies conducted on the platform.

The fully mature ecosystem continues to grow, allowing the evolving landscape of technologies and concepts to be seamlessly assimilated, demonstrating an established strength and foundation.

Table 3: Acronym List

Acronym Meaning

AE Adverse Event

API Application Programming Interface

CDISC Clinical Data Interchange Standards Consortium

CDM Common Data Model

CeSHarP Clinical electronic Structured Harmonized Protocol

COA Clinical Outcome Assessments

CPT Common Protocol Template

CRF Case Report Form

CSR Clinical Study Report

CSV Comma-Separated Values (.csv file format)

CTMS Clinical Trial Management System

CTR-XML Clinical Trial Registry – Extensible Markup Language

DBL Database Lock

DDF Digital Data Flow

eCRF Electronic Case Report Form

EDC Electronic Data Capture

EEG Electroencephalogram

EHR Electronic Health Record

ePRO Electronic Patient Reported Outcomes

eTMF Electronic Trial Master File

FEV1 Forced Expiratory Volume in 1 second

FHIR Fast Healthcare Interoperability Resources

FPI First Patient Included/First Patient Randomized

FVC Forced Vital Capacity

HL7 Health Level 7

ICD International Classification of Diseases

ICF Informed Consent Form

ICH International Council for Harmonization

ID Identifier

IDMP Identification of Medicinal Products

IMP Investigational Medicinal Product

IR Investigator Registry

IRT Interactive Response Technology

IxRS Interactive Voice- or Web-Response System

JSON JavaScript Object Notation

LOINC Logical Observation Identifiers Names and Codes

LPI Last Patient Included/Last Patient Randomized

LPO Last Patient Out (Last Patient Completed)

MedDRA Medical Dictionary for Regulatory Activities

MRI Magnetic Resonance Imaging

OAuth Open Authorization

ODM Operational Data Model

ODM-XML Operational Data Model – Extensible Markup Language

PET Positron Emission Tomography

PPM Project Portfolio Management

PSoC Placebo Standard of Care

PT Patient Technology

REST Representational State Transfer

R&D Research and Development

RDF Resource Description Framework

SAP Statistical Analysis Plan

SDR Study Design Repository

SDTM Study Data Tabulation Model

SIP Shared Investigator Platform

SNOMED Systematized Nomenclature of Medicine

SoA Schedule of Assessments

SSO Single Sign-On (also used in this document for Standards Setting Organization)

SSU Study Start-Up

TA Therapeutic Area

TMP Trial Monitoring Plan