S17: Systems Demonstrations - Interoperability
As clinical and research enterprises move toward computable measurements, it is imperative for a modern health system to support a diverse set of patient-generated health data (PGHD). Despite the rise in use of digitized patient-reported outcomes (PROs), their systematic and sustained use in routine clinical workflows poses technological and logistical challenges. Substitutable modular applications that adhere to open standards such as SMART on FHIR enable reusability across institutions and implementations. Here, we demonstrate “SMART Markers,” a standards-based framework that application developers can use to capture a variety of PGHD, including PROs and sensor data, integrated with the health system. The presenters will address the challenges of standardizing PGHD and the benefits of the SMART Markers framework for rapid development of custom experiences for both patients and practitioners.
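As a rough illustration of the kind of standards-based payload such a framework exchanges, the sketch below packages a hypothetical PRO score as a FHIR R4 Observation resource. The instrument name, category coding, and field choices are illustrative assumptions, not SMART Markers’ actual API.

```python
import json

def pro_score_to_observation(patient_id, score, instrument="PHQ-9"):
    """Package a patient-reported outcome score as a FHIR R4 Observation.

    Minimal sketch: the instrument and 'survey' category here are
    illustrative; a real app would also carry proper codings and provenance.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "survey"}]}],
        "code": {"text": instrument},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueInteger": score,
    }

obs = pro_score_to_observation("123", 7)
print(json.dumps(obs, indent=2))
```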
Given increased challenges in intensive care, there is a critical need for real-time systems for remote monitoring and early warning of patient deterioration. We built and evaluated the Intensive Care Warning Index (I-WIN) at a tertiary-care children’s hospital. I-WIN comprises three key components: a real-time data-acquisition component, a distributed AI platform, and a graphical user interface. I-WIN currently provides real-time streaming vital-sign waveform monitoring and early prediction of deterioration in intensive care.
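I-WIN’s actual models are not described here, but the general shape of real-time early warning over a streamed vital sign can be sketched with a toy sliding-window deviation check; the window size, threshold, and heart-rate values below are illustrative assumptions only.

```python
from collections import deque
import statistics

class DeteriorationAlert:
    """Toy sliding-window early-warning check over one vital-sign stream.

    Illustrative sketch only, not I-WIN's model: it flags samples that
    deviate sharply from recent history.
    """

    def __init__(self, window=10, z_threshold=3.0):
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value):
        """Return True if the new sample deviates sharply from recent values."""
        alert = False
        if len(self.window) >= 5:
            mean = statistics.mean(self.window)
            spread = statistics.pstdev(self.window) or 1.0  # avoid divide-by-zero
            alert = abs(value - mean) / spread > self.z_threshold
        self.window.append(value)
        return alert

monitor = DeteriorationAlert()
for hr in [120] * 10:       # stable heart rate: no alerts
    monitor.update(hr)
print(monitor.update(200))  # sudden spike -> True
```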
We present the system design and prototype development of SEADRAGON (Surveillance Estimates Attributed to Drug-Related Adverse events – Generated ONline), a web application that calculates national estimates of U.S. emergency department visits for drug-related adverse events based on user-defined queries of the NEISS-CADES database. Development of the prototype was initiated as part of a Centers for Disease Control and Prevention initiative to make data available for public use.
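Conceptually, a NEISS-CADES-style national estimate is a weighted sum over the sampled cases that match a query; the sketch below shows that calculation with invented field names and weights (the actual system must also account for sampling variability).

```python
def national_estimate(cases):
    """Sum each sampled ED visit's statistical weight to get a national estimate.

    Field names and weights are illustrative, not the NEISS-CADES schema.
    """
    return sum(c["weight"] for c in cases)

sample = [
    {"drug": "warfarin", "weight": 120.5},
    {"drug": "warfarin", "weight": 98.2},
    {"drug": "insulin", "weight": 80.0},
]
# A user-defined query filters the sample; the estimate is the weight total.
warfarin_cases = [c for c in sample if c["drug"] == "warfarin"]
print(round(national_estimate(warfarin_cases), 1))  # 218.7
```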
S39: Systems Demonstrations - Population Health
In the U.S., mortality rates are rising, reversing long-term longevity trends. MortalityMinder enables healthcare researchers, providers, payers, and policy makers to gain actionable insights into where and why midlife mortality rates are rising in the U.S. MortalityMinder is a web-based visualization tool that enables interactive exploration of social, economic, and geographic factors associated with premature mortality among adults aged 25-64. The app reveals striking insights into "Deaths of Despair" and other causes of death. As an open-source project, we seek a community effort to improve the tool to help stakeholders at the national, state, county, and community levels identify and address unmet healthcare needs. We are adapting MortalityMinder to capture disparities in COVID-19 risk factors, mitigation, and mortality.
Community-wide health-related social needs (HRSN) screening programs are resource intensive and difficult to scale. The Centers for Medicare and Medicaid Services funded MyHealth – a regional health information exchange (HIE) in Oklahoma – and the Route 66 AHC Consortium – a consortium of Oklahoma stakeholders – to create a community-wide screen-and-treat platform. We will demonstrate MyHealth’s solution, which uses an HIE-linked consumer smartphone application. At clinical intake, the provider’s electronic health record sends an admission-discharge-transfer (ADT) message to MyHealth, and MyHealth sends an HRSN screening questionnaire to the patient’s smartphone. Patients reporting at least one need receive links to resources customized by zip code. MyHealth has enrolled 90 health systems and delivered 447,027 questionnaires to patients. At least one social need was present in 13,935 responses.
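The trigger step above can be sketched as a check on the HL7 v2 message type; the trigger events and sample messages below are illustrative assumptions (a production HIE handles many more event types and routing rules).

```python
def should_trigger_screening(hl7_message):
    """Return True for HL7 v2 trigger events that should launch an HRSN screen.

    Minimal sketch: checks MSH-9 for an admission (A01) or registration (A04)
    event. The event list and messages here are illustrative only.
    """
    msh = hl7_message.split("\r")[0]  # MSH is the first segment
    fields = msh.split("|")
    message_type = fields[8] if len(fields) > 8 else ""
    return message_type in ("ADT^A01", "ADT^A04")

adt = "MSH|^~\\&|EHR|CLINIC|HIE|MYHEALTH|202001011200||ADT^A04|MSG0001|P|2.5.1"
lab = "MSH|^~\\&|EHR|CLINIC|HIE|MYHEALTH|202001011201||ORU^R01|MSG0002|P|2.5.1"
print(should_trigger_screening(adt), should_trigger_screening(lab))  # True False
```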
Ideally, clinical decision support (CDS), population health management (PHM) interventions, and electronic clinical quality measures (eCQMs) are based on scientific evidence about how a treatment or process affects a set of patients. A common element linking evidence to relevant CDS/PHM interventions and eCQMs is the specification of the patients being targeted. The more closely the inclusion/exclusion criteria of the underlying research studies match those of the measure and of the CDS and PHM interventions, the more likely the interventions are to have scientific merit and to promote the goals of the measure. This system demonstration will highlight how a common approach to defining patient cohorts is a key strategy for creating a learning health system that integrates research, care, and quality efforts.
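The idea of a single shared cohort definition can be sketched as one predicate reused by the measure, the CDS trigger, and the PHM registry; the criteria below are invented for illustration (loosely colorectal-screening-like) and not drawn from any specific measure.

```python
def in_cohort(patient, min_age=50, max_age=75, excluded_dx=("total colectomy",)):
    """One shared inclusion/exclusion predicate: if the eCQM denominator, the
    CDS trigger, and the PHM registry all call this, they target the same
    patients. Criteria are illustrative, not from a real measure.
    """
    if not (min_age <= patient["age"] <= max_age):
        return False  # inclusion criterion: age range
    if any(dx in patient["history"] for dx in excluded_dx):
        return False  # exclusion criterion: disqualifying history
    return True

eligible = {"age": 60, "history": ["hypertension"]}
excluded = {"age": 60, "history": ["total colectomy"]}
print(in_cohort(eligible), in_cohort(excluded))  # True False
```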
S67: Systems Demonstrations - Genetic Data Processing and Support
GNomEx, a genomic laboratory information management system (LIMS), is a powerful online tool for ordering, tracking, billing, and reporting across a variety of genomic sequencing systems, making their use readily accessible to parties internal and external to our organization. In addition, GNomEx supports data sharing and produces reports and figures for overall system oversight. Since its initial release in 2002, the system has connected Huntsman Cancer Institute (HCI) with collaborators around the nation and beyond, simplifying and enabling collaboration while helping to minimize both service cost and machine downtime.
We have implemented a Population Health Management platform to identify and manage patients who meet guideline-based criteria for genetic testing of breast and colorectal cancer risk based on their family history in the Electronic Health Record (EHR). The platform uses the FHIR and CDS Hooks standards to integrate with different EHRs. In preparation for a randomized controlled trial, the platform has been deployed and piloted in clinical settings at the University of Utah and New York University. The trial will compare two patient outreach approaches for genetic testing: standard outreach (i.e., messages sent to the patient portal and phone calls) versus self-directed using chatbot technology. In this demonstration, we will describe the platform, the clinical workflow, integration of the chatbot with the EHR, lessons learned, and future plans.
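To give a flavor of the CDS Hooks integration, the sketch below builds the kind of "card" a service might return for a patient whose EHR family history meets testing criteria. The card content and source label are illustrative assumptions, not the deployed platform’s actual output.

```python
def family_history_card(patient_name):
    """Build a minimal CDS Hooks-style 'card' suggesting genetic testing.

    Illustrative sketch: a real service would evaluate FHIR FamilyMemberHistory
    resources against guideline criteria before returning cards.
    """
    return {
        "cards": [{
            "summary": f"{patient_name} meets family-history criteria for genetic testing",
            "indicator": "info",
            "source": {"label": "Family History Screening Service"},
            "suggestions": [{"label": "Order hereditary cancer gene panel"}],
        }]
    }

response = family_history_card("Example Patient")
print(response["cards"][0]["summary"])
```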
Karyotyping is commonly used to diagnose many genetic diseases. Karyotypes are written in the International System for Human Cytogenomic Nomenclature (ISCN). The lack of software to parse karyotypes has prevented the full potential of these data from being realized.
We developed CytoGenetic Pattern Sleuth (CytoGPS) to solve this problem. CytoGPS examines an input karyotype and can suggest a revised karyotype when it encounters an incorrect but correctable one. To enable cytogeneticists to perform downstream analysis, CytoGPS extracts important biological information embedded in karyotypes.
We developed a website (http://www.cytogps.org) that makes CytoGPS freely available. Users can examine single karyotypes or upload a file for batch analysis. By converting text-based karyotypes into a computable format, CytoGPS makes it possible to combine them with complementary clinical and genomic data for discovery purposes.
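To give a flavor of the parsing problem, the sketch below merely tokenizes a simple ISCN karyotype into its count, sex-chromosome, and abnormality fields; CytoGPS itself goes much further, mapping each abnormality into a computable representation.

```python
def tokenize_karyotype(iscn):
    """Split an ISCN karyotype into chromosome count, sex chromosomes, and
    abnormality tokens. Only a tokenizer sketch, not CytoGPS's parser:
    real ISCN strings include ranges, clones, and uncertainty notation.
    """
    parts = iscn.replace(" ", "").split(",")
    return {
        "count": int(parts[0]),
        "sex": parts[1],
        "abnormalities": parts[2:],  # e.g. translocations like t(9;22)
    }

k = tokenize_karyotype("46,XY,t(9;22)(q34;q11)")
print(k)
```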
S86: Systems Demonstrations - Analytic Tools
The Accrual to Clinical Trials (ACT) network is a nationwide federation of Clinical and Translational Science Award (CTSA) institutions that provides researchers access to patient sets with regional diversity, helping with clinical trial cohort discovery and study feasibility. The network consists of local installations of Informatics for Integrating Biology and the Bedside (i2b2) EHR data repositories that are linked by the Shared Health Research Information Network (SHRINE) platform. To date, the network connects 41 CTSA sites and contains data on more than 125 million patients. The existing SHRINE user interface (UI) was derived from and closely resembles i2b2 code that is over 12 years old. To serve the needs of a larger researcher audience unfamiliar with i2b2 or complex Boolean queries, the SHRINE/ACT team at Harvard Medical School is developing a more intuitive, user-friendly UI with modern usability standards of design, look-and-feel, and accessibility.
Informatics for Integrating Biology and the Bedside (i2b2) is a well-established open-source clinical data warehousing and analytics platform used at over 200 locations worldwide. Twelve years since its first release, i2b2 continues to evolve. Here we demonstrate the latest additions to the i2b2 core platform, focusing on two major improvements. First, we will demonstrate a completely redeveloped term search interface: the exponential growth of biomedical ontologies necessitated new tools to navigate and search the repository’s very large terminologies. Second, clinical research often collects patient-reported outcomes, which must be integrated with clinical data sources to enable analytics. We will demonstrate i2b2’s new integration of the Research Electronic Data Capture (REDCap) software, which creates a live link between a REDCap survey and an i2b2 database.
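i2b2’s REDCap link is managed natively by the platform, but as a rough sketch of the kind of export such a link builds on, the standard REDCap API accepts a POST of token/content/format parameters. The URL and token below are placeholders, and the form name is invented.

```python
def redcap_export_payload(token, forms=None):
    """Build the POST body for a standard REDCap record export.

    'content', 'format', and 'type' are standard REDCap export API
    parameters; the token and form name used below are placeholders.
    """
    payload = {"token": token, "content": "record", "format": "json", "type": "flat"}
    if forms:
        payload["forms"] = ",".join(forms)  # limit export to specific instruments
    return payload

payload = redcap_export_payload("PLACEHOLDER_TOKEN", forms=["phq9_survey"])
# A real sync would then POST this to the project's API endpoint, e.g.:
#   requests.post("https://redcap.example.org/api/", data=payload)
print(payload["content"], payload["forms"])
```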
The All of Us (AoU) Research Program has collected health data on over 270,000 participants toward a target of at least one million, with the goal of accelerating discovery in precision medicine. At this scale, downloading such a dataset is expensive and slow, and excludes many researchers. To address these barriers, the AoU Data and Research Center designed the Researcher Workbench platform to “bring researchers to the data” in a cloud-based analytical environment that meets robust security and regulatory requirements and includes layers of functionality to address researcher needs at various stages of research design and execution. This systems demonstration will provide an overview of the initial Workbench analysis suite, including custom tools for cohort and data exploration and covariate selection, and Jupyter notebooks for analysis.
S107: Systems Demonstrations - Natural Language Processing
Natural language processing (NLP) applications generally involve multiple component systems for tasks such as tokenization, de-identification, and named entity recognition. Many component systems have been developed for these tasks and are publicly available. However, it is frequently not easy for end users to adapt and integrate these existing components to process their own data: the components are often developed by different groups in different system environments, and it is therefore not trivial for end users to execute them in their own environment. Here, we present BENTO, a web-based workflow management platform built on top of CodaLab. BENTO mitigates these challenges to help users effectively build clinical NLP pipeline applications. BENTO ships with a number of state-of-the-art NLP systems that are ready to use, composes customized pipelines through its graphical user interface, and allows users to integrate their own systems (e.g., pre-trained NLP models) into the platform.
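Conceptually, composing such a pipeline means chaining independently developed components over a shared document object. The toy steps below (whitespace tokenization and a naive capitalization-based mask) are illustrative stand-ins, not BENTO’s bundled systems.

```python
def tokenize(doc):
    """Whitespace tokenization (toy stand-in for a real tokenizer)."""
    doc["tokens"] = doc["text"].split()
    return doc

def deidentify(doc):
    """Mask capitalized tokens as possible names (illustrative only)."""
    doc["tokens"] = ["[MASK]" if t.istitle() else t for t in doc["tokens"]]
    return doc

def run_pipeline(text, steps):
    """Chain independently developed components over a shared document
    object, the conceptual shape of a GUI-composed NLP pipeline."""
    doc = {"text": text}
    for step in steps:
        doc = step(doc)
    return doc

out = run_pipeline("Patient John denies chest pain", [tokenize, deidentify])
print(out["tokens"])  # ['[MASK]', '[MASK]', 'denies', 'chest', 'pain']
```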
The last few decades have seen an explosion of biomedical text in scientific literature, clinical notes, and other sources. Extracting useful information from biomedical text is a vital task for natural language processing (NLP). There is a great need for an easy-to-use tool that can effectively annotate text and build state-of-the-art deep learning models. Here we introduce CLAMP-cloud, a newly developed online NLP tool for biomedical text annotation and deep learning-based information extraction (IE). CLAMP-cloud is an upgrade of the CLAMP NLP toolkit. It not only provides online learning strategies and automatic quality control to annotate text efficiently with high accuracy, but also can train and deploy deep learning-based IE models effortlessly in an integrated pipeline.
NLM-Scrubber is the leading non-commercial software application for clinical text de-identification. Unlike most other applications, especially experimental ones, NLM-Scrubber does not require training data to produce reliable de-identification. Many academic institutions use NLM-Scrubber to process protected health information in their repositories, successfully producing de-identified clinical data for scientific use while protecting patient privacy. Following the HIPAA Privacy Rule’s Safe Harbor method, NLM-Scrubber can produce fully de-identified clinical text as well as user-tailored limited data sets that preserve certain identifiers in the output based on the recognized needs of research. NLM-Scrubber is free and simple to use, and it also provides a rich set of features for experienced users. It is a workhorse, capable of de-identifying millions of records per day.
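As a toy illustration of two of Safe Harbor’s eighteen identifier classes, the sketch below masks simple date and phone-number patterns. This regex sketch is not how NLM-Scrubber works: its methods cover the full identifier set and do not rely on simple patterns alone.

```python
import re

def scrub_dates_and_phones(text):
    """Mask date and phone-number patterns, two of the HIPAA Safe Harbor
    identifier classes. Illustrative only; real de-identification must
    handle many more formats and identifier types.
    """
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    text = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]", text)
    return text

note = "Seen on 3/14/2019; call 555-867-5309 to follow up."
print(scrub_dates_and_phones(note))  # Seen on [DATE]; call [PHONE] to follow up.
```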