• Nov 12 - 16, Chicago

    AMIA 2016 Annual Symposium

    The AMIA 2016 Annual Symposium marks the 40th anniversary of the Symposium. We want this year to be a homecoming that energizes our new and longtime members, welcomes attendees who haven’t visited the meeting in a few years, and attracts first-time attendees to the incredible informatics community AMIA represents.

AMIA 2016 Tutorials & Working Group Pre-symposia

Individual session registration is required for each tutorial or working group pre-symposium you wish to attend. Please select only one session per time slot. Seats are limited.

Tutorials: Half-day and full-day tutorials are dedicated to in-depth treatment of special topics and interests of relevance to informatics. Half-day tutorials include three hours of instruction; full-day tutorials include six hours of instruction. The Scientific Program Committee (SPC) selects the slate of presentations to offer a balance between tutorials that address essential core informatics theory and principles and those that address practical applications, current issues, and emerging trends and developments in informatics. Tutorials range from the general introductory level through specialized advanced treatments.

Working Group Pre-symposia: Pre-symposia promote formal discussion among constituents who share common interests and raise the profile of AMIA Working Groups at the Annual Symposium. Presentations bring together individuals with similar or different roles in developing, implementing, or using informatics in practice, management, education, research, or policy.

CME/CE/MOC eligibility:

  • Half-day sessions = 3 CME/CE credits
  • Full-day sessions = 6 CME/CE credits

For sessions offering MOC-II credit, check the Self-assessment Booklet of Multiple Choice Questions.

Saturday, Nov. 12, 8:30 a.m. – 12:00 p.m. (Half-day sessions)

T01: Patient Engagement and Consumer-facing Health Information Technologies

J. Wald, RTI International; D. Sands, Harvard Medical School

Patient engagement has been called the “blockbuster drug of the century.” While patient engagement concepts vary, health care professionals and patients intrinsically understand its power. Over the last decade, patients and caregivers, along with their clinicians, have employed connected technologies to enhance self-care and manage their conditions in order to better engage in their health and health care. Innovative use of technologies such as patient portals, mobile applications, and mHealth services is enhancing communication, access to clinical records, use of medical reference information, participation in online communities, self-diagnosis, and tracking/sharing of biometric and other health information. However, challenges including technology silos, device and data incompatibility, information gaps, workflow challenges, and policy or cultural conflicts have limited the impact of these technologies for patient engagement.

This popular tutorial will offer clinicians, system administrators, IT developers, policymakers, and patients (we are all patients, eventually!) examples of how these tools are used to enhance patient engagement. Instructors will present material from research and practical perspectives, with a particular focus on identifying and addressing the challenges of patient engagement and the use of patient portals, patient-generated health data, and other technologies to promote patient-provider partnerships.

Drawing from five decades of experience in the patient engagement space, Dr. Wald (RTI International; formerly Partners HealthCare and Cerner Corporation) and Dr. Sands (Beth Israel Deaconess Medical Center and the Society for Participatory Medicine) have substantial experience leading and researching innovations in consumer health. They will examine a broad set of topics, including: implementation strategies; clinician adoption; patient adoption; opportunities and limitations of patient engagement; clinician, practice, and patient workflow; and patient-generated health data.

In summary, this tutorial will provide an experience-based, practical introduction to consumer-facing health technologies and patient engagement, with particular attention to the clinical challenges of engaging patients through health IT.


T02: Big Data Technologies for Biomedical Knowledge Discovery

R. Madduri, University of Chicago; N. Ashish, Fred Hutch Cancer Center; K. Chard, University of Chicago; I. Foster, University of Chicago

This tutorial will present the “Big Data” biomedical discovery technologies, end-to-end solutions, and applications developed at the Big Data for Discovery Science (BDDS) Center of Excellence for Big Data Computing in Biomedical Research. The BDDS center is uniquely focused on handling big data in biomedical research and introduces solutions to key biomedical informatics challenges such as organizing, storing, processing, distributing, and sharing big data across collaborative networks. All BDDS developments aim to let basic science, biological, and engineering researchers use vast data collections and remote computing and storage systems to explore, interact with, and understand what the data mean, and to derive knowledge from them.

This tutorial will describe and demonstrate the technologies that we are developing for addressing the complexity, scalability of analysis, and ease of interaction with big data and associated analytic methods. Participants will learn how BDDS researchers apply these tools to process genomic, imaging, and other data from tens of thousands of patients, and will gain the knowledge required to take these tools back to their institutions and apply them to their own big data problems.


T03: Analysis of Human Interactive Behavior for Improving Health IT Usability and Minimizing Patient Safety Risks

T. Kannampallil, University of Illinois; K. Zheng, University of Michigan; V. Patel, The New York Academy of Medicine

Most clinical environments resemble a paradigmatic complex system, with dynamic and interactive collaborative work, non-linear and interdependent activities, and uncertainty. The addition of new organizational and systemic interventions, such as health information technology (IT), can cause considerable cascading effects on clinical processes and workflow and, consequently, on throughput and efficiency. A 2011 IOM report [1] called for a socio-technical approach to designing and incorporating health IT in clinical settings. One of the critical aspects of a socio-technical approach is to understand the progression and evolution of human interactions within a socio-technical context. In other words, a better understanding of human interactions in a clinical setting with technology, peers, and other artifacts is necessary for a successful and effective socio-technical approach.

In this tutorial, we will discuss a set of convergent methodologies for analyzing human interactive behavior both with technology and with other humans or artifacts. These methodologies can help capture underlying patterns of human interactive behavior and provide a mechanism to develop integrative, longitudinal metrics for clinical activities over sustained interactive episodes that evolve over time (e.g., metrics related to performance or errors). Such analysis of interactive behavior can also contribute significantly to patient safety outcomes through the design of safe and efficient health IT.

In this tutorial, we will (a) identify challenges to capturing and analyzing human interaction in complex clinical contexts; (b) discuss new approaches for capturing and analyzing sequences of human interaction in clinical settings using sequential analysis and network-theoretic, time-series-based, and probabilistic methods; (c) utilize one or more of these techniques to demonstrate their effectiveness as a viable mechanism for developing insights into clinical work activities through hands-on sessions; (d) provide participants hands-on experience in using data collection and data analysis tools; and (e) discuss the implications of these techniques for the design of health IT and patient safety initiatives.


T04: An Introduction to Natural Language Processing Methods in Clinical Research

O. Patterson, P. Alba, S. DuVall, VA Salt Lake City Health Care System/University of Utah

As the use of natural language processing (NLP) methods in preparing data for research continues to increase, researchers should understand the benefits and limitations of these tools. While NLP is not a “solved” science, there are many tasks that NLP can do reliably. Extracting concepts (symptoms, diseases, medications) and values (lab values, vital signs) that are stored in the text is one example. More complex tasks, such as determining what caused an event of interest or why a patient discontinued a medication, can also be addressed using the right tools. This tutorial will provide attendees with a general overview of NLP tools and methods used in health research and patient care. Participants will be introduced to NLP, the types of problems that can be addressed with NLP, and how to effectively plan and execute an NLP task using patient medical records. Synthetic clinical notes will be provided along with open-source tools that will allow participants to implement a working NLP system. The eHOST annotation application, Unstructured Information Management Architecture Asynchronous Scaleout (UIMA AS), and the Leo NLP libraries, tools developed and used in the Department of Veterans Affairs (VA) and built on existing community standards, will be introduced and used to illustrate the complete life cycle of an NLP project, from design to human annotation / chart review to NLP system creation to evaluation. The tutorial will be presented by three instructors involved in the design and development of these NLP tools who have completed more than 100 NLP tasks in the VA and other health care institutions. Attendees will experience the process of completing an NLP task and leave the tutorial with concrete examples of how NLP can be used at their institutions to benefit research studies or patient care.
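
As a rough illustration of the kind of concept and value extraction described above, the sketch below scans a synthetic sentence with regular expressions. It is a deliberately simplified, hypothetical example: it is not eHOST, UIMA AS, or Leo, and real systems rely on curated lexicons plus context handling (e.g., negation detection for phrases such as “denies chest pain”).

    # Toy concept/value extraction from a synthetic clinical sentence.
    # Patterns are hypothetical; note that "denies chest pain" is negated,
    # which simple patterns like these do not handle.
    import re

    NOTE = "Patient denies chest pain. Hemoglobin A1c 7.2% today; metformin continued."

    CONCEPTS = {
        "medication": r"\bmetformin\b",
        "symptom": r"\bchest pain\b",
    }
    LAB_PATTERN = re.compile(
        r"(?P<test>hemoglobin a1c)\s+(?P<value>\d+(?:\.\d+)?)\s*%", re.IGNORECASE
    )

    def extract(note):
        """Return crude concept and lab-value mentions found in a note."""
        hits = []
        for label, pattern in CONCEPTS.items():
            for m in re.finditer(pattern, note, re.IGNORECASE):
                hits.append((label, m.group(0), m.start()))
        for m in LAB_PATTERN.finditer(note):
            hits.append(("lab_value", f"{m.group('test')}={m.group('value')}%", m.start()))
        return hits

    if __name__ == "__main__":
        for label, text, offset in extract(NOTE):
            print(f"{offset:4d}  {label:10s}  {text}")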


T05: The Evolution of Quality Measurement: New Horizons in Standards, Specifications, and Reporting

M. Dardis, M. Biglari, P. Craig, The Joint Commission

Electronic clinical quality measures (eCQMs) hold promise for minimizing manual data collection and reporting burden on providers while accurately reflecting the quality of care delivered. Delivering on this vision requires multi-stakeholder involvement in the development of standards, tools, and specifications that support eCQM reporting. In this session, Joint Commission eCQM experts will describe the eCQM lifecycle, from standards generation, to measure specification, to data reporting, and facilitate a discussion of limitations, successes, and future opportunities that will support and improve relevant and useful eCQM reporting.

Beginning with an introduction to quality measurement theory and the unique data needs for quality measurement, the instructors will discuss quality data standards and their evolution, including QRDA, HQMF, QDM, and CQL, and will describe how these standards are applied in measure development and reporting, using Joint Commission measure examples to demonstrate these principles. Moving step by step through the eCQM lifecycle, instructors will review quality standards development, eCQM measure specification, and data submission, describing the processes and tools used in each step. Finally, instructors will discuss current eCQM challenges related to standards and tools, as well as lessons learned, and facilitate discussion of further opportunities to improve eCQM standards and the eCQM lifecycle.


WG01: Effective Change Management Techniques for Health IT Implementation and Optimization (sponsored by the AMIA People and Organizational Issues Working Group)

R. Murphy, University of Texas Health Science Center at Houston; C. Shea, R. Kitzmiller, University of North Carolina

Leadership and management of change is one of the four core content areas of the clinical informatics subspecialty and is frequently referenced as a key determinant of success in transformational efforts.

This tutorial will provide attendees with practical guidance and tools for leading technology-driven change efforts to improve quality in their organizations. These change efforts may involve implementation of new technology or optimization of existing technology and may be inspired by internally derived quality improvement goals, external reporting demands and/or reimbursement requirements.

The instructors will begin by demonstrating the importance of aligning technology-driven change efforts to an organization’s strategic goals as well as establishing the business case for managing such change efforts effectively. Using an interactive, case-oriented approach as well as Poll Everywhere technology and small-group discussions to facilitate participation, the instructors will then guide attendees through key aspects of leading implementation and optimization, including governance and stakeholder analysis, organizational readiness assessment, workflow modeling, and communication strategy. Through small group discussion of these topics, attendees will (1) identify barriers to implementation and optimization; (2) apply tools useful for leading their organizations through these barriers; and (3) gain insight into various methods for evaluating the level of success of an implementation or optimization effort.

Content for the tutorial will be informed by the complementary fields of informatics, innovation theory, implementation science, and change management. The instructors will also draw upon examples from their experience with efforts to implement and optimize health information systems in both inpatient and ambulatory care settings. The combination of cases, polling technology, and small-group discussions (e.g., 4-5 attendees) will enable attendees to reflect individually upon the content, learn from the experiences and insights of fellow attendees, and apply practical tools to specific implementation and optimization scenarios.

Saturday, Nov. 12, 1:00 p.m. - 4:30 p.m. (Half-day sessions)

T08: You Had Me at Hello: Telling your Biomedical Informatics Story Through Film

K. Johnson, Vanderbilt University Medical Center

Communication of informatics results can be challenging. The byproduct of scholarly work traditionally comes in many forms, including peer-reviewed publications, posters, presentations, and books or monographs. In recent years, low-cost, high-quality software has made it easy to deliver information using multimedia formats, such as documentaries and short films. Over the past 8 years, the presenter planned and completed a feature-length film about health information exchange, based on a multiyear program evaluation project. There were specific reasons for this choice, and for many of the editorial decisions used to convey that story. This project leveraged a number of traditional skills used to communicate science, while also taking the presenter on a journey into a process rarely explored and previously underutilized by experts in clinical informatics.

This session will focus on lessons learned from the making of the film “No Matter Where,” and will shed light on the process of filmmaking. The author will discuss the entire project, including conception of the vision, fund-raising, team development, script construction, film-making strategies, post-production work, film festival submissions, and distribution. The tutorial will be highly interactive and will make use of documentary content to illustrate some key points about the process of filmmaking. The tutorial will utilize never-before-seen footage from the making of No Matter Where (http://www.nomatterwherethemovie). The tutorial will inform informaticians interested in using this type of media to communicate their research results or to focus attention on an important informatics topic. This will be a most fun and informative AMIA tutorial.


T10: Resources for Analyzing Drug Prescription Datasets

O. Bodenreider, V. Huser, C. Reich, IMS Health

Large prescription datasets have become increasingly available to researchers (e.g., claims data from Medicare and private insurance companies, pharmacy data from clinical institutions, feeds from health information networks, such as Surescripts). Prescription data are generally recorded at a level that is very detailed (e.g., with National Drug Codes (NDCs) that include packaging information), and often need to be aggregated for meaningful clinical analysis (e.g., at the level of the ingredient or drug class).

Resources such as RxNorm, the standard terminology for drugs in the U.S., can help map NDCs to RxNorm concepts for clinical drugs. RxNorm also supports aggregation by linking clinical drugs to their ingredients and to drug classes from ATC, MeSH, NDF-RT, and DailyMed. The RxNorm and RxClass APIs facilitate the use of RxNorm for aggregation purposes.
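
As a brief illustration of this kind of aggregation, the sketch below maps an NDC to RxNorm concept identifiers and then to ATC class names using the publicly available RxNav web services. The endpoint paths, parameters, and response fields shown are best-effort assumptions and should be checked against the current RxNav/RxClass documentation; the NDC is only a placeholder.

    # Sketch: NDC -> RxCUI -> ATC class names via the NLM RxNav web services.
    # Paths and response fields are assumptions; verify against RxNav docs.
    import requests

    RXNAV = "https://rxnav.nlm.nih.gov/REST"

    def ndc_to_rxcuis(ndc):
        """Return RxNorm concept identifiers (RxCUIs) for a National Drug Code."""
        r = requests.get(f"{RXNAV}/rxcui.json",
                         params={"idtype": "NDC", "id": ndc}, timeout=10)
        r.raise_for_status()
        return r.json().get("idGroup", {}).get("rxnormId", [])

    def rxcui_to_atc_classes(rxcui):
        """Return the ATC class names that RxClass associates with an RxCUI."""
        r = requests.get(f"{RXNAV}/rxclass/class/byRxcui.json",
                         params={"rxcui": rxcui, "relaSource": "ATC"}, timeout=10)
        r.raise_for_status()
        infos = r.json().get("rxclassDrugInfoList", {}).get("rxclassDrugInfo", [])
        return sorted({i["rxclassMinConceptItem"]["className"] for i in infos})

    if __name__ == "__main__":
        ndc = "00071015527"  # placeholder NDC; substitute a code from your dataset
        for rxcui in ndc_to_rxcuis(ndc):
            print(rxcui, rxcui_to_atc_classes(rxcui))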

The first part of this tutorial presents basic information about prescription datasets and resources for analyzing them, with emphasis on RxNorm. In the second part, we demonstrate an application of these resources to common use cases, including the comparison of prescribed vs. defined daily doses for drugs and the identification of potentially inappropriate medications (e.g., during pregnancy, for the elderly). Finally, we present the experience of the OHDSI (Observational Health Data Sciences and Informatics) community in integrating various kinds of drug data in a large clinical data warehouse compliant with the OMOP common data model.


T11: Computational Phenotyping Methods

Y. Luo, J. Sun, X. Jiang, F. Wang, Northwestern University

In medicine, the term phenotype refers to observable properties of a cohort of patients based on the interaction of their genotypes and the environment. Precision medicine initiatives come amid rapid growth in the quantity and variety of biomedical data, which often exceeds the capacity of physicians and researchers to phenotype manually. The task of computational phenotyping is to extract a phenotype from complex and heterogeneous data sources and/or to predict clinically important phenotypes before they are observed. Depending on the nature of the task, computational phenotyping can be formulated as either an unsupervised or supervised learning problem. The models are further complicated by heterogeneous inputs such as structured data (including laboratory values, medication prescriptions, vital sign records, and procedure codes) and unstructured data (such as clinical narratives). Different categories of data call for different processing pipelines, which can be customized based on the phenotypes of interest. An additional concern in computational phenotyping is patient privacy when pooling multi-institutional data to improve generalizability. This tutorial will introduce to the medical informatics community the various modules and associated methods of phenotyping pipelines, and best practices for integrating these modules and designing supervised or unsupervised learning objectives for phenotyping.
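
To make the supervised formulation concrete, the sketch below trains a logistic regression classifier on synthetic structured features (hypothetical lab, medication, and diagnosis-code counts) to predict a binary phenotype label. It is a generic illustration, not the presenters' pipeline; the features and label-generation process are invented for the example.

    # Generic supervised-phenotyping sketch on synthetic structured features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Synthetic patient-level features: [mean HbA1c, medication Rx count, diagnosis-code count]
    X = np.column_stack([
        rng.normal(6.5, 1.2, n),
        rng.poisson(1.0, n),
        rng.poisson(2.0, n),
    ])
    # Synthetic "chart-reviewed" labels correlated with the features
    logits = 1.5 * (X[:, 0] - 6.5) + 0.8 * X[:, 1] + 0.4 * X[:, 2] - 2.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUROC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))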


WG07: Aligning Consumer Health Informatics Tools with Patient Work: Translating Research Findings into Technology Design (sponsored by the AMIA Consumer and Pervasive Health Informatics, Evaluation, and People and Organizational Issues Working Groups)

L. Lovett Novak, Vanderbilt University; R. Valdez, T. Veinot, University of Virginia; R. Holden, Indiana University

Consumer health informatics (CHI) tools such as remote monitoring systems, personal health records, decision support systems, and online health communities are increasingly being created and deployed to support patients as they manage health conditions in everyday life. Research suggests that design and implementation approaches facilitating integration of CHI tools into users’ daily routines may lead to more extensive adoption of these technologies. This tutorial will engage participants in the analysis of research data and translation of findings into concrete design elements for CHI tools and interventions. First, the session will engage participants in a collaborative discussion to establish the rationale for a contextually-informed, participatory approach to design. Next, the faculty will facilitate as participants work through a sample case, analyzing findings and developing both descriptive and prescriptive design guidance, with scenarios of use, personas, and concrete design elements as products. Then the participants will work in small groups on cases assigned to them, going through the process of examining findings and using a participatory approach to develop mock-ups of CHI designs. Finally, the faculty will facilitate a discussion of each group’s design results, and the insights produced by the tutorial. Upon completing the session, participants will be able to evaluate different approaches to generating design guidance from field data and determine which approach is most suitable for a given design challenge, translate results from qualitative and quantitative analyses into descriptive guidance that takes the form of personas and scenarios, and articulate how to engage with patients and their formal and informal caregivers to move from descriptive to prescriptive design guidance.


Saturday, Nov. 12, 8:30 a.m. - 4:30 p.m. (Full-day sessions)

T06: AMIA 2016 CMIO Workshop

P. Fu, Harbor-UCLA Medical Center; R. Schreiber, Holy Spirit Hospital; J. Hollberg, Emory Healthcare; J. Kannry, Mount Sinai Medical Center

With the arrival of clinical informatics board certification for physicians, AMIA support for the applied clinical informatics communities has become more important than ever. A major part of that support is outreach to Chief Medical Information Officers (CMIOs) and those in similar roles (such as Medical Directors for Information Systems), who are charged with leading informatics change within their organizations, both large and small. AMIA is uniquely positioned to serve as the professional “home” for the CMIO community, because it can provide a combination of personal experience and anecdote with firm grounding in evidence-based biomedical informatics literature, informatics theory, foundational knowledge, and proven best practices, in a thoughtful and coherent educational setting. More than 150 individuals have attended the CMIO Workshop since its inception in 2011, and more than 60 individuals participated in 2015, ranging from seasoned CMIOs of large systems to those who are just beginning their applied clinical informatics careers. The goal of the 2016 CMIO Workshop is to focus on the introduction of new topics that will attract repeat attendees, while providing up-to-date content for those who are exploring or new to the field.


T07: Beyond the RCT: Practical Study Designs for Evaluating Informatics in the Learning Health System

J. Ancker, Weill Cornell Medical College; C. Friedman, University of Michigan

Biomedicine accepts the randomized controlled trial (RCT) as the gold standard for research and program evaluation designs, but RCTs have limitations that mean they are sometimes difficult or impractical to apply in informatics. Researchers and operational informaticists may not be fully aware of the range of high-quality alternatives to the RCT and when it is appropriate to use them. This tutorial will introduce a range of quantitative study approaches that are alternatives to the RCT, including methods that enable efficient cycles of development and evaluation, as well as methods for pragmatic evaluation of innovations in practice.


T18: 3rd International Workshop on Genome Privacy and Security (GenoPri'16)

Over the past several decades, genome sequencing technologies have evolved from slow and expensive systems, accessible only to a select few scientists and forensics investigators, to high-throughput, relatively low-cost tools that are available to consumers. A consequence of such technical progress is that genomics has become one of the next major challenges for privacy and security because (1) genetic diseases can be unveiled, (2) the propensity to develop specific diseases (such as Alzheimer’s) can be revealed, (3) a volunteer who agrees to have their genomic code made public can leak substantial information about their ethnic heritage and the genomic data of their relatives (possibly against their will), and (4) complex privacy issues can arise if DNA analysis is used for criminal investigations and medical purposes.

As genomics is increasingly integrated into healthcare and "recreational" services (e.g., ancestry testing), the risk of DNA data leakage is serious for both individuals and their relatives. Failure to adequately protect such information could lead to a serious backlash that impedes genomic research and affects the well-being of our society as a whole. This prompts the need for research and innovation in all aspects of genome privacy and security.

The goal of GenoPri’16 is to foster research aimed at understanding and addressing all privacy and security issues in genomics. It brings together a highly interdisciplinary community involved in all aspects of genome privacy and security research. For additional information, please visit genopri.org.


WG02: Workshop on Visual Analytics in Healthcare

J. Caban, NICoE, Walter Reed National Military Medical Center; A. Perer, IBM Research; U. Backonja, University of Washington School of Medicine; J. Warner, Vanderbilt University; S. Fischer, RAND Corporation

As medical organizations increasingly embrace health information technology (HIT), public health organizations implement informatics solutions for disease surveillance, and consumer health informatics (CHI) tools become more ubiquitous, the amount of data available to healthcare providers, health practitioners, and individuals in the community is growing at an unprecedented rate. This vast amount of health-related data poses cognitive and comprehension challenges for medical professionals trying to understand patients' medical histories and conditions at the point of care, researchers investigating individual- and population-level exposure and outcome data, and individuals making sense of data they gather in their everyday lives. Visual analytics and information visualization have the potential to support cognitive processes, understanding, and decision-making for anyone gathering and using vast amounts of health data. Given the strong turnout at the AMIA 2013 and 2014 workshops and the great participation at the 2015 AMIA tutorial on “Introduction to Visual Analytics in Healthcare,” we are hosting a follow-up workshop at AMIA 2016. We believe that hosting this workshop at AMIA will give technical scientists the opportunity to discuss their latest clinical data visualization techniques with the clinical informatics community.


WG03: Graduate Student Consortium and ‘Hackathon’ (sponsored by the AMIA Natural Language Processing Working Group)

S. Meystre, University of Utah; H. Liu, Mayo Clinic; R. Xu, Case Western Reserve University; S. Arabandi, Ontopro; K. Wagholikar, Massachusetts General Hospital; J. Patrick, Health Language Analytics; G. Savova, Boston Children’s Hospital; C. Weng, Columbia University; P. Zweigenbaum, LIMSI-CNRS; D. Demner-Fushman, National Library of Medicine; O. Uzuner, SUNY; H. Xu, University of Texas Health Science Center at Houston

The application of Natural Language Processing (NLP) methods and resources to clinical and biomedical text has received growing attention over the past years, but progress has been limited by difficulties in accessing shared tools and resources, partially caused by patient privacy and data confidentiality constraints. Efforts to increase the sharing and interoperability of the few existing resources are needed to facilitate the kind of progress observed in the general NLP domain. To answer this need, the AMIA NLP Working Group pre-symposium continues its tradition, begun in 2012, of providing a unique platform for close interactions among students, scholars, and industry professionals who are interested in clinical NLP. The event will consist of two sections: 1) a graduate student consortium, where students can present their work and get feedback from experienced researchers in the field; and 2) a ‘hackathon’ of NLP tools, where developers of these tools will present them to users and help those users apply the tools to practical NLP tasks in groups.

https://sites.google.com/site/nlpwgpresymposium2016


WG04: Primary Care Informatics in the Second Decade of Health Information Technology: Challenges, Lessons Learned and Work Remaining to be Done (sponsored by the AMIA Primary Care Informatics Working Group)

S. Morgan, Partners Healthcare Inc.; A. Zuckerman, Georgetown University; T. Agresta, University of Connecticut Medical School; R. Hausam, Hausam Consulting, LLC; D. Pandita, Park Nicollet Health System; S. Kooienga, University of Wyoming; M. Jenkins, Northwestern University; L. Hogan, University of Pittsburgh; M. Peifer, Lehigh Valley Health Network

The first decade of Health Information Technology focused on the development and implementation of Electronic Health Record systems. As we enter the second decade of Health Information Technology, primary care providers still face a number of challenges in daily practice. Some issues have been solved by the implementation of technologies, some have been made worse, and others have yet to be addressed. Primary Care Informatics has its roots in the end of the first decade and the beginning of the second, with the aim of guiding providers through this new era. The goals of this session are to explore some of the challenges faced by primary care providers and to offer some creative solutions. The faculty hope to answer two questions: what have we learned from the first decade of HIT, and, now that most primary care providers are using EHR systems, what is needed in the second decade of HIT to achieve a sustainable, efficient, learning healthcare system?


WG05: Data Mining for Medical Informatics (DMMI) – Learning Health (sponsored by the AMIA Knowledge Discovery and Data Mining Working Group)

F. Wang, University of Connecticut; G. Stiglic, University of Maribor; M. van der Schaar, UCLA; D. Sontag, NYU; C. Yang, Drexel University

The life and biomedical sciences are massively contributing to the big data revolution, due to advances in genome sequencing technology and digital imaging, the growth of clinical data warehouses, the increased role of patients in managing their own health information, and the rapid accumulation of biomedical knowledge. In this context, data mining and machine learning techniques, with the goal of discovering knowledge and deriving data-driven insights from various data sources, have played an increasingly important role in medical informatics. Effective data mining approaches have been applied to many medical problems, including drug development, personalized medicine, disease modeling, cohort studies, and comparative effectiveness research. The main theme of the workshop this year is learning health, which aims to derive actionable and timely insights from the real-world experience of millions of patients and make them useful to clinicians, patients, and all other healthcare stakeholders. This topic has recently received a great deal of interest and debate. We invite researchers from both academia and industry who are interested in this topic to participate in the workshop, share their opinions and experience, and discuss future directions.


WG06: Designing Next Generation of Clinical Decision Support for Nursing from Hospital to Homecare (sponsored by the AMIA Nursing Informatics Working Group)

M. Topaz, Harvard Medical School/Brigham and Women's Hospital; S. Collins, Brigham and Women's Hospital/Harvard Medical School/Partners Healthcare Systems

Clinical decision support (CDS) interventions within and beyond Electronic Health Records (EHRs) are playing an increasingly important role to enhance and support nurse decision making across the care continuum. Individuals at clinical sites looking to seize the opportunity to design, implement, and test CDS interventions raise a number of questions: 1) What is the CDS lifecycle and what are the unique applications of the CDS lifecycle to nursing?, 2) What existing tools are used to support nurses’ decision making?, 3) What is the path towards commercializing nursing clinical decision support interventions to drive adoption and use?, 4) What are the best methods to design CDS interventions for seamless integration into nursing practice?, and 5) How can artificial intelligence support nurses in making patient centered decisions? In this all-day NIWG tutorial, we will address these 5 areas related to opportunities for nursing CDS through a series of presentations, interactive demonstrations, and hands-on sessions.

Sunday, Nov. 13, 8:30 a.m. - 12:00 p.m. (Half-day Sessions)

T09: Clinical Decision Support: A Practical Guide to Developing your Program to Improve Outcomes

R. Jenders, Charles Drew University/UCLA; J. Osheroff, TMIT Consulting, LLC; J. Teich, Elsevier; D. Sittig, R. Murphy, University of Texas Health Science Center at Houston

This tutorial will provide attendees with a practical approach to developing and deploying clinical decision support (CDS) interventions that measurably improve outcomes of interest to a health care delivery organization. The instructors will first examine in detail the key building blocks of a CDS program, including creating and enhancing organizational structures for CDS success; identifying information systems that provide the data that drive CDS interventions; leveraging clinical workflow to optimize CDS interventions; establishing processes and systems for measuring the outcomes of these interventions; and managing the expert clinical and scientific knowledge that informs these interventions. The instructors will then show how to leverage these building blocks to address key steps in developing, implementing, managing, and evaluating CDS interventions, including selecting interventions to deliver targeted improvements in health care; configuring those interventions for specific environments; putting the interventions into action; measuring the results of the CDS interventions; and, in turn, refining the program based on those results.

Additional discussion will touch on the role of national programs relevant to CDS, especially the profound shift by both public and private health care payers from paying for volume to paying for value, including health care quality improvements enabled by CDS. Other topics will include knowledge sharing, such as the impact of the FHIR standard and application programming interfaces (APIs) for CDS in commercial EHR systems; structured guidelines; and special considerations for CDS for small clinical practices, for hospitals and health systems, for patients and for vendors. Further, following interactive presentations by the instructors, attendees will divide into small groups and participate in a highly interactive exercise in planning and designing a CDS project to address a specific clinical target, facilitated by the instructors.

Overall, this systematic approach to CDS implementation will be presented in an interactive, case-oriented fashion, incorporating examples provided by tutorial leaders and participants’ experiences. The course content is drawn from the tutorial leaders' popular and award-winning guidebook series on improving outcomes with clinical decision support, the last two volumes of which (in 2009 and 2012) were co-published by AMIA.


T12: Innovations in Interoperability & Standards Implementation: HL7 FHIR & the Argonaut Project

C. Jaffe, Health Level 7 International; S. Huff, Intermountain Health; M. Tripathi, Massachusetts eHealth Collaborative; J. Mandel, Boston Children's Hospital; W. Hammond, Duke University

Traditional standards development processes are too slow and inefficient. Moreover, the means for exchanging data have not facilitated data reuse for a broad range of purposes, including quality evaluation, decision support, clinical research, primary medical science applications, public health, and comparative effectiveness. The JASON Task Force defined an achievable pathway. At HL7, we are applying innovative approaches to realizing these goals. The Argonaut Project and the FHIR Foundation provide resources for the implementation community.
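
As context for the discussion, the sketch below shows what a basic FHIR RESTful search looks like from client code. The base URL is assumed to point at a public test server and should be replaced with an endpoint (and FHIR version) of your own; this illustrates the FHIR REST style in general, not the Argonaut specification itself.

    # Minimal FHIR REST example: search for Patient resources by family name.
    # The server URL is a placeholder; adjust it and the expected fields to the
    # FHIR version your server supports.
    import requests

    FHIR_BASE = "http://hapi.fhir.org/baseDstu3"   # assumed public test server
    HEADERS = {"Accept": "application/fhir+json"}

    resp = requests.get(f"{FHIR_BASE}/Patient",
                        params={"family": "Smith", "_count": 5},
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    bundle = resp.json()                           # a FHIR Bundle resource
    print("Matches reported by server:", bundle.get("total"))
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        print(patient.get("id"), [name.get("family") for name in patient.get("name", [])])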


T13: Agile Clinical Decision Support

D. Willett, V. Kannan, University of Texas Southwestern Health System

Designing effective Clinical Decision Support (CDS) tools in an Electronic Health Record (EHR) can prove challenging, due to complex real-world scenarios and newly-discovered requirements. As such, deploying new CDS EHR tools shares much in common with new product development, where “agile” principles and practices consistently prove more effective than traditional project management. Typical agile principles and practices can thus prove helpful on CDS projects, including time-boxed “sprints” and lightweight requirements gathering with User Stories and acceptance criteria. Modeling CDS behavior removes ambiguity and promotes shared understanding of desired behavior, but risks analysis paralysis: an Agile Modeling approach can foster effective rapid-cycle CDS design and optimization. The agile practice of automated testing for test-driven design and regression testing can be applied to CDS development in EHRs using open-source tools. Ongoing monitoring of CDS behavior once released to production can identify anomalies and prompt rapid-cycle redesign to further enhance CDS effectiveness. The tutorial participant will learn about these topics in interactive didactic sessions, with time for practicing the techniques taught.
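
To illustrate the automated-testing practice mentioned above, the sketch below wraps a toy CDS rule in pytest-style regression tests. The rule, its threshold, and the test names are hypothetical and are not drawn from any particular EHR or from the presenters' material; the point is that acceptance criteria from a user story can be encoded as executable checks.

    # Hypothetical CDS rule plus pytest-style regression tests encoding its
    # acceptance criteria; run with `pytest` to guard against regressions.
    def sepsis_lactate_alert(lactate_mmol_per_l, on_sepsis_pathway):
        """Fire an alert when lactate is elevated for a patient on a sepsis pathway."""
        return on_sepsis_pathway and lactate_mmol_per_l >= 2.0

    def test_alert_fires_for_elevated_lactate_on_pathway():
        assert sepsis_lactate_alert(3.1, on_sepsis_pathway=True)

    def test_no_alert_when_not_on_pathway():
        assert not sepsis_lactate_alert(3.1, on_sepsis_pathway=False)

    def test_no_alert_for_normal_lactate():
        assert not sepsis_lactate_alert(1.2, on_sepsis_pathway=True)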


T14: Building Successful Natural Language Processing Applications in Clinical Research and Healthcare Operations

Y. Huang, Data Wise Health Inc./Witspring Health; H. Xu, University of Texas Health Science Center at Houston; J. Denny, Vanderbilt University Medical Center

Over the last few decades, growing adoption of Electronic Health Record (EHR) systems has made massive amounts of clinical data available electronically. However, over 80% of clinical data are unstructured (e.g., narrative clinical documents) and are not directly accessible to computerized clinical applications. Therefore, natural language processing (NLP) technologies, which can unlock information embedded in clinical narratives, have received great attention in the medical domain. Many NLP methods and systems have been developed for medicine. However, it is still challenging for new users to decide which NLP methods or tools to pick for their specific applications. In fact, there is a lack of best practices for building successful NLP applications in the medical domain.

In this tutorial, we propose best practices for using clinical NLP to solve real-world problems. We will start with an introduction to basic NLP concepts and available tools, and then focus on two important applications of NLP: 1) extracting phenotypic information from EHRs to support clinical research; and 2) facilitating real-time decision support systems in clinical operations. We plan to use lectures, demonstrations, and hands-on exercises to cover the basic knowledge and tools, and to use case studies to illustrate important trade-offs in the design and implementation of clinical NLP applications. Each of the three instructors has over 10 years of experience in clinical NLP research and application. Case studies will borrow heavily from their experience as a clinician, a researcher, and an application developer to share recommendations for building successful NLP applications in healthcare research and operations.


T15: Large Scale Clinical Text Processing and Process Optimization

S. DuVall, P. Alba, O. Patterson, VA Salt Lake City Health Care System/University of Utah

This tutorial outlines the benefits and challenges of processing large volumes of clinical text with natural language processing (NLP). As NLP becomes more available and is able to tackle more complex problems, the ability to scale to millions of clinical notes must be considered. The Department of Veterans Affairs (VA) has more than 2 billion clinical notes and we have developed NLP libraries to be able to approach projects of that scale. Participants will be introduced to existing tools and resources for large-scale NLP tasks, including Unstructured Information Management Architecture Asynchronous Scaleout (UIMA AS), the VA Leo NLP libraries, and the JMX Analysis Module (JAM) monitoring tool. The methods of computational performance analysis will be described and process optimization solutions will be demonstrated. Participants will be walked through a scenario of creating and launching an asynchronous NLP pipeline, monitoring it for performance metrics and identifying bottlenecks, and redeploying the pipeline with an optimal configuration. The tutorial will be presented by three instructors involved in the development and use of Leo and JAM in the VA.


T16: Social Media Research in the Health Domain

L. Fernandez-Luque, Qatar Computing Research Institute; F. Martin-Sanchez, Weill Cornell Medicine/Cornell University; I. Weber, Qatar Computing Research Institute; E. Yom-Tov, Microsoft Research; C. Petersen, Mayo Clinic

The use of social media in the health domain started in the late 1990s with the appearance of e-mail lists and online forums. Today, the Internet is used by millions of people to discuss, share, and learn about their health concerns. In the USA alone, hundreds of hospitals engage with their patients using social media. Public health interventions and surveillance increasingly rely on social media to engage with the public. Furthermore, mobile health apps are being integrated with social media to increase peer support and engagement. This tutorial aims to provide an overview of the research foundations of social media use in the health domain. Participants will learn i) how research in the area has evolved, ii) the different types of research that can be done to address a wide range of health challenges, iii) the ethical and privacy challenges that must be addressed to avoid socio-ethical pitfalls, and iv) future trends of research in this area, in particular the use of social media in precision medicine, health big data, and patient empowerment.


T17: Disseminating Informatics Knowledge: Peer-review and Scholarly Publications

M. Chiang, Oregon Health & Science University; L. Ohno-Machado, University of California San Diego

Peer review and scholarly publication are pillars of the scientific process. Reviewing and writing require a combination of expert knowledge, writing skills, critical thinking, and decision making. Reviewing the informatics literature is a complex task, particularly given the breadth of the field and the wide diversity of topics and of authors’ educational or professional backgrounds. It is helpful to understand what reviewers expect in a manuscript, and this understanding is particularly important for those who are starting their careers. We will cover the ABCs of reviewing and the scientific publication process, utilizing practical examples and testimonials from editors, authors, reviewers, and readers. We will also explain the ethical imperatives of peer reviewing and editing, and address issues such as duplication of materials, plagiarism and self-plagiarism, fabrication or falsification of results, and responsible conduct of research related to scholarly publication. Finally, we will give practical hints on how to prepare a CV and personal statement for consideration by committees selecting candidates for editorial boards and related panels. We encourage anyone interested in writing, reviewing, and disseminating informatics papers to join this interactive tutorial. Many elements of what we will discuss also apply to grant reviews and to positioning oneself to compete for academic jobs.


WG08: Ethical, Legal and Social Issues: A 20th Anniversary (sponsored by the AMIA Ethical, Legal and Social Issues Working Group)

T. Solomonides, NorthShore University HealthSystem

AMIA’s Ethics Committee celebrates its 20th anniversary in 2016. This occasion for celebration comes at a time when ethical, legal and social (ELS) issues are raising concerns throughout the biomedical informatics community. This pre-symposium will offer both a refresher review of ELS issues and a discussion by domain experts on current issues in the community. These include the large-scale sharing of de-identified data for research; the sale of personal health information for use by for-profit corporations and others; the ethical dilemmas posed by biobanks and sharing of genetic data; the largely ungoverned collection of biodata for personal use (quantified self) and corresponding loyalty data by businesses; recent changes in the Common Rule and the discussion around those proposed changes; changes in IRB structures and modes of operation. After a brief opening to mark the occasion, the program will continue with a review by Professor Kenneth Goodman. There will then follow a number of brief expert expositions, position statements and critical analyses of relevant topics.


WG09: The Emerging Role of the Chief Research Informatics Officer in Academic Medical Centers (sponsored by the AMIA Clinical Research Informatics Working Group)

L. Sanchez-Pinto, The University of Chicago; K. Fultz Hollis, Oregon Health & Science University; A. Mosa, University of Missouri; J. Logan, Oregon Health & Science University; T. Solomonides, NorthShore University HealthSystem

The role of the chief research informatics officer (CRIO) is emerging in academic medical centers to address the challenges faced by the research informatics enterprise in our rapidly evolving, data-intensive healthcare system. Most CRIOs are the first officers at their institutions to hold that role and are poised to help advance the field of biomedical research in the era of digital healthcare. During this pre-symposium, three invited CRIOs will review the history and emergence of the CRIO role, the challenges and opportunities they face today, and the future of the CRIO and research informatics in academic medical centers.


WG10: Patient-Generated Health Data in Action (sponsored by the AMIA Consumer and Pervasive Health Informatics Working Group)

R. Austin, University of Minnesota; A. Lai, Ohio State University; P. Hsueh, IBM T.J. Watson Research Center

This pre-symposium is designed to inform, educate, and update colleagues on patient-generated health data (PGHD) in action. It will explore three areas: (1) presentations and a panel discussion from individuals currently working with PGHD across three major areas: PGHD for population health management, PGHD for prevention, and PGHD for participatory health; (2) updates on the state of the science of PGHD; and (3) an opportunity for deeper discussion of potential barriers to implementing PGHD and for creating innovative solutions to those barriers. Participants will learn from panelists' lessons learned in implementing and using PGHD, and how this work is gaining recognition and usage across health systems. The current state of the science related to PGHD will be explored. Participants will engage in deeper discussion and dialogue with fellow participants and will be able to discuss barriers within their own organizations, along with strategies to overcome them.


WG11: Quantitative Imaging and Imaging Informatics in the Era of Precision Medicine (sponsored by the AMIA Biomedical Imaging Informatics Working Group)

L. Cooper, Emory University/Georgia Tech; J. Kalpathy-Cramer, Harvard Medical School/Massachusetts General Hospital; W. Hsu, UCLA; A. Sharma, Emory University

Biomedical Imaging spans the scale from microscopic and molecular to whole-body visualization and encompasses many areas of medicine, such as radiology, pathology, dermatology, and ophthalmology. It plays a vital role in patient care and is routinely used in diagnosing disease and assessing treatment. Image interpretation has historically been qualitative in nature. However, with improvements in image acquisition and advancements in computational capabilities, we are now at the early stages of a transition from analyses based on human observers to the use of advanced computing platforms and software to automatically extract large sets of quantitative image features relevant to prognosis or treatment response. These feature sets can be used to infer phenotypes or correlate with gene–protein signatures, and will help incorporate imaging into precision medicine.

The objective of this pre-symposium is to assemble an interdisciplinary group of experts to share methods and experiences in biomedical imaging informatics toward effectively integrating imaging and image-derived data with other clinical data to enable precision medicine. This year’s overarching theme focuses on the opportunities and challenges of bridging phenotypic information from images with clinical and molecular characterizations for accurate diagnosis and treatment selection. The event will touch upon topics such as extracting and retrieving semantic content from large imaging archives, utilizing imaging features as part of electronic health record-based phenotyping, applying machine learning to uncover correlations between images and other biological scales, characterizing the significance of evolutionary features derived from images, and translating multi-scale disease models into practice. These topics synergize tightly with the broader informatics interests of AMIA attendees and will raise their awareness of the opportunities and relevance of imaging informatics research to other biomedical informatics activities.


WG12: Mind the Communications Gap: Communicating about Biomedical Informatics with the Public (sponsored by the AMIA People and Organization Issues, Evaluation, Genomics and Translational Bioinformatics, and Consumer and Pervasive Health Informatics Working Groups)

K. Unertl, Vanderbilt University

Members of AMIA excel at communicating with our biomedical informatics peers, as demonstrated by highly successful AMIA meetings and symposia. Despite this, biomedical informatics as a field is often missing out on the opportunity to share information about our research with non-scientific and non-technical audiences. As the scope of biomedical informatics continues to expand beyond the healthcare system and into homes, communities, and schools, biomedical informatics professionals need to learn how to translate research and informatics concepts to new audiences. During this tutorial, participants will learn about the importance of engaging with non-scientific audiences and about core principles of communicating complex scientific concepts to the public. Attendees will also participate in multiple hands-on activities designed to help focus their messages, overcome the use of jargon, and interact with diverse audience needs. Participants will leave the tutorial with a personalized toolbox of scientific communication skills.


WG13: Mining Large-scale Cancer Genomics Data Using Cloud-based Bioinformatics Approaches (sponsored by the AMIA Genomics and Translational Bioinformatics Working Group)

S. Volchenboum, J. Andrade, R. Bao, K. Hernandez, University of Chicago

Next-generation sequencing (NGS) is now routine in cancer research, genetic testing, and the application of precision medicine. Despite the proliferation of tools and the commoditization of high-performance computing, it remains a challenge to transform the enormous amounts of sequencing data into meaningful biological information. While a single run of ultra-high throughput sequencing could generate 6 billion reads covering 20 whole human genomes, analysis of these data requires advanced computational skills, access to high-performance computing infrastructure, costly commercial software, and specialized consultants.

In this AMIA Genomics and Translational Bioinformatics Working Group pre-symposium, we provide guidelines and project-oriented, hands-on training on the key components of analyzing multi-dimensional cancer genomics data. Experienced bioinformaticians and computational biologists will guide participants through the analysis of RNA-Seq and ChIP-Seq data and the visualization of results using Amazon Web Services (AWS) EC2 as a virtual infrastructure. Participants will gain knowledge in experimental design and sample collection, and experience with popular bioinformatics tools for data preprocessing, alignment, quantification, differential analysis, and visualization. Participants will leave the workshop with the skills to start analyzing their own datasets using cloud infrastructure.