More than algorithms: an analysis of safety events involving ML-enabled medical devices reported to the FDA

Presenters

Dr. David Lyell
Australian Institute of Health Innovation

Dr. David Lyell is a Postdoctoral Research Fellow at the Australian Institute of Health Innovation, Macquarie University. David studies the use of AI/ML technologies in healthcare, in particular their effects on clinical decision-making and impact on care delivery. His most recent work focuses on AI/ML-enabled medical devices as artefacts of the real-world use of AI/ML technologies in healthcare.

Farah Magrabi
Australian Institute of Health Innovation

Farah Magrabi is a Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation, Macquarie University. She has a background in Electrical and Biomedical Engineering and is an expert in the safety and effectiveness of digital health and AI.

Professor Magrabi leads the NHMRC Centre of Research Excellence in Digital Health's Safety research stream; is co-chair of the Australian AI Alliance’s Working Group on safety, quality, and ethics; and is one of Australia’s representatives on the OECD Global Partnership for AI (GPAI).

Moderator

Katherine Brown, PhD
Postdoctoral Research Fellow Trainee
Vanderbilt University Medical Center

Statement of Purpose

As machine learning (ML)-enabled technologies are introduced into clinical care, it is crucial to evaluate their safety and efficacy. While their benefits are well documented in the literature, far less is known about their safety. Initial safety research has centered on the limitations of machine learning, such as its black-box nature and susceptibility to data biases, and on case studies of particular events.

The present study sought to extend this safety research with a systematic analysis of adverse events involving ML-enabled medical devices captured as part of the Food and Drug Administration’s post-market surveillance program. By analyzing adverse events involving ML devices, as artefacts of the real-world implementation of ML, we demonstrate the need to broaden our perspective of safety beyond algorithms. Most safety events involved the data used by medical devices, while problems with the use of ML devices were four times more likely to result in harm.

Learning Objectives

  • Understand the types of safety problems associated with the use of machine learning (ML)-based decision support systems and their consequences.
  • Identify how and where safety problems can occur when ML systems are used by clinicians and consumers.
  • Formulate approaches to improve the safe design, implementation, and use of ML systems.
Dates and Times: -
Type: Webinar
Course Format(s): On Demand
Price: Free