Regulating algorithms in healthcare

12 January 2018

Algorithms are the fundamental basis of all computerised tasks, and their use in the diagnosis and treatment of patients is advancing across the delivery of healthcare services. What regulation governs their use? Is this regulation adequate in light of advances in machine learning for health? Does the regulatory framework strike a fair balance between the need for medical innovation and patient safety?

Healthcare technology is changing. The use of algorithms for increasingly important tasks is spreading across the healthcare sector. On the horizon is a new generation of machine learning algorithms that promise to inform diagnosis and assist in treatment.

Is our system of law and regulation ready for the challenge this technological shift presents?  

Governing algorithms - why it matters

As the use of algorithms becomes embedded in medicine and AI is introduced into routine clinical practice, there is an urgent need to revisit how algorithms are regulated in healthcare. Why? First, there is concern over the quality and safety of some health apps and algorithms being released onto the direct-to-consumer market. Second, with the introduction of the EU Medical Devices Regulation and the EU In Vitro Diagnostic Medical Devices Regulation, software developers could be caught out by new forms of regulation. Third, while there is much talk about ‘regulation of software in health’ or ‘regulation of AI’, little work has been done to provide a holistic understanding of the regulation that covers these tools.

With this project, we seek to address these concerns.

 

Video: Johan Ordish talking about the project

Our objectives

This work on regulating algorithms in healthcare aims to clarify:

  • How algorithms in healthcare are regulated
  • How algorithms in healthcare should be regulated

Regulating algorithms in healthcare examines how these tools are regulated, from the data used to train an algorithm to the question of who is liable if something goes wrong. We consider four general spheres of regulation:

  • Algorithms as data (the General Data Protection Regulation and the Data Protection Act 2018)
  • Algorithms as medical devices (the Medical Devices Regulation and In Vitro Diagnostic Medical Devices Regulation)
  • Algorithms as intellectual property (including patent, copyright, and trade secret protections)
  • Algorithms as liability (clinical negligence, product liability, statutory compensation schemes)

Timeline

  • 2019

    Dissemination event

    Early 2019 - reports released

  • September 2018

    Workshop 2: Intellectual property and liability - Agenda

    Briefing note: Legal liability for machine learning in healthcare

  • March 2018

    Workshop 1: The GDPR and IVDR in practice - Agenda

    Briefing note: What is the GDPR?

    Briefing note: What is the IVDR?

 

What we are doing

Working with the Centre for Law, Medicine and Life Sciences at the University of Cambridge, we’ve convened two workshops bringing together academics, legal practitioners, regulators, developers, and clinicians. 

In early 2019 we will release reports detailing our findings on each sphere of regulation, followed by a dissemination event presenting those findings in more detail to invited stakeholders.

Workshop 1 – Regulating algorithms in healthcare – the GDPR and IVDR in practice

Algorithms as data

  • Does the GDPR contain a right to explanation?
  • Might counterfactual explanation satisfy such a right?

Algorithms as medical devices

  • How does the MDR/IVDR intended purpose test compare with the FDA’s approach?
  • Is the MDR/IVDR intended purpose test flexible enough to regulate apps and algorithms used for both healthcare and wellbeing testing?
  • How are artificial intelligence applications validated and kept under surveillance under the MDR/IVDR?

Two briefing notes, What is the GDPR? and What is the IVDR?, were released to inform the debate.

Workshop 2 – Regulating algorithms in healthcare – liability and intellectual property

Co-organised with the Centre for Advanced Studies in Biomedical Innovation Law at the University of Copenhagen and the Centre for Law, Medicine and Life Sciences at the University of Cambridge, this workshop covered the following topics.

Algorithms as intellectual property

  • Patent protection of computer-implemented inventions
  • Free and open source software in healthcare

Algorithms as liability

  • What predictive analytics promises to do for healthcare
  • How AI might interact with medical malpractice law

A briefing note, Legal liability for machine learning in healthcare, was released to inform the debate.

This blog post provides more detail about the day.

If you would like to know more about this project, please contact Johan Ordish or Alison Hall.

 

(Page updated 15 November 2018)