Regulating algorithms in healthcare: IP and liability

Johan Ordish

31 October 2018


The last six months have seen an explosion of interest in AI. The House of Lords released their AI in the UK: ready, willing and able? report, the Department for Digital, Culture, Media and Sport set up the fledgling Centre for Data Ethics and Innovation, and Theresa May outlined the potential of AI to improve patient diagnosis. This ‘AI spring’ has not gone unnoticed in the healthcare sector, with Innovate UK announcing substantial funding to develop digital pathology centres and private enterprises such as Microsoft Research’s InnerEye continuing to gain momentum. In short, 2018 is the year the UK consolidated its developing AI healthcare industry.

The emergence of an industry should give us pause to consider whether our healthcare system and its associated regulatory scheme are ready for such a change. It is heartening to see that the Department of Health and Social Care have already started to consider such issues with their recently released Initial code of conduct for data-driven health and social care technology. The PHG Foundation are also examining the impact of AI and algorithms on healthcare. Specifically, our Regulating algorithms in healthcare project sets out to map the regulatory context for algorithms in healthcare, to better understand the challenges and opportunities faced by innovators and developers.

The project

The Regulating Algorithms project has two aims. First, to understand how algorithms in healthcare are regulated, from the data that trains them to who is responsible if something goes wrong. Second, to evaluate how these algorithms should be regulated, that is, what regulation should be modified, added, or removed. The project comprises two workshops and a dissemination event. The first workshop considered the General Data Protection Regulation (GDPR) and medical device law; the second considered intellectual property and liability. The dissemination event, for invited developers and industry representatives, is scheduled for 2019.

These workshops consider the regulation of algorithms in healthcare from the following perspectives:

  • Algorithms as data (GDPR)
  • Algorithms as medical devices (the EU Medical Devices and In Vitro Diagnostic Medical Devices Regulations)
  • Algorithms as intellectual property (patent, copyright, and trade secret protection)
  • Algorithms and liability (clinical negligence, strict liability regimes, including product liability)

Focusing on intellectual property and liability, the second workshop was held on 6 September at Emmanuel College, Cambridge. Co-organised by the Centre for Advanced Studies in Biomedical Innovation Law at the University of Copenhagen (CeBIL), the Centre for Law, Medicine and Life Sciences at the University of Cambridge (LML), and the PHG Foundation, the workshop drew together a multidisciplinary audience of academics, industry representatives, and legal practitioners.

IP speakers included:

  • Prof Mateo Aboy (Cambridge) on the patentability of algorithms
  • Dr Enrico Bonadio (City, University of London) on Free and open source software and IP

IP panel included:

  • Prof Timo Minssen (CeBIL)
  • Iain Mitchell QC (Tanfield Chambers)
  • Andrew Katz (Moorcrofts)

Liability speakers included:

  • Prof Glenn Cohen (Harvard) on Liability for AI
  • Prof Nicholson Price (Michigan) on Medical malpractice and black-box medicine

Liability panel included:

  • John Buyers (Osborne Clarke)
  • Dr Alberto Gutierrez (formerly of the FDA)
  • Dr Danielle Belgrave (Microsoft)

What follows is a brief description of selected highlights from the workshop.

Intellectual property – striking a balance

Speakers, panellists, and delegates covered an impressively wide range of topics related to IP and software. Across this breadth of ideas, discussion centred on the tension between IP facilitating innovation and IP encouraging opaque practices in the healthcare sector.

Mateo Aboy compared the EU and US positions on whether algorithms, or ‘computer-implemented inventions’, are patent-eligible subject matter. His meticulous analysis of the US Supreme Court case Alice v CLS Bank concluded that, after Alice, an algorithm may or may not be patentable, depending on the process it uses and whether that process has a further ‘technical effect’.

Enrico Bonadio gave a lively and applied description of free and open source software (FOSS) and the various IP issues this kind of licence might face. His contentious suggestion that closed-source, proprietary software might be increasingly unethical in a health system that should encourage transparency and third-party scrutiny of software generated heated debate.

Other issues explored on the day included:

  • The impact of other rights on IP, including the European Commission’s proposed ‘data producer’s right’
  • Alternatives to patent protection, including FOSS and trade secrecy, and the possible interactions between these forms of protection
  • The ways in which the use of FOSS could be expanded, and the extent to which this is desirable

Algorithms and liability

The workshop considered liability from multiple perspectives: the liability of clinicians, the scheme of liability that should apply to AI, the FDA’s approach, and the view of a machine learning researcher. As one perceptive delegate reminded us, potential patient benefit is the ultimate goal of using AI for health, a theme that ran through the discussion of liability for AI.

Glenn Cohen painted a cogent picture of what predictive analytics promises to do for healthcare. In particular, he outlined scenarios for how AI might be integrated into the healthcare system, and set out a range of possible schemes for dealing with the resulting liability, drawing parallels with vaccine compensation schemes.

Nicholson Price gave a considered account of the current state of clinical decision support software and medical malpractice law. He offered a forward-looking outline of how AI might fit into medical malpractice, arguing that malpractice law typically does a poor job of adapting to new technology.

The liability panel applied their considerable expertise to a wide range of issues, including:

  • Differentiating between various kinds of machine learning
  • Outlining how the FDA might treat AI for healthcare
  • Exploring whose responsibility it should be to ensure machine learning is ethical, safe, and directed toward patient-centred care

Overall, the workshop generated many ideas that might inform the dynamic policy and regulatory landscape for AI in healthcare. We are extremely grateful to our co-organisers, speakers, panellists, and delegates for such engaging and thought-provoking discussion. Reports providing a fuller analysis of the first and second workshops will be released in early 2019.

The PHG Foundation would like to acknowledge the significant financial contribution of CeBIL and their benefactor Novo Nordisk towards holding this workshop. The workshop would not have been possible without their generous assistance.
