AI for health: Is there a regulatory gap?

Johan Ordish

11 July 2018

This article first appeared in the June 2018 edition of Digital Health Legal.

Artificial intelligence (‘AI’) has the potential to transform our health sector: it promises to advance how we research, how we diagnose, and how we ultimately treat patients. What of its accompanying regulation? Can the existing law of agency, liability, and data protection accommodate such a technological change? The recent House of Lords report, ‘AI in the UK: ready, willing and able?’, touches on such questions. Coming to no definitive conclusion, the report digests conflicting opinion: some think AI will constitute a new paradigm for health law, while others envision a series of limited, iterative changes. Who’s right? Johan Ordish, Senior Policy Analyst (Law & Regulation) at the PHG Foundation, a health policy think tank that is part of the University of Cambridge, argues in this article that the most pressing concern is not any unique challenge AI might pose, but rather the gaps that already exist in our laws of liability and medical devices.

AI for health is now a reality. On the horizon are drug repurposing algorithms that find new tricks for old drugs, natural language processing algorithms that triage NHS 111 calls, and image analysis software that assists radiologists by automatically identifying anatomical features1, to name but a few. Whilst talk of AI applications is prone to hype, AI is undoubtedly coming to healthcare, and soon. Healthcare systems and their associated regulation will have to be ready for such transformative change.

The House of Lords Select Committee on Artificial Intelligence report ‘AI in the UK: ready, willing and able?’ (‘the report’) addresses this issue, in part. While the report considers the development and place of AI in the UK in general, it also examines the healthcare sector and the legal difficulties the technology might exacerbate. What follows considers the report’s conclusions and the evidence that underpins them, briefly discussing whether our law of liability, medical devices, and data protection can accommodate such a shift.

AI exceptionalism?

‘AI for health’ often conjures ideas of robot doctors and self-aware algorithms. For now, though, AI is rather more ordinary. Instead of algorithms that possess broad cognitive ability (artificial general intelligence), most near-use applications are a form of machine learning2 – that is, AI possessing only task-specific intelligence. There are three lessons from this. First, there is no clearly defined, bright-line test to distinguish between AI and non-AI algorithms – AI is not categorically different. Second, near-use machine learning does not pose the kind of existential risks to humanity that books like Superintelligence warn about3; on the contrary, such systems use narrow methods to complete specific tasks. Third, to quote a pithy tweet from Hugh Harvey: “The biggest impact of AI in medicine won’t come from making machines do human-like tasks, but from removing machine-like tasks from humans.”

It is one question to ask whether regulation can accommodate AI, and another to consider whether we need overarching regulation for AI in particular. The report spends some time considering this latter question; many of those submitting evidence to the original inquiry argued that AI needs neither overarching regulation nor an overarching regulator. While there have been some calls for such a regulator, most of these concern the existential risks that near-use machine learning simply does not pose. The most nuanced approaches recognise that regulation ought to proceed not according to broad categories, but according to risk4. Just as it would make no sense to regulate cats and tigers in the same way because both belong to the biological family Felidae, we should not saddle all of AI with the same degree of regulation just because it is AI. In short, we should regulate specific AI uses, being sensitive to their risk, not AI in broad brushstrokes. With this in mind, is our law on liability and medical devices flexible enough to accommodate these new AI uses?

Product liability and planes

Many contributors to the report put great faith in product liability regimes to capture and compensate anyone harmed by AI. Evidence from Brad Love of The Alan Turing Institute singles out product liability as an example of current law that might flex to cover AI:

“Existing laws and regulations may adequately cover AI. For example, currently, an aircraft manufacturer would be liable for a malfunctioning autopilot system that led to a loss of life. If, instead, the autopilot were ‘artificially intelligent,’ which systems developed decades ago could be considered, the same responsibilities would hold5.”

The broad idea here is that the mere addition of ‘AI’ to the mix ought not to change the way in which an algorithm is regulated. While the spirit of the idea is certainly correct, it assumes that consumer protection law is currently fit for purpose when it comes to protecting consumers from defective software. This is not a safe assumption, especially with regard to healthcare software.

The position of regular software within the regime of strict liability is already uncertain. Specifically, it is questionable whether software itself counts as a ‘product’ under the Consumer Protection Act 1987 and its parent Directive6. The standard example used to illustrate this uncertainty happens to be autopilot software, with legal opinion differing on whether software that incorrectly registers a mountain’s height would itself count as a product7. Apart from this, under the Sale of Goods Act 1979, software does not count as ‘goods’ but a disk containing software does – an awkward situation, out of step with increasing digitisation. It seems the foundations of product liability for software in general are already under strain.

The worry regarding AI and product liability is less that AI will create new regulatory gaps, and more that it might widen cracks that already exist. The current system of product liability has clearly been jerry-rigged to include software. This is even more of a problem for the many AI algorithms, such as risk prediction tools and image analysis algorithms, that have no obvious relationship with a physical good but nevertheless have the potential to cause significant loss8. If the foundations of product liability for software are shaky, AI algorithms are likely only to exacerbate that uncertainty. Readiness for AI here may be more remedial in nature: repairing the foundations of software product liability in general.

Clinical negligence and the black box

In the short term, most AI for health will be assistive – AI to help rather than replace clinicians. Given this, clinicians remain the primary target of clinical negligence claims. This has been the position for clinical assistance software designed to detect drug contraindications (conflicts)9. The attitude is that the final decision still lies with the clinician, and so liability must follow suit. Nevertheless, as algorithms that assist in clinical decision-making become more prominent and the way they work becomes ever more inscrutable, we ought to question whether the clinician should remain the primary source of liability.

Legal academics such as Nicholson Price raise concerns about the practice of ‘black box medicine10.’ Black box medicine is ‘the use of opaque computational models to make decisions related to health care.’ This is not a problem unique to machine learning. Still, machine learning as a method of computation is often opaque, because machine learning algorithms really do learn. Consider a histopathology image analysis algorithm created to identify potentially cancerous samples.

These algorithms are trained by being fed labelled data – in this case, samples where histopathologists have already identified areas of concern11. While the histopathologists know why they label sections of a slide the way they do, what the algorithm finds significant in all this data is largely mysterious. In this way, machine learning differs from traditional programming, where the developer manipulates a defined set of variables. The idea of black box medicine should make us rethink the borders of clinical liability. Machine learning algorithms may indeed be more accurate than clinicians in some respects, but their outputs come with limited reasoning attached12. Human pathologists can typically explain why they reached their professional conclusion; machine learning has only a limited ability to do the same. It is not difficult to imagine a future in which clinicians follow the advice of this kind of algorithm but have little idea why it diagnoses or recommends particular interventions for patients. If this comes to pass, keeping clinicians as the primary source of liability is to require them to be omniscient in an increasingly opaque job.
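To make the training process concrete, the sketch below shows roughly what ‘being fed labelled data’ amounts to. It is a minimal illustration in Python, using synthetic numerical features and a generic scikit-learn classifier as a stand-in for a real histopathology pipeline (which would work on image data with far more complex models); none of the names or numbers come from an actual system.

```python
# Minimal sketch of supervised learning on labelled data.
# The features and labels are synthetic stand-ins, not real histopathology data:
# the point is that the model learns from expert-labelled examples but returns
# bare predictions with no reasoning attached.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for image-derived features of tissue patches, labelled by pathologists
# (1 = area of concern, 0 = benign).
X = rng.normal(size=(1000, 64))             # 1,000 patches, 64 features each
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # labels driven by a rule hidden from the model

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                 # the algorithm 'learns' from the labelled examples

print("held-out accuracy:", model.score(X_test, y_test))
print("prediction for one new patch:", model.predict(X_test[:1]))
# The output is a bare label; which features the model found significant,
# and why, is not part of what it reports.
```

The accuracy figure is exactly the kind of evidence such a model can offer; an account of its reasoning is not.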

Inscrutable medical devices

Depending on their risk classification, medical devices entering the US and EU markets are subject to validation requirements. AI algorithms are not exempt from such requirements. In fact, the recent EU Medical Devices Regulation and In Vitro Diagnostic Medical Devices Regulation make clear that software by itself may count as a medical device13. Yet, how can bodies like the MHRA and FDA provide assurances that the device is safe when so much of its reasoning is hidden14?

Broadly, the problem of validating AI algorithms is a problem in three parts. First, as mentioned, AI is often opaque, so it is difficult to fully examine its outputs; if we are to validate such devices, much of the validation may have to hang on accuracy alone. Second, future uses might include highly personalised predictions or recommendations, so the regular method of clinical trials may not be feasible as the relevant patient population becomes ever smaller. Third, the accuracy and reliability of these algorithms shift according to the data they ingest, so AI may often be more of a moving target to validate than more static algorithms. These problems are certainly not unique to AI, yet it is clear that the growing use of AI for health might challenge traditional methods of testing medical technology. In part, this has been recognised by the FDA, with Commissioner Gottlieb recently announcing that the FDA is developing a new regulatory framework to promote innovation and support the use of AI-based technologies15.

Explanation and the GDPR

Machine learning is fuelled by data, and this data is often in turn regulated by the General Data Protection Regulation (‘GDPR’), now in force. One aspect of the new Regulation is the supposed right to explanation. The very existence of this right is controversial, the right being constructed by reading together the provisions on automated processing and the right to be informed16. If it does exist, it will require (in certain circumstances) that developers explain the outputs of their algorithms – how the algorithm came to the conclusion it did.

The existence of a right to explanation seems incompatible with black box algorithms. If machine learning is necessarily opaque, it may be hard to reconcile the demands of the right with the limitations of the technology. Indeed, if the right to explanation requires that developers provide information on the ‘logic and significance’ of their algorithm, the right may prove especially hard for machine learning algorithms to satisfy17. While this is not necessarily the case (see the counterfactual explanation tools sketched below), the legal uncertainty leaves developers in a tricky position18.
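To show that opacity and explanation are not necessarily irreconcilable, here is a minimal sketch in the spirit of Wachter’s counterfactual explanations. The classifier, the features, and the simple greedy search are all hypothetical simplifications rather than the method proposed in the literature; the point is only that an ‘explanation’ can take the form of the smallest change to the input that would have flipped the decision, with no need to open the black box itself.

```python
# Sketch of a counterfactual explanation: report how the input would have had to
# differ for the model's decision to change, without describing the model's internals.
# Model, features, and search strategy are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                       # e.g. three clinical measurements
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)

model = LogisticRegression().fit(X, y)              # stands in for any opaque classifier

def counterfactual(model, x, target, step=0.05, max_iter=2000):
    """Greedily nudge one feature at a time until the predicted class equals `target`."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        # try a small move in each feature and keep the one that most raises P(target)
        candidates = []
        for i in range(len(x_cf)):
            for delta in (-step, step):
                trial = x_cf.copy()
                trial[i] += delta
                candidates.append((model.predict_proba(trial.reshape(1, -1))[0][target], trial))
        x_cf = max(candidates, key=lambda c: c[0])[1]
    return None

x = X[0]
original_class = model.predict(x.reshape(1, -1))[0]
x_cf = counterfactual(model, x, target=1 - original_class)

print("original input:      ", np.round(x, 2))
print("counterfactual input:", np.round(x_cf, 2))
print("change needed:       ", np.round(x_cf - x, 2))
# The 'explanation' is the difference: what would have had to be otherwise
# for the decision to change.
```

Whether tools of this kind would satisfy a right to explanation as the courts eventually interpret it remains open; the sketch simply illustrates the approach the footnoted literature has in mind.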

Less a problem of missing regulation than an issue of legal interpretation, uncertainty over developers’ core responsibilities to explain their algorithms might stifle a burgeoning industry. Put simply, if machine learning algorithms cannot lawfully process data, there is no machine learning. This challenge must be met if the AI industry is to flourish.

Finding gaps without exceptionalism

AI is not necessarily different from other algorithms, and the mere fact that an algorithm uses AI does not mean it ought to be regulated differently. Nevertheless, we do not have to be AI exceptionalists to find that our current law has gaps that might widen with the introduction of such technology. It is best to acknowledge the fragility of some of our law before any technological upheaval occurs. As demonstrated, our medical device law and our laws on liability and data protection require such attention. In this way, preparation for AI might simply mean addressing the weaknesses and irregularities already present in our regulation.

1. See Healx, Babylon Health, InnerEye

2. House of Lords, ‘AI in the UK: ready, willing and able?’ 2018, 15.

3. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

4. Yeung, Karen. Corrected oral evidence: Artificial intelligence.

5. The Alan Turing Institute – Written Evidence (AIC0139)

6. Peel, Edwin, and James Goudkamp. Winfield and Jolowicz on Tort. Sweet & Maxwell, 2014.

7. Stapleton, Jane. Product Liability. Cambridge University Press, 1994. 333-4.

8. For example, InnerEye.

9. Miller, Randolph A., and Sarah M. Miller. ‘Legal and regulatory issues related to the use of clinical software in health care delivery.’ Clinical Decision Support. 2007. 423-444.

10. Price, Nicholson. ‘Black-box medicine.’ Harv. JL & Tech. 28 (2014): 419.

11. Veta, Mitko, et al. ‘Breast cancer histopathology image analysis: A review.’ IEEE Transactions on Biomedical Engineering 61.5 (2014): 1400-1411.

12. Wachter, Sandra, Brent Mittelstadt, and Chris Russell. ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.’ 2017.

13. Article 2(1) Medical Devices Regulation, Article 2(2) In Vitro Diagnostic Medical Devices Regulation.

14. Price, Nicholson. ‘Artificial Intelligence in Health Care: Applications and Legal Implications.’ SciTech Lawyer 14 (2017).

15. Gottlieb, Scott. Transforming FDA’s Approach to Digital Health. Academy Health’s 2018 Health Datapalooza. 26 April 2018.

16. Selbst, Andrew D., and Julia Powles. ‘Meaningful information and the right to explanation.’ International Data Privacy Law. 7.4 (2017): 233-242.

17. If the right does exist, one way forward might be Wachter’s counterfactual explanation.

18. Wachter, Counterfactual Explanations without Opening the Black Box.
