15 June 2020
A series of new reports on 'black box medicine' and the interpretability of machine learning in healthcare has been released.
The research behind this new legal, regulatory and ethical analysis was funded by Wellcome.
Continued improvements in computing and AI, especially machine learning, are beginning to benefit health services thanks to their ability to make sense of highly detailed information. From supporting healthcare professionals in making diagnoses and assessing risk, to optimising treatment decisions and patient management, further expansion of machine learning in healthcare seems assured.
Like other complex algorithms, machine learning can produce ‘black box medicine’, where conclusions that may influence care decisions are reached without patients or health professionals understanding why. Though such systems can support medical research and clinical practice in many ways, the volume of data they use and their complexity may mean that how they reach decisions cannot be explicitly understood, or even adequately explained. This raises a number of legal and ethical issues.
To further understanding of the black box medicine problem, the PHG Foundation was awarded seed funding from the Wellcome Trust to examine interpretability in the context of healthcare and relevant regulation. In clarifying the requirements for transparency and explanation, we aim to improve patient and public trust in these technologies and better ensure that the benefits for healthcare are realised for all.
As part of this project the PHG Foundation has produced a detailed set of reports for legal, regulatory and health policy audiences, examining the issues raised by black box medicine. We have also created a dedicated Interpretability by design framework for developers of machine learning models for healthcare. The framework is based on our extensive research for this project and sets out a clear process for reviewing and optimising interpretability.