The House of Lords Select Committee on Artificial Intelligence (AI) was appointed in June 2017 to consider the economic, ethical and social implications of advances in artificial intelligence. Its report, AI in the UK: ready, willing and able?, published last week, sets out no fewer than 74 recommendations drawn from the committee’s synthesis of 223 pieces of written evidence, 22 oral evidence sessions, and several visits to centres which develop or study AI.
The committee set out to address five key questions: the effect of AI on everyday lives, potential opportunities for the UK, possible risks and implications, engaging the public, and ethical issues presented by the development and use of AI. The inquiry was unsurprisingly wide-ranging, given that AI is likely to pervade many aspects of our lives and many different industries and sectors. In recognition that many of the issues raised by AI when deployed in healthcare are representative of wider issues with the use of this technology, an evidence session (to which PHG Foundation contributed) and an entire chapter of the report were dedicated to artificial intelligence in healthcare.
AI: Is the UK ready, willing, and able?
Yes – according to the Lords Select Committee – the UK is in a strong position to be among the world leaders in the development of artificial intelligence. However, achieving this requires careful planning, coordination of activity, efforts to address areas of uncertainty, and the establishment of guiding principles for a shared ethical AI framework. Crucially, the report notes that ‘the UK must seek to actively shape AI’s development and utilisation, or risk passively acquiescing to its many likely consequences’. The same applies to those UK sectors, including healthcare, on which AI is likely to have an impact. The time is ripe to consider and shape the delivery of AI for positive societal impact.
AI – good for health?
The potential for AI to be used for the public good in healthcare is significant and compelling. This potential is acknowledged in the House of Lords report, in Dame Wendy Hall’s independent review for the Government, and in the Life Sciences industrial strategy. AI could help accelerate and improve disease diagnosis, support medical image analysis, facilitate drug discovery, enable more efficient models of care, and support a more ‘personalised’ approach to managing the health of patients and individuals.
Challenges related to the emerging use of AI are magnified in healthcare. One is the need to build public trust in the use of the potentially sensitive and personal health data often required to train and optimise AI algorithms. Linked to this is the need to mitigate the risk of unintended ‘algorithmic bias’ – where AI systems could make unfair decisions which reflect wider societal prejudices, or make inaccurate predictions because poorly representative datasets were used to develop the algorithms. Clearly the consequences of algorithmic bias will vary depending on the context, but in healthcare especially, erroneous decisions risk causing harm. The committee’s view was that where decisions could have a substantial impact on an individual’s life, the transparency and intelligibility of decisions made by AI systems are fundamental.
Making AI work for health and health data work for AI
The anticipated benefits of AI for healthcare cannot be realised without relevant and appropriate datasets with which to develop and improve the algorithms. As one witness told the committee, ‘Data is everything in machine learning [a form of AI]’.
On the topic of data, it was encouraging to see several points raised in PHG Foundation’s written and oral evidence echoed in the committee’s recommendations – around building public trust, digitising health records, and the value of national datasets. The report’s healthcare chapter underscores the importance of maintaining public trust in the safe and secure use of data. There is also a recognition that data held by the NHS is ‘a unique source of value for the nation’ and that, when it is shared, it should be shared in a manner that allows its value to be recouped.
A view reiterated by several witnesses was that NHS records are a valuable, arguably unique, resource. Harnessed effectively, these datasets could be immensely powerful in accelerating the benefits of AI for health.
The committee called for NHS Digital and the National Data Guardian for Health and Care to publish a framework for the sharing of NHS data. They request that the framework set out the considerations for sharing data in an ‘appropriately anonymised form’ and take account of the need to ensure patients are ‘made aware of the use of their data and given the option to opt-out’. On the surface these are reasonable suggestions, but data opt-outs could have serious implications for the representativeness of the resulting datasets, and therefore for algorithmic bias. It will also be important to consider the many circumstances in which anonymisation is not feasible, or significantly diminishes the utility of the datasets. Cross-sector expertise is essential for establishing frameworks around AI for healthcare that can support the development of robust tools while safeguarding public trust and confidence.
It is right to be optimistic about the potential of AI for health in this country, but at the same time pragmatic about the pace and extent of progress that can be made in the face of the technical and societal challenges ahead. This comprehensive and considered report by the House of Lords AI Select Committee clearly sets out the priorities for addressing those challenges and seizing the benefits of AI.
The PHG Foundation project My Healthy Future is delving into the implications of emerging technologies, including AI, for the next generation of personalised healthcare and prevention.