Moving forward the debate on AI regulation

Alison Hall

24 May 2018

 

In a momentous week in which the EU General Data Protection Regulation comes into force and the UK Data Protection Act receives Royal Assent, the publication of the Algorithms in Decision-Making report from the House of Commons Science and Technology Committee seems a timely addition.

Hot on the heels of Theresa May attesting to the importance of artificial intelligence in driving the Industrial Strategy, the fundamental challenge highlighted by the Committee in their report is to secure a regulatory framework ‘which facilitates and encourages innovation but which also maintains vital public trust and confidence.’ The extent to which such a policy can simultaneously meet these two objectives is a tension that lies at the heart of the report, and one which the PHG Foundation is exploring in some detail in our Regulating Algorithms in Healthcare project.

The new report sets out four categories of recommendations intended to build trust and confidence in the regulatory framework for AI.

Enhancing transparency and providing an explanation

Transparency is often seen as a means of ensuring accountability. The report cites evidence from a number of contributors who stress the importance of transparency, both in promoting the acceptability of algorithmic decisions and in refuting bias. Rightly, the Committee acknowledges that ‘transparency can take different forms – how an algorithmic decision is arrived at, or visibility of the workings inside the “black box”’. However, the Committee does not seem to have explored how this distinction impacts on the obligations of developers and data processors. For example, in machine learning, if it is not possible to provide an explanation of the workings inside the black box, does this prohibit certain types of data processing from taking place at all? The question is particularly acute for automated decision-making regulated by Article 22 of the General Data Protection Regulation, where consent or another exemption is absent and a decision based solely on automated processing produces a legal effect on an individual.

Even if it is possible to provide an explanation of black box workings, the report does not address whether providing such an explanation or description does anything to promote trust. Nor does it establish whether providing more information necessarily increases trust: indeed, in other settings where information is provided, such as to support the consent process in healthcare, more information may even be counterproductive. Perhaps a more useful approach would be to ascertain whether the information is ‘meaningful’ (although this may be difficult to define) and whether the information giver is regarded as ‘trustworthy’.

Building trustworthiness

If building trustworthiness is preferable to merely enhancing trust, the report makes various suggestions for mechanisms to achieve it. The proposed Centre for Data Ethics and Innovation will play a central role – and it will be kept busy, with a remit to ‘evaluate accountability tools – principles and ‘codes’, audits of algorithms, certification of algorithm developers and charging ethics boards with the oversight of algorithmic decisions’. However, as the name implies, its proposed function is not only to create best practice standards that promote high data quality and demonstrably lack bias, but also to streamline and promote investment and innovation. That the Centre is responsible both for ethics and for investment potentially creates tension. The terms of reference and independence of this body will be crucial to enhancing its credibility.

Streamlining innovation and securing public benefit

The PHG Foundation is a strong advocate of using technological innovation to improve health, so the report’s proposals to realise the great value captured in databases held by public authorities such as the NHS are welcome. Creating more consistent approaches and procurement models will help ensure that public services are seen to benefit from these partnerships.

Central Government leading by example

The report acknowledges the key role that Government has in developing policy and also in acting as an exemplar for the development of good practice. The statement that ‘The Government should play its part in the algorithms revolution in two ways. It should continue to make public sector datasets available, not just for ‘big data’ developers but also algorithm developers. We welcome the Government’s proposals for a ‘data trusts’ approach to mirror its existing ‘open data’ initiatives’ is highly positive. However, the following recommendation raises some concerns: ‘Secondly the Government should produce, publish, and maintain a list of where algorithms with significant impacts are being used within Central Government, along with projects underway or planned for public service algorithms, to aid not just private sector involvement but also transparency’.

Algorithms – as mathematical tools for sorting and ordering information – are ubiquitous across multiple applications, and it is difficult to see how such a requirement would be either feasible or useful. This policy seems redolent of the unreasonable ‘genetic exceptionalism’ attitude that emerged in response to novel genetic tests becoming available – a mechanism for resolving public apprehension by treating genetic information differently from other types of personal or medical information and creating arguably disproportionate legal protections on this basis.

This new suggestion smacks of ‘algorithmic exceptionalism’ – an approach that could easily result in laws that are not future-proofed and quickly become outdated as technologies are integrated into practice.

One major omission from the Science and Technology Committee report is any acknowledgement of the role of developers and commercial companies in delivering the rapid technological changes that will drive improvements in safety and public services. This will require proportionate incentives and opportunities to work collaboratively with regulators, policy makers and data subjects for mutual benefit. The next phase of our Regulating Algorithms in Healthcare project, which encompasses intellectual property and liability, should offer some constructive solutions.

Find out more about how we are working to promote understanding of the complex regulatory landscape for algorithms in healthcare
