Consultation response: stakeholder consultation on draft AI Ethics Guidelines

4 February 2019

The PHG Foundation is supportive of the aims of the High-Level Expert Group on AI in formulating draft guidance that sets a benchmark for high standards which can be adopted throughout Europe. However, we have some concerns that this approach is predicated upon an exceptionalist view of AI.

Our experience of the regulation of genetic and genomic tests suggests strong parallels between the proposed uses of AI and genomics: both technologies can generate predictive and sensitive data which could be used in discriminatory ways. On the other hand, many genetic and genomic tests are uninformative, are routine, and do not yield sensitive data. Regulating all genetic/genomic tests on the basis that they are sensitive does not adequately distinguish between the different uses to which such tests might be put.

The same arguments can be made in relation to AI. Many applications of AI technologies pose no prospect of harm or benefit. We have some concerns that the tone of the ethics guidance implies that AI is necessarily exceptional. We would like to see more consideration of the view that some applications of AI may be routine and may yield uninformative data. In such cases it might be neither proportionate nor rational to impose an exceptionalist regulatory framework.

There are, of course, some applications which require rigorous oversight, multidisciplinary expertise, and careful transparency. Mandating the same level of oversight for all AI applications, however, risks burdening the sector with excessive regulation.

To read more, please download the consultation.
