This article was taken from: https://www.theguardian.com/commentisfree/2018/jul/26/tech-healthcare-ethics-artifical-intelligence-doctors-patients
By Ivana Bartoletti
It perhaps shouldn’t come as a surprise that Matt Hancock, the new health and social care secretary, made technology the theme of his first big speech in the new job. The former culture secretary is a renowned tech enthusiast and was the first MP to launch his own app.
Hancock is right that technology has great potential to improve the quality of our healthcare – and save money into the bargain. But it won’t be a panacea, and it raises a number of issues our society must deal with now.
Take artificial intelligence: there are already numerous ways it is enhancing the medical profession. Examples include robot-assisted surgery, virtual nursing assistants that could reduce unnecessary hospital visits and lessen the burden on medical professionals, and technologies that enable independent living by spotting changes in a person’s usual behaviour that may require medical attention. But AI also poses clear ethical challenges.
Two examples are worth focusing on in particular: the way in which changing approaches to medical knowledge could affect the doctor-patient relationship, and the ethics of how patients’ data gets used.
Until recently, patients would go to a doctor, explain their symptoms and the doctor would attempt to provide a diagnosis. But increasingly, patients now arrive having done their research online, all set to suggest (or even insist on) a diagnosis, to which the medic has to respond. Doctors tell me that this game of catch-up and partial role-reversal is already skewing the relationship of trust. In addition, we now have algorithm-based diagnostics, which means medical knowledge no longer rests solely on what doctors themselves have studied and learned.
Algorithms can support decision-making by medical professionals, and often outperform doctors. We are seeing this in cancer detection and other fields, where close analysis of patient data can deliver much more precise and personalised medicine, and earlier diagnosis. For example, analysing an individual’s touch strokes on their mobile phone could reveal Parkinson’s, because a person’s texting speed declines over time as the condition develops.
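To make the texting-speed idea concrete, here is a minimal, hypothetical sketch in Python of the kind of trend analysis such a system might run: fit a simple least-squares slope to a user’s weekly average typing speed and flag a sustained decline for clinical review. The data, function names and threshold are invented for illustration only; real keystroke-dynamics research uses far richer models, and nothing here is a diagnostic tool.

```python
# Hypothetical illustration only: a minimal trend check on weekly average
# typing speed (characters per second). The threshold below is arbitrary.
from statistics import mean

def typing_speed_slope(weekly_speeds):
    """Least-squares slope of typing speed across consecutive weeks."""
    n = len(weekly_speeds)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(weekly_speeds)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, weekly_speeds))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Invented example: speeds drifting downwards over ten weeks
speeds = [4.1, 4.0, 4.0, 3.9, 3.8, 3.8, 3.7, 3.6, 3.6, 3.5]
slope = typing_speed_slope(speeds)
if slope < -0.02:  # illustrative cut-off, not a clinical threshold
    print(f"Typing speed declining ({slope:.3f} cps/week): flag for clinical review")
```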
As we start to see these possibilities as fantastic rather than fantastical, we must also be aware of unintended consequences. What impact would doctors’ growing reliance on algorithms have on the body of medical knowledge? And how do we mitigate the risk that algorithms may not be sensitive to everything going on in a patient’s life? A patient with high levels of anxiety and stress, for example, may be affected in ways no machine is able to capture. Algorithms will also have to be assessed to ensure they are not biased against certain groups, especially as they make decisions that may have very long-lasting consequences for individuals.
There are also ethical issues around the use of patient data. Analysing data at scale allows us to spot what we haven’t yet noticed, and to approach prevention and disease management in a very different way. By collecting a huge amount of data, including on people’s habits, we will be able to identify medical conditions far earlier than we can now, and put preventive measures in place for children and family members.
But patients must have a say in how their data is used. The fact that something is technically possible does not mean we must do it. Ultimately, patients will need to decide whether, and to what extent, they want to be observed and predicted, and how they want their personal information to be used. A tick-box exercise will not suffice: compliance alone won’t build confidence and trust in the machine.
There are many challenges ahead for AI, and the trickiest is getting the ethics right. Machines are machines, and we must not humanise them. When we bring them in, it must be to enhance our humanity, and that can only happen if both patients and doctors are engaged in shaping the future of medicine.
• Ivana Bartoletti is a privacy and data protection professional, and chairs the Fabian Women’s Network