
Artificial intelligence, algorithms, autonomous devices and ethics

[Group photo]

From May 14 to 15, 2024, the Advanced Course in Bioethics, titled “Ethics: a critical but supportive friend”, was conducted by the Institute for Research & Development in Health & Social Care (IRD) in collaboration with the National Science Foundation (NSF) and held at the NSF auditorium in Colombo. As part of the course, the event explored the ethical issues, problems and dilemmas posed by Artificial Intelligence (AI), especially in relation to modern medicine.

The speakers were Prof. Athula Sumathipala, Director of the IRD; Prof. Jonathan Ives, Professor of Empirical Bioethics at the University of Bristol and a member of the UK’s National Institute for Health and Care Excellence Highly Specialised Technology Evaluation Committee; Dr Janaka Pushpakumara, Dean, Senior Lecturer in Family Medicine and Family Physician, Faculty of Medicine and Allied Sciences, Rajarata University of Sri Lanka; and Dr Buddhika Fernando, Lead in Bioethics at the IRD and Master Trainer in Bioethics, ETTC of the UNESCO Bioethics programme.

Given the current focus on and interest in AI, we wish to share a summary of the speech delivered by Prof. Jonathan Ives, who has expertise in, and has carried out research into, its ethical aspects.

AI, described as technology that emulates human intelligence through reasoning and learning, was the core topic. AI systems can autonomously process information and produce outputs such as recommendations or actions. Prof. Ives outlined AI’s varying levels of autonomy, from systems that strictly follow human instructions to those with evolving functionality that modify their goals autonomously.

Professor Ives provided an overview of AI’s potential healthcare applications in triage, diagnostics, treatment recommendations, and gatekeeping access to care, illustrating each with a case study that highlighted both AI’s potential benefits and its ethical challenges.

AI in Triage

One prominent example of AI in triage was the now defunct Babylon chatbot used in the UK. This AI system acted as a symptom-checker chatbot, allowing users to input their symptoms and receive advice on next steps. The potential benefits of this kind of technology are significant, especially in contexts where access to primary healthcare is limited. It could provide timely advice and potentially prevent serious health issues from escalating.
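
To make the idea concrete, below is a minimal, purely illustrative sketch of rule-based triage. It is not Babylon’s actual logic, which relied on far more sophisticated probabilistic models; the symptom rules and thresholds here are invented for illustration.

# Minimal rule-based triage sketch (illustrative only; a real symptom
# checker such as Babylon's used much richer probabilistic models).
RED_FLAGS = {"chest pain", "difficulty breathing", "severe bleeding"}

def triage(symptoms: set) -> str:
    # Any red-flag symptom routes the user straight to emergency care.
    if symptoms & RED_FLAGS:
        return "Seek emergency care immediately."
    # Several concurrent symptoms suggest a prompt GP appointment.
    if len(symptoms) >= 3:
        return "Book a GP appointment within 48 hours."
    return "Self-care advice; contact a GP if symptoms persist."

print(triage({"headache", "chest pain"}))  # -> emergency advice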

AI in Diagnostics

AI’s capabilities in pattern recognition make it particularly effective in diagnostics. In some studies, for instance, AI systems have demonstrated superior performance in interpreting medical images compared to human radiologists. This advancement may be crucial in an overstretched healthcare system where the demand for diagnostic services often exceeds supply. AI can speed up the diagnostic process and reduce the burden on human radiologists, supporting quicker and more accurate diagnoses. This not only improves patient outcomes but also optimises the use of healthcare resources.
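
As a rough illustration of the pattern-recognition principle, the sketch below trains an off-the-shelf classifier on scikit-learn’s built-in digits dataset as a toy stand-in for medical images; real diagnostic systems use far larger models and clinically validated data.

# Toy stand-in for image-based diagnostics: classify small images of
# handwritten digits. Real medical-imaging AI is vastly more complex.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 images flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")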

AI in Treatment Recommendations

AI can also assist in recommending treatment pathways. IBM’s Watson for Oncology is a prime example. This AI system can analyse a vast array of variables, including patient demographics, tumour characteristics, treatments, and outcomes, to discover patterns that might be invisible to human oncologists. Watson can stay updated with the latest medical research, a task impossible for any individual doctor due to the sheer volume of new information. This enables Watson to suggest treatment options that are tailored to the individual patient’s condition. Such AI systems might be particularly valuable in areas with limited access to specialised medical professionals, as they can provide a level of expertise that might otherwise be unavailable.

AI in Access Control

AI is also being used to address problematic prescription-drug use in the USA. Systems like Narxcare analyse prescription data to identify patients who might be at risk of drug misuse. By tracking patterns across multiple states, Narxcare can flag individuals whose behaviour is indicative of prescription drug misuse. Clinicians then use this information to make decisions about access to controlled substances, potentially reducing the incidence of drug misuse and its associated health risks. This application of AI demonstrates how technology can be leveraged to manage public health issues effectively.
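
The general idea of pattern-based flagging can be sketched as follows. This is emphatically not Narxcare’s actual algorithm; the record fields and thresholds are assumptions chosen only to illustrate the approach.

# Toy sketch of cross-state prescription-pattern flagging. NOT Narxcare's
# real algorithm; fields and thresholds are illustrative assumptions.
from collections import defaultdict

# Hypothetical prescription records: (patient_id, state, prescriber_id).
records = [
    ("p1", "OH", "dr-1"), ("p1", "KY", "dr-2"), ("p1", "IN", "dr-3"),
    ("p2", "OH", "dr-1"), ("p2", "OH", "dr-1"),
]

states_seen = defaultdict(set)
prescribers_seen = defaultdict(set)
for patient, state, prescriber in records:
    states_seen[patient].add(state)
    prescribers_seen[patient].add(prescriber)

# Flag patients who fill prescriptions across many states or prescribers,
# leaving the final decision to the reviewing clinician.
for patient in states_seen:
    if len(states_seen[patient]) >= 3 or len(prescribers_seen[patient]) >= 3:
        print(f"{patient}: flagged for clinician review")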

Ethical Issues in AI Use

Ethical issues, problems and dilemmas are three different things. The ethical decision-making process for an ethical issue may lead to an easy resolution, since there is no conflict between principles. An ethical dilemma, on the other hand, typically exists when two or more ethical principles or standards conflict with each other.

Professor Ives highlighted several ethical issues arising from the use of AI in healthcare: bias, control, liability, and erosion of human skills and care.

Bias

AI systems can inherit biases from their training datasets. For example, if an AI system learns from predominantly white, male health data, its recommendations might not be as accurate for other demographics. The speaker cited a case where the Babylon app provided different medical advice for the same symptoms based on the user’s gender, highlighting the critical need for diverse, accurate and representative training data. It is therefore essential to implement measures that ensure diversity in training data and to use techniques that detect and correct biases in AI systems.

“So this is basically the garbage in, garbage out principle. We have to make sure our training datasets are good enough. Not just good enough, but actually the best possible.”
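
One practical expression of that principle is a simple representation audit run before training. The sketch below is a minimal illustration; the column names and threshold are hypothetical and do not come from the talk.

# Minimal sketch: audit a (hypothetical) clinical training dataset for
# demographic imbalance before training. Column names and the threshold
# are illustrative assumptions, not from the talk.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         min_share: float = 0.2) -> list:
    """Return the groups in `column` whose share of rows falls below `min_share`."""
    shares = df[column].value_counts(normalize=True)
    return [group for group, share in shares.items() if share < min_share]

# Toy data standing in for a real training set: 85% male, 15% female.
records = pd.DataFrame({"sex": ["male"] * 85 + ["female"] * 15})
underrepresented = audit_representation(records, "sex")
if underrepresented:
    print(f"Warning: under-represented groups in 'sex': {underrepresented}")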

Control

The increasing reliability of AI systems can lead to decreased human oversight. For instance, in the case of Narxcare, clinicians might rely heavily on AI scores when deciding whether to prescribe drugs, potentially leading to the AI making de facto decisions. This erosion of human control raises concerns about accountability and the quality of care. AI should augment, not replace, human expertise. Prof. Ives stressed the importance of continuous education and training for healthcare professionals so that they can use AI tools effectively without losing their critical thinking skills.

“As they become more familiar, as they become a less vigilant user, so that the AI itself is actually making the decision and the clinician is just signing off on it. So the AI then ends up having control, it ends up making the decisions.”
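
A common design response to this risk, sketched below with invented names and thresholds, is to keep the clinician explicitly in the loop: the system only ever surfaces advisory scores, and uncertain cases are routed for full human review.

# Human-in-the-loop sketch: the AI score is advisory only, and uncertain
# cases are explicitly routed for full review. The threshold and the
# RiskAssessment structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    patient_id: str
    risk_score: float   # model output in [0, 1]
    confidence: float   # model's confidence in its own score, in [0, 1]

def route_for_review(assessment: RiskAssessment,
                     confidence_floor: float = 0.8) -> str:
    """Return a routing note; the clinician always makes the final call."""
    if assessment.confidence < confidence_floor:
        return "Full clinical review required (model uncertain)."
    # Even confident scores are framed as advice, never as a decision.
    return f"Advisory score {assessment.risk_score:.2f}; clinician to decide."

print(route_for_review(RiskAssessment("p-001", risk_score=0.91, confidence=0.55)))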

Liability

The opaque nature of AI decision-making complicates accountability. Clinicians must decide whether to trust AI recommendations without fully understanding the underlying reasoning. If an AI’s recommendation leads to harm, determining whether the clinician or the AI developer is responsible becomes challenging. Prof. Ives stressed the need for robust legal frameworks to govern the use of AI in healthcare. This includes standards for AI validation, deployment, and use, as well as guidelines to address liability and ensure patient safety.

“There’s always an inevitable atrophy of vigilance in the human overseer. And that means, if we can foresee it, we might want to take steps to try and minimise it.”

Erosion of Skills and the Erosion of Care

Quoting another scholar, Prof. Ives then introduced the problem of moral crumple zones: “the crumple zone in a car is meant to protect the human driver. The moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. The concept is both a challenge to and an opportunity for the design and regulation of human-robot systems.” Prof. Ives went on to say that when technology companies put an AI in a clinical space and say ‘this is advisory only, you’re the doctor, you have control’, they are putting the clinician in a position where they are at risk: they are using the overseeing clinician as a moral crumple zone.

The potential for AI to replace human oversight in some areas raises concerns about the erosion of human care. AI might offer efficiency, but it lacks the empathetic and nuanced understanding a human professional can provide, and it cannot replace the empathy and communication skills of human clinicians. Ensuring that AI tools do not detract from the clinician’s ability to connect with patients is essential.

In conclusion, Prof. Ives highlighted the dual nature of AI in healthcare: it offers significant benefits while posing serious ethical challenges. As AI technology continues to evolve, ongoing ethical scrutiny and robust safeguards are essential to ensure that its integration into healthcare systems enhances rather than undermines patient care and trust. The session encouraged participants to engage critically with these issues, to help them understand the complexities of AI and ethics in healthcare.

For the full speech, please visit the IRD’s YouTube channel: https://www.youtube.com/watch?v=8F9tN6Ere8U

This report was compiled by Dr. R.P. Wakishta Arachchi, Research Assistant at the IRD.
