Artificial Intelligence (AI) Use in the Health Care Context: Ethical Challenges and Solutions
Dr Svetlana De Vos, Senior Lecturer, Australian Institute of Business
Dr Mulyadi Robin, Associate Professor, Australian Institute of Business
Kathryn Clews, PhD Candidate, Australian Institute of Business
AI Paradoxes
In today’s economy, perhaps nothing has captured the public’s imagination as much as artificial intelligence (AI): the ability of machines to carry out tasks by displaying intelligent, human-like behaviour, through capabilities such as machine learning, computer vision, speech recognition, and natural language processing. However, its deployment raises a myriad of ethical challenges. AI embodies paradoxes that promise scientific miracles, efficiency, and freedom on one hand, yet foreshadow human dependence, passivity, and even obsolescence on the other (Du & Xie, 2021).
At the core of such paradoxes are ethical challenges associated with the value creation enabled by AI. For instance, a recent KPMG survey (2020) of 751 US business decision-makers found that 91% of healthcare respondents believe AI implementation is increasing patient access to care. Yet about 75% of healthcare insiders are concerned that AI could threaten the security and privacy of patient data. Despite these privacy concerns, Melissa Edwards, managing director of digital enablement at KPMG, stated that “the pace with which hospital systems have adopted AI and automation programs has dramatically increased since 2017 with major healthcare providers moving ahead with pilots or programs in these areas. The medical literature is showing support of AI’s power as a tool to help clinicians.”
Indeed, a report by Accenture lists the top three AI applications with the greatest value potential in healthcare. For example, AI-assisted robotic surgeries have been shown to significantly reduce complications and the length of a patient’s hospital stay. Virtual nurse assistants (e.g., Care Angel) can collect patient-reported vitals and wellbeing measures; track, trend, and identify risks in real time; and perform other important tasks, including disease management, prevention, and medication adherence. Healthcare practitioners are also increasingly reliant on connected wearables as remote monitoring and diagnostic prediction tools for patients’ health, using Internet of Things technology that relies on AI to assess and disseminate private and sensitive data. Furthermore, automation of administrative workflow aims to transform clinical care and administrative operations via AI-based cognitive computing (e.g., IBM Watson).
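To make the virtual nurse assistant example concrete, the sketch below shows one simple way patient-reported vitals could be flagged for risk in real time. The thresholds, measure names, and function names are illustrative assumptions, not taken from Care Angel or the Accenture report:

```python
# Illustrative sketch only: thresholds and field names are hypothetical,
# not drawn from any real virtual nurse assistant.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),    # assumed acceptable resting range
    "systolic_bp_mmhg": (90, 140),  # assumed acceptable systolic range
    "temperature_c": (35.5, 38.0),  # assumed acceptable body temperature
}

def flag_risks(vitals: dict) -> list[str]:
    """Compare patient-reported vitals against normal ranges and
    return human-readable alerts for any out-of-range values."""
    alerts = []
    for measure, (low, high) in NORMAL_RANGES.items():
        value = vitals.get(measure)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{measure} = {value} outside [{low}, {high}]")
    return alerts

# Example usage: a reading with an elevated heart rate
print(flag_risks({"heart_rate_bpm": 128, "temperature_c": 36.8}))
# ['heart_rate_bpm = 128 outside [50, 110]']
```

Real systems would of course layer trend analysis and clinical validation on top of simple range checks; the point of the sketch is only that such tools continuously process sensitive physiological data, which is what raises the privacy stakes discussed below.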
Ethical Challenges
While AI-based advancements can reduce human error and boost overall outcomes, the lack of human oversight and the potential for machine error can lead to the mismanagement of care, and they also raise issues around data privacy, which has been identified as one of the biggest challenges to AI-dependent health care (Zaidi, 2018). Moreover, drawing upon prior research on the moral significance of technology and the emerging literature on AI, there are additional key ethical issues around AI biases, ethical design, consumer privacy, cybersecurity, unemployment, individual autonomy, and wellbeing (Du & Xie, 2021). Similarly, Murtarelli, Gregory and Romenti (2021) identified ethical challenges linked to the use of AI-based chatbots as non-moral and non-independent agents managing non-real, or para-conversations, with consumers. Because AI-based chatbots lack human qualities such as judgement, empathy, and discretion, they cannot detect instances in which actions could and should be changed in light of these factors. Overall, the authors convey that chatbots make decisions, not judgements, and such decisions are based on algorithms calibrated in ways that benefit the algorithm owner or commissioner, hence eroding patients’ trust (Brown, 2018).
Likewise, an AI-based chatbot has raised ethical concerns in Britain, where, in order to increase efficiency and reduce costs, Babylon Health used a chatbot to provide diagnostic advice on common ailments to patients, without human interaction with clinical staff. However, for Babylon Health to fulfil its vision of making healthcare providers more efficient with smarter technology, it will probably need to train its AI algorithms on more patient records (with implications for consumer privacy), as some clinical staff and doctors have raised red flags, claiming that the ‘AI bot’s advice was often inaccurate or totally wrong’ (Olson, 2018). Other AI-enabled products (e.g., Sensoria’s virtual AI coach) could similarly lead to privacy violations, due to the continuous capturing of consumer information (e.g., activities, locational information, biometrics, and preferences), oftentimes without consumer consent or knowledge. Further, cybersecurity vulnerabilities associated with the deployment of AI (see Truong et al., 2020; Yampolskiy et al., 2016) may lead to data breaches.
Solutions: Corporate Social Responsibility Perspective
Du and Xie (2021) suggest that companies need to engage in corporate social responsibility (CSR) to shape the future of ethical AI, and to use appropriate mechanisms and tools that put AI ethics to work. They identify several CSR opportunities to address these AI-based ethical concerns at different levels. For instance, at the product level, they propose greater transparency about AI training data and algorithms; enhanced quality-control processes for AI-based algorithms; and a human-in-the-loop approach, in which consequential model outputs are reviewed by a person before being acted upon.
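As a minimal sketch of what a human-in-the-loop safeguard might look like in practice, consider routing a diagnostic model’s low-confidence outputs to a clinician instead of acting on them automatically. The confidence threshold, labels, and function names below are illustrative assumptions, not drawn from Du and Xie (2021):

```python
from dataclasses import dataclass

# Illustrative sketch only: the output format, threshold, and labels
# are hypothetical assumptions, not any published specification.
CONFIDENCE_THRESHOLD = 0.90  # below this, defer to a human reviewer

@dataclass
class Prediction:
    label: str         # e.g. "low risk" or "high risk"
    confidence: float  # model's probability for the predicted label

def triage(prediction: Prediction) -> str:
    """Act automatically only on high-confidence outputs; otherwise
    escalate to a clinician, keeping a human in the loop."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-file: {prediction.label}"
    return "escalate: clinician review required"

# Example usage
print(triage(Prediction("low risk", 0.97)))   # auto-file: low risk
print(triage(Prediction("high risk", 0.62)))  # escalate: clinician review required
```

The design is deliberately conservative: automation handles only the cases the model is most certain about, and every borderline case defaults to human judgement.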
At the consumer level, it is of paramount importance to maintain fair and transparent privacy policies and to alleviate any privacy concerns. Furthermore, companies are advised to offer commensurate benefits for sharing data, to internalise the cost of cybersecurity, and to embed built-in security features in AI products. Importantly, in cases of a data breach, companies should have an immediate response strategy to minimise damage. Others advocate that companies should limit data collection to only what is strictly necessary, with temporal limits imposed on how long they keep the protected data (Safdar et al., 2020).
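One way such data-minimisation and retention limits might be operationalised is sketched below. The 30-day window, record fields, and function names are hypothetical assumptions for illustration, not drawn from Safdar et al. (2020):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: retain protected records for at most 30 days.
RETENTION_PERIOD = timedelta(days=30)

def is_expired(collected_at: datetime) -> bool:
    """Return True if a record has outlived the retention period."""
    return datetime.now(timezone.utc) - collected_at > RETENTION_PERIOD

def purge_expired(records: list[dict]) -> list[dict]:
    """Drop records past their retention window. Each record is assumed
    to carry a timezone-aware 'collected_at' timestamp."""
    return [r for r in records if not is_expired(r["collected_at"])]

# Example usage: one fresh record, one stale record
now = datetime.now(timezone.utc)
records = [
    {"patient_id": "A1", "collected_at": now - timedelta(days=2)},
    {"patient_id": "B2", "collected_at": now - timedelta(days=90)},
]
print([r["patient_id"] for r in purge_expired(records)])  # ['A1']
```

Running such a purge on a schedule turns the retention policy from a written promise into an enforced property of the system.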
Finally, at the society level, Du and Xie (2021) propose providing reskilling and continuous learning opportunities to employees to address AI-related ethical challenges. The authors also advocate a corporate role in raising awareness of digital addiction, using standardised research frameworks and checklists (Vollmer et al., 2020), and in offering tools that support the five pillars of human wellbeing (i.e., positive emotions, engagement, relationships, meaning, and accomplishment).