What ChatGPT means for healthcare and medical research
Innovation, robotics, digital technology and improved diagnostics, prevention and therapeutics can change healthcare for the better.
The sanctity of the doctor-patient relationship is the cornerstone of the healthcare profession. This protected space is steeped in tradition – the Hippocratic oath, medical ethics, professional codes of conduct and legislation. But all of these are poised for disruption by digitisation, emerging technologies and “artificial” intelligence (AI). These developments also raise ethical, legal and social challenges.
Since the floodgates were opened on ChatGPT (Generative Pre-trained Transformer) in 2022, bioethicists have been contemplating the role this new “chatbot” could play in healthcare and health research.
Early adopters have started using ChatGPT to assist with mundane tasks like writing sick certificates, patient letters and letters asking medical insurers to pay for specific expensive medications for patients. In other words, it is like having a high-level personal assistant to speed up bureaucratic tasks and increase time for patient interaction.
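As a rough sketch of what this looks like in practice, the Python snippet below drafts such an insurer letter through a chatbot API. It assumes the openai Python package, a placeholder model name and a hypothetical draft_insurer_letter helper; it is an illustration, not a recommended clinical workflow, and any output would still need clinician review.

```python
# Illustrative sketch only: drafting a medical-insurer letter with a chatbot API.
# Assumes the "openai" Python package and an OPENAI_API_KEY environment variable;
# the model name and prompt wording are placeholders, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_insurer_letter(patient_ref: str, medication: str, indication: str) -> str:
    """Ask the model for a first draft; a clinician must review it before use."""
    prompt = (
        f"Draft a formal letter to a medical insurer requesting coverage of "
        f"{medication} for patient reference {patient_ref}, indicated for "
        f"{indication}. Do not invent clinical details."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Note the opaque reference ("patient_ref") rather than a patient name:
# identifiable details should never be sent to an external service.
```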
But it could also assist in more serious medical activities such as triage (choosing which patients can get access to kidney dialysis or intensive care beds), which is critical in settings where resources are limited. And it could be used to enrol participants in clinical trials.
Incorporating this sophisticated chatbot in patient care and medical research raises a number of ethical concerns. Using it could lead to unintended and unwelcome consequences. These concerns relate to confidentiality, consent, quality of care, reliability and inequity.
It is too early to know all the ethical implications of the adoption of ChatGPT in healthcare and research. The more this technology is used, the clearer the implications will get. But questions regarding potential risks and governance of ChatGPT in medicine will inevitably be part of future conversations.
Potential ethical risks
Using ChatGPT carries the risk of privacy breaches. Successful and efficient AI depends on machine learning, which requires that data are constantly fed back into the neural networks of chatbots. If identifiable patient information is fed into ChatGPT, it becomes part of the information the chatbot may use in future. In other words, sensitive information is “out there” and vulnerable to disclosure to third parties. The extent to which such information can be protected is not clear.
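One safeguard often suggested is to strip obvious identifiers from text before it ever reaches an external chatbot. The minimal sketch below illustrates the idea with simple regular expressions; the patterns and the redact helper are illustrative assumptions, and real clinical de-identification requires far more robust tooling.

```python
# Minimal illustration of redacting obvious identifiers before sending text
# to an external service. Real clinical de-identification needs specialised
# tools; these regexes are simplistic placeholders.
import re

REDACTIONS = [
    (re.compile(r"\b\d{13}\b"), "[ID-NUMBER]"),  # e.g. a 13-digit national ID
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]


def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


note = "Patient Jane Doe, ID 8001015009087, phone 081-234-5678, seen on 12/03/2023."
print(redact(note))
# -> "Patient Jane Doe, ID [ID-NUMBER], phone [PHONE], seen on [DATE]."
# The name still leaks through: pattern-matching alone is not enough, which is
# exactly why the confidentiality risk described above is hard to eliminate.
```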
Confidentiality of patient information forms the basis of trust in the doctor-patient relationship. ChatGPT threatens this privacy – a risk that vulnerable patients may not fully understand. Consent to AI-assisted healthcare could be suboptimal: patients might not understand what they are consenting to, and some may not be asked for consent at all. Medical practitioners and institutions may therefore expose themselves to litigation.
Another bioethics concern relates to the provision of high-quality healthcare, which is traditionally based on robust scientific evidence. Using ChatGPT to generate evidence has the potential to accelerate research and scientific publications. However, ChatGPT in its current form is static – its training data has a cut-off date, so it cannot provide the latest references in real time. At this stage, “human” researchers are doing a more accurate job of generating evidence. More worrying are reports that it fabricates references, compromising the integrity of the evidence-based approach to good healthcare. Inaccurate information could compromise the safety of healthcare.
Good quality evidence is the foundation of medical treatment and medical advice. In the era of democratised healthcare, providers and patients use various platforms to access information that guides their decision-making. But ChatGPT may not be adequately resourced or configured at this point in its development to provide accurate and unbiased information.
Technology built on data that under-represents people of colour, women and children produces biased and harmful results. The COVID-19 pandemic taught us this: some brands of pulse oximeter, used to measure blood oxygen levels, gave inaccurate readings for patients with darker skin.
It is also worth thinking about what ChatGPT might mean for low- and middle-income countries. The issue of access is the most obvious. The benefits and risks of emerging technologies tend to be unevenly distributed between countries.
Currently, access to ChatGPT is free, but this will not last. Monetised access to advanced versions of this language chatbot is a potential threat to resource-poor environments. It could entrench the digital divide and global health inequalities.
Governance of AI
Unequal access, the potential for exploitation and possible harm-by-data underline the importance of having specific regulations to govern the health uses of ChatGPT in low- and middle-income countries.
Global guidelines are emerging to ensure governance in AI. But many low- and middle-income countries are yet to adapt and contextualise these frameworks. Furthermore, many countries lack laws that apply specifically to AI.
The global south needs locally relevant conversations about the ethical and legal implications of adopting this new technology to ensure that its benefits are enjoyed and fairly distributed.

Source: The Conversation
Did you know?
ChatGPT is a language model that has been trained on massive volumes of internet text and attempts to imitate human writing.
Advantages of AI
• Reduces human error
• Automates repetitive tasks
• Handles big data with ease
• Enables faster decision-making with continuous availability
• Powers digital assistants
• Mitigates risks