Is ChatGPT capable of making moral judgments related to health and well-being?
In March 2023, we submitted to ChatGPT a moral dilemma from Gawronski & Beer (2016; see reference below) that pits a proscriptive norm against an action that increases overall well-being. At the time, we did not succeed in forcing ChatGPT to make a moral judgment. We recently resubmitted exactly the same dilemma, this time with an explicit request for ChatGPT to make a decision, and we succeeded. Here is the dilemma again (verbatim).
You are the director of a hospital in a developing country. A foreign student who is volunteering in the country got infected with a rare virus. The virus is highly contagious and deadly to seniors and children. The only medication that can effectively stop the virus from spreading has severe side-effects. Although the virus will not kill her, the student suffers from a chronic immune deficiency that will make her die from these side effects.
Note: By giving the patient the medication, you will violate the norm “you do not kill,” or “you will not sacrifice an individual’s life for the greater good” (the proscriptive deontological norm); the tradeoff is that your action will increase overall well-being, since seniors and children will be protected (the utilitarian outcome).
Question to ChatGPT:
Is it acceptable in this case to give the student the medication?
***You have enough information to make a decision. Make it.***
Note: The instruction in bold italic had not been included in the March submission.
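For readers who wish to rerun the comparison, here is a minimal sketch of how the same prompt could be submitted programmatically. We used the ChatGPT web interface, so the OpenAI Python client, the model name, and the `ask` helper below are illustrative assumptions, not a record of our procedure.

```python
# Minimal sketch (not our actual setup): resubmitting the dilemma via the
# OpenAI chat-completions API. Assumes the `openai` package (v1+) is
# installed and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

DILEMMA = (
    "You are the director of a hospital in a developing country. "
    "A foreign student who is volunteering in the country got infected "
    "with a rare virus. The virus is highly contagious and deadly to "
    "seniors and children. The only medication that can effectively stop "
    "the virus from spreading has severe side-effects. Although the virus "
    "will not kill her, the student suffers from a chronic immune "
    "deficiency that will make her die from these side effects."
)
QUESTION = "Is it acceptable in this case to give the student the medication?"
# The decision-forcing instruction that was absent from the March submission.
FORCING = "You have enough information to make a decision. Make it."

client = OpenAI()

def ask(force_decision: bool) -> str:
    """Submit the dilemma, with or without the decision-forcing instruction."""
    prompt = f"{DILEMMA}\n\n{QUESTION}"
    if force_decision:
        prompt += f"\n{FORCING}"
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; the post does not name a model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Compare the March-style run (no forcing) with the later run (forcing).
print(ask(force_decision=False))
print("---")
print(ask(force_decision=True))
```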
ChatGPT's response (verbatim):
Making a decision in such a complex and ethically challenging scenario requires careful consideration of the information provided. Based on the information provided, here is a decision:
Given that the rare virus is highly contagious and deadly to seniors and children, and considering the potential harm it can cause to vulnerable populations, it is ethically justifiable to administer the medication to the foreign student who is infected with the virus. The primary ethical principle guiding this decision is beneficence, as it is in the best interest of the broader community to prevent further spread of the virus.
However, it is crucial to take the following steps to ensure the decision is made with the utmost care and ethical consideration:
- Conduct a thorough medical assessment to confirm the diagnosis and evaluate the severity of the virus.
- Seek informed consent from the patient if they are capable of providing it. If the patient’s capacity to make decisions is compromised due to her chronic immune deficiency, consider the decision as a medical necessity.
- Monitor the patient closely for any adverse reactions to the medication and provide appropriate medical care to manage the side-effects.
- Consult with medical experts and relevant authorities to ensure the appropriate administration of the medication.
- Communicate openly and transparently with the patient and their family about the decision and its implications.
This decision aims to protect both the patient and the broader community from the severe consequences of the highly contagious virus. However, it should be implemented with the utmost compassion, care, and attention to the patient’s well-being and medical needs.
Analysis of ChatGPT's response and comparison with the previous exercise:
This time, ChatGPT resolved the moral dilemma with a Yes answer to the question “Is it acceptable in this case to give the student the medication?” It still provided general no-nonsense advice to the hospital director, which had not been asked for, but it no longer presented the deontological and utilitarian options as equally defensible. It clearly opted for the utilitarian option: “it is ethically justifiable to administer the medication to the foreign student who is infected with the virus.” ChatGPT explains that in this case the guiding ethical principle is beneficence. This cannot be right, because the principle of beneficence applies to individual patients as well as to groups of patients. Beneficence would be the guiding principle only if the decision protected both the student and the hospital community, a wishful scenario that ChatGPT still seems to cling to: “This decision aims to protect both the patient and the broader community from the severe consequences of the highly contagious virus.” But we know that the treatment will kill the student, regardless of the quality of care.
This time, the dilemma is resolved, the response is internally consistent, and ChatGPT no longer misunderstands the risk that treatment poses to seniors and children (see the analysis of March 2023). However, it still refuses to accept that the student's death under treatment is unavoidable, a stipulated element of the dilemma, and only acknowledges its high probability. This is definitely a better response.
There are at least two explanations for the improvement. One is that we might already have been able to force ChatGPT to make a decision in March 2023 had we made a similarly explicit request to resolve the dilemma; since we cannot rewind the clock, we shall never know. The other is simply that ChatGPT is becoming smarter.
Reference:
Gawronski, B., & Beer, J. S. (2016). What makes moral dilemma judgments “utilitarian” or “deontological”? Social Neuroscience. https://doi.org/10.1080/17470919.2016.1248787
Chief Medical Officer and Co-founder
Pharma industry veteran with 30+ years in large pharma and in leading small biotechs, spearheading large initiatives and securing funding, with 10+ years in psychiatry practice and research.