ChatGPT’s inconsistent moral advice influences users’ judgment
Abstract: ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it could improve users' moral judgment and decisions. Unfortunately, ChatGPT's advice is not consistent. Nonetheless, we find in an experiment that it influences users' moral judgment, even when they know they are being advised by a chatbot, and that they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users' moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users' digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.
Main Authors: Sebastian Krügel, Andreas Ostermaier, Matthias Uhl
Format: Article
Language: English
Published: Nature Portfolio, 2023-04-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-023-31341-0
Collection: Directory of Open Access Journals (DOAJ)
ISSN: 2045-2322
Author affiliations:
Sebastian Krügel: Faculty of Computer Science, Technische Hochschule Ingolstadt
Andreas Ostermaier: Department of Business and Management, University of Southern Denmark
Matthias Uhl: Faculty of Computer Science, Technische Hochschule Ingolstadt
Citation: Scientific Reports, vol. 13, no. 1, pp. 1-5 (2023-04-01). DOI: 10.1038/s41598-023-31341-0