Believing the bot: examining what makes us trust large language models (LLMs) for political information

Affective polarisation, the measure of hostility towards members of opposing political parties, has been widening divisions among Americans. Our research investigates the potential of Large Language Models (LLMs), with their unique ability to tailor responses to users' prompts in natural language, to foster consensus between Republicans and Democrats. Despite their growing usage, academic focus on user engagement with LLMs for political purposes is scarce. Employing an online survey experiment, we exposed participants to stimuli explaining opposing political views and how the chatbot generated responses. Our study measured participants' trust in the chatbot and their levels of affective polarisation. The results suggest that explanations increase trust among weak Democrats but decrease it among weak Republicans and strong Democrats. Transparency diminished trust only among strong Republicans. Notably, perceived bias in ChatGPT mediated the relationship between partisanship strength and trust for both parties, and between partisanship strength and affective polarisation for Republicans. Additionally, strength of issue involvement significantly moderated the bias-trust relationship. These findings indicate that LLMs are most effective when they address issues of strong personal relevance and when the chatbot's political neutrality is emphasised to users.

Bibliographic Details
Main Authors: Deng, Nicholas Yi Dar, Ong, Faith Jia Xuan, Lau, Dora Zi Cheng
Other Authors: Saifuddin Ahmed
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
School: Wee Kim Wee School of Communication and Information
Degree: Bachelor's degree
Subjects: Arts and Humanities; Transparency; Trust; LLM; ChatGPT; Polarisation; AI; Republican; Democrat; Justification; Politics
Online Access: https://hdl.handle.net/10356/174384