Believing the bot: examining what makes us trust large language models (LLMs) for political information

Affective polarisation, a measure of hostility towards members of opposing political parties, has been widening divisions among Americans. Our research investigates the potential of Large Language Models (LLMs), with their unique ability to tailor responses to users' prompts in natural language...

Bibliographic Details
Main Authors: Deng, Nicholas Yi Dar, Ong, Faith Jia Xuan, Lau, Dora Zi Cheng
Other Authors: Saifuddin Ahmed
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/174384