Quantifying the uncertainty of LLM hallucination spreading in complex adaptive social networks

Abstract: Large language models (LLMs) are becoming a significant source of content generation in social networks, which are typical complex adaptive systems (CAS). However, due to their hallucinatory nature, LLMs produce false information that can spread through social networks, which will impact th...

Bibliographic Details
Main Authors: Guozhi Hao, Jun Wu, Qianqian Pan, Rosario Morello
Format: Article
Language: English
Published: Nature Portfolio, 2024-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-66708-4