The perils and promises of fact-checking with large language models
Automated fact-checking, using machine learning to verify claims, has grown vital as misinformation spreads beyond human fact-checking capacity. Large language models (LLMs) like GPT-4 are increasingly trusted to write academic papers, lawsuits, and news articles and to verify information, emphasizi...
Main Authors: Dorian Quelle, Alexandre Bovet
Format: Article
Language: English
Published: Frontiers Media S.A., 2024-02-01
Series: Frontiers in Artificial Intelligence
Online Access: https://www.frontiersin.org/articles/10.3389/frai.2024.1341697/full
Similar Items
- Checking the Fact-Checkers: Analyzing the Content of Fact-Checking Organizations as Initiatives for Hoax Eradication in Indonesia
  by: Detta Rahmawan, et al.
  Published: (2023-07-01)
- Combatting rumors around the French election: the memorability and effectiveness of fact-checking articles
  by: Lisa K. Fazio, et al.
  Published: (2023-07-01)
- Fact-checking platforms in Spanish. Features, organisation and method
  by: Ángel Vizoso, et al.
  Published: (2019-03-01)
- Using NLP for Fact Checking: A Survey
  by: Eric Lazarski, et al.
  Published: (2021-07-01)
- Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
  by: Shailza Jolly, et al.
  Published: (2022-10-01)