ThoughtSource: A central hub for large language model reasoning data
Abstract: | Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to 'hallucinate' facts, and there a... |
Main Authors: | Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, Matthias Samwald |
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2023-08-01 |
Series: | Scientific Data |
Online Access: | https://doi.org/10.1038/s41597-023-02433-3 |
Similar Items
- Dissociating Language and Thought in Human Reasoning
  by: John P. Coetzee, et al.
  Published: (2022-12-01)
- Irreconcilable relation of "Reason" and "Faith" in Kierkegaard's Thought
  by: Mohammad Asghari
  Published: (2009-09-01)
- The status of reason in Shia thought and understanding of Islam
  by: Alizade Amanalah
  Published: (2020-01-01)
- Differences of the Four Sunni Schools of Thoughts & Their Reasons
  by: SZIC KU
  Published: (2019-02-01)
- Stop reasoning! When multimodal LLMs with chain-of-thought reasoning meets adversarial images
  by: Wang, Z., et al.
  Published: (2024)