ThoughtSource: A central hub for large language model reasoning data

Abstract: Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to ‘hallucinate’ facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
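
As a concrete illustration of the chain-of-thought prompting technique the abstract refers to, the short Python sketch below assembles a few-shot CoT prompt. This is a generic sketch of the technique, not code from the ThoughtSource library; the helper name build_cot_prompt is hypothetical, and the exemplar is the well-known tennis-ball example from the CoT literature rather than a ThoughtSource record.

```python
# Minimal sketch of chain-of-thought (CoT) prompting: the model is shown a
# worked example that verbalizes intermediate reasoning steps in natural
# language, so it imitates step-by-step reasoning on the new question.
# Exemplar and helper name are illustrative, not from ThoughtSource itself.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model emits a chain of thought
    before its final answer, then append the new question."""
    return FEW_SHOT_EXEMPLAR + f"Q: {question}\nA:"

# The resulting string is sent to an LLM; the generated reasoning chain is
# the kind of record a CoT meta-dataset such as ThoughtSource standardizes.
print(build_cot_prompt("A baker makes 24 rolls and sells 15. How many are left?"))
```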

Bibliographic Details
Main Authors: Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, Matthias Samwald
Format: Article
Language: English
Published: Nature Portfolio, 2023-08-01
Series: Scientific Data
ISSN: 2052-4463
Online Access: https://doi.org/10.1038/s41597-023-02433-3

Author Affiliations:
Simon Ott, Konstantin Hebenstreit, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Matthias Samwald: Institute of Artificial Intelligence, Medical University of Vienna
Valentin Liévin, Ole Winther: Section for Cognitive Systems, Technical University of Denmark
Christoffer Egeberg Hother: Department of Clinical Immunology, Copenhagen University Hospital