Do Large Language Models Show Human-like Biases? Exploring Confidence—Competence Gap in AI

This study investigates self-assessment tendencies in Large Language Models (LLMs), examining whether their patterns resemble human cognitive biases such as the Dunning–Kruger effect. LLMs, including GPT, BARD, Claude, and LLaMA, are evaluated using confidence scores on reasoning tasks. The models provide self-as...


Bibliographic Details
Main Authors: Aniket Kumar Singh, Bishal Lamichhane, Suman Devkota, Uttam Dhakal, Chandra Dhakal
Format: Article
Language: English
Published: MDPI AG, 2024-02-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/15/2/92