Detecting hallucinations in large language models using semantic entropy
Large language model (LLM) systems, such as ChatGPT [1] or Gemini [2], can show impressive reasoning and question-answering capabilities but often ‘hallucinate’ false outputs and unsubstantiated answers [3,4]. Answering unreliably or without the necessary information prevents adoption in diverse fields, with …
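The method named in the title can be sketched briefly: sample several answers to the same question, group them into meaning-equivalence clusters via bidirectional entailment, and measure entropy over the probability mass of those clusters rather than over raw strings. The sketch below is illustrative only, not the authors' implementation: the `entails` predicate is a crude stand-in for a natural-language-inference model, and all function and variable names are assumptions.

```python
import math

def entails(a: str, b: str) -> bool:
    # Placeholder for a bidirectional-entailment / NLI check; here a crude
    # normalised string comparison so the sketch runs on its own.
    return a.strip().lower().rstrip(".") == b.strip().lower().rstrip(".")

def semantic_entropy(answers, probs):
    """Group sampled answers into meaning-equivalence clusters, then
    return the entropy of the probability mass over those clusters."""
    clusters = []  # each entry: list of indices into `answers`
    for i, ans in enumerate(answers):
        for cluster in clusters:
            rep = answers[cluster[0]]  # cluster representative
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(i)
                break
        else:
            clusters.append([i])  # no match found: start a new cluster
    total = sum(probs)
    cluster_mass = [sum(probs[i] for i in c) / total for c in clusters]
    return -sum(p * math.log(p) for p in cluster_mass if p > 0)

# Paraphrases collapse into one cluster; a genuinely different answer
# ("Lyon") keeps the entropy above zero.
answers = ["Paris", "paris.", "Paris", "Lyon"]
probs = [0.4, 0.3, 0.2, 0.1]
print(semantic_entropy(answers, probs))  # entropy over 2 clusters, ~0.33
```

Clustering by meaning before computing entropy is the key move: paraphrases of the same answer do not inflate the uncertainty estimate, while genuinely conflicting answers do.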
Main Authors: Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, Yarin Gal
Format: Journal article
Language: English
Published: Nature Research, 2024