Framework to evaluate and test defences against hallucination in large language models
The recent advancement of AI, particularly large language models (LLMs), has enabled unprecedented capabilities in natural language processing (NLP) tasks, including content generation, translation, and question answering (QA). However, like any new technology, LLMs faced...
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/180892