Demystifying AI: bridging the explainability gap in LLMs
This project explores Retrieval-Augmented Generation (RAG) with large language models (LLMs) to improve the explainability of AI systems in specialized domains, such as auditing sustainability reports. The project focuses on the development of a Proof of Concept (...
Main Author: Chan, Darren Inn Siew
Other Authors: Erik Cambria
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175340
Similar Items
- Towards explainable artificial intelligence in the banking sector
  by: Jew, Clarissa Bella
  Published: (2024)
- Toward conversational interpretations of neural networks: data collection
  by: Yeow, Ming Xuan
  Published: (2024)
- Transferring a deep learning model from healthy subjects to stroke patients in a motor imagery brain-computer interface
  by: Nagarajan, Aarthy, et al.
  Published: (2024)
- Comment to U.S. Copyright Office on Data Provenance and Copyright
  by: Mahari, Robert, et al.
  Published: (2024)
- Study: Transparency is Often Lacking in Datasets Used to Train Large Language Models
  by: Zewe, Adam
  Published: (2024)