Demystifying AI: bridging the explainability gap in LLMs

This project explores Retrieval-Augmented Generation (RAG) with large language models (LLMs) to improve the explainability of AI systems in specialized domains such as auditing sustainability reports. It centres on the development of a Proof of Concept (PoC) web application that combines RAG with LLMs to produce more explainable and understandable AI output. The application ingests sustainability reports, processes them to answer audit-related queries, and highlights the relevant passages in each document to show the source of its responses. The implementation uses a technology stack of Python, LlamaIndex, Streamlit, and PDF-processing libraries. The project demonstrates the application's ability to ingest, process, and derive responses from a sustainability report, effectively illustrating how RAG and LLMs can enhance the explainability and reliability of AI systems in specialized domains. The PoC lays a foundation for further research and development toward more explainable and, therefore, more trustworthy AI applications.
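As a toy illustration of the retrieve-then-cite idea the abstract describes (the actual project uses LlamaIndex and an LLM; the chunks and scoring below are hypothetical stand-ins, not the author's implementation):

```python
# Toy sketch of the RAG retrieve-then-cite pattern: split a document into
# chunks, retrieve the chunk most relevant to a query, and surface it as
# the cited source passage alongside the answer.

def word_overlap(query: str, chunk: str) -> int:
    """Score a chunk by how many query words it contains."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the highest-scoring chunk -- the passage cited as the source."""
    return max(chunks, key=lambda ch: word_overlap(query, ch))

# Hypothetical report chunks (not from a real sustainability report).
chunks = [
    "Scope 1 emissions fell 12% year on year due to fleet electrification.",
    "The board approved a new diversity and inclusion policy in March.",
    "Water usage per unit of production was reduced by 8%.",
]

source = retrieve("How did Scope 1 emissions change?", chunks)
print(source)  # the emissions chunk is returned as the cited passage
```

In the full application, the retrieval step would use vector similarity over embedded chunks rather than word overlap, and an LLM would generate the answer conditioned on the retrieved passage; the highlighted source passage is what provides the explainability.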


Bibliographic Details
Main Author: Chan, Darren Inn Siew
Other Authors: Erik Cambria (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Degree: Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Project Code: SCSE23-0150
Subjects: Computer and Information Science; Retrieval augmented generation; Large language models; Explainability of AI; RAG; LLM; XAI; Sustainability reports auditing; Explainable AI
Citation: Chan, D. I. S. (2024). Demystifying AI: bridging the explainability gap in LLMs. Final Year Project (FYP), Nanyang Technological University, Singapore.
Online Access: https://hdl.handle.net/10356/175340