Bias problems in large language models and how to mitigate them

Pretrained Language Models (PLMs) like ChatGPT have become integral to various industries, revolutionising applications from customer service to software development. However, these PLMs are often trained on vast, unmoderated datasets, which may contain social biases that can be propagated in the m...


Bibliographic Details
Main Author: Ong, Adrian Zhi Ying
Other Authors: Luu Anh Tuan
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181163