A Testing Framework for AI Linguistic Systems (testFAILS)

Bibliographic Details
Main Authors: Yulia Kumar, Patricia Morreale, Peter Sorial, Justin Delgado, J. Jenny Li, Patrick Martins
Format: Article
Language: English
Published: MDPI AG 2023-07-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/12/14/3095
Description
Summary: This paper presents an innovative testing framework, testFAILS, designed for the rigorous evaluation of AI Linguistic Systems (AILS), with particular emphasis on the various iterations of ChatGPT. Leveraging orthogonal array coverage, this framework provides a robust mechanism for assessing AI systems, addressing the critical question, “How should AI be evaluated?” While the Turing test has traditionally been the benchmark for AI evaluation, it is argued that current, publicly available chatbots, despite their rapid advancements, have yet to meet this standard. However, the pace of progress suggests that achieving Turing-test-level performance may be imminent. In the interim, the need for effective AI evaluation and testing methodologies remains paramount. Ongoing research has already validated several versions of ChatGPT, and comprehensive testing on the latest models, including ChatGPT-4, Bard, Bing Bot, and the LLaMA and PaLM 2 models, is currently being conducted. The testFAILS framework is designed to be adaptable, ready to evaluate new chatbot versions as they are released. Additionally, available chatbot APIs have been tested and applications have been developed on top of them; one of these, AIDoctor, is presented in this paper and utilizes the ChatGPT-4 model together with Microsoft Azure AI technologies.
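The orthogonal-array coverage mentioned in the abstract can be illustrated with a minimal sketch. The factor names and levels below are hypothetical placeholders rather than the actual factors used by testFAILS; the point is that a small orthogonal array (here the standard L4(2^3) array) exercises every pairwise combination of factor levels in far fewer runs than exhaustive testing.

```python
# Minimal sketch of orthogonal-array test selection for chatbot evaluation.
# Factor names and levels are illustrative assumptions, not from the paper.
factors = {
    "model":       ["ChatGPT-4", "Bard"],
    "prompt_type": ["factual", "open-ended"],
    "language":    ["English", "Spanish"],
}

# Standard L4(2^3) orthogonal array: 4 runs cover every pairwise
# combination of levels, versus 2**3 = 8 runs for exhaustive testing.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

names = list(factors)
for run in L4:
    case = {names[i]: factors[names[i]][lvl] for i, lvl in enumerate(run)}
    print(case)  # one concrete test configuration per orthogonal-array row
```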
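For the kind of chatbot API access the abstract refers to, the sketch below calls a GPT-4 deployment through the Azure OpenAI service using the official openai Python SDK (v1.x). The endpoint, key, deployment name, API version, and prompts are placeholder assumptions; this is not the AIDoctor implementation itself.

```python
from openai import AzureOpenAI

# Placeholder configuration; endpoint, key, and API version are assumptions.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# "gpt-4" here is the name of a GPT-4 deployment in the Azure resource.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a cautious medical triage assistant."},
        {"role": "user", "content": "I have had a mild fever for two days. What should I do?"},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```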
ISSN: 2079-9292