Faster Feedback with AI? A Test Prioritization Study

‹Programming› Companion ’24, March 11–15, 2024, Lund, Sweden

Bibliographic Details
Main Authors: Mattis, Toni, Böhme, Lukas, Krebs, Eva, Rinard, Martin C., Hirschfeld, Robert
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: ACM|Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming 2024
Online Access: https://hdl.handle.net/1721.1/155934
_version_ 1824457878198550528
author Mattis, Toni
Böhme, Lukas
Krebs, Eva
Rinard, Martin C.
Hirschfeld, Robert
author2 Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
author_facet Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Mattis, Toni
Böhme, Lukas
Krebs, Eva
Rinard, Martin C.
Hirschfeld, Robert
author_sort Mattis, Toni
collection MIT
description ‹Programming› Companion ’24, March 11–15, 2024, Lund, Sweden
first_indexed 2024-09-23T08:19:42Z
format Article
id mit-1721.1/155934
institution Massachusetts Institute of Technology
language English
last_indexed 2025-02-19T04:16:59Z
publishDate 2024
publisher ACM|Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming
record_format dspace
spelling mit-1721.1/155934 2024-12-23T06:22:04Z
Faster Feedback with AI? A Test Prioritization Study
Mattis, Toni
Böhme, Lukas
Krebs, Eva
Rinard, Martin C.
Hirschfeld, Robert
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
‹Programming› Companion ’24, March 11–15, 2024, Lund, Sweden
Feedback during programming is desirable, but its usefulness depends on immediacy and relevance to the task. Unit and regression testing are practices to ensure programmers can obtain feedback on their changes; however, running a large test suite is rarely fast, and only a few results are relevant. Identifying tests relevant to a change can help programmers in two ways: upcoming issues can be detected earlier during programming, and relevant tests can serve as examples to help programmers understand the code they are editing. In this work, we describe an approach to evaluate how well large language models (LLMs) and embedding models can judge the relevance of a test to a change. We construct a dataset by applying faulty variations of real-world code changes and measuring whether the model could nominate the failing tests beforehand. We found that, while embedding models perform best on such a task, even simple information retrieval models are surprisingly competitive. In contrast, pre-trained LLMs are of limited use as they focus on confounding aspects like coding styles. We argue that the high computational cost of AI models is not always justified, and tool developers should also consider non-AI models for code-related retrieval and recommendation tasks. Lastly, we generalize from unit tests to live examples and outline how our approach can benefit live programming environments.
2024-08-05T17:05:39Z 2024-08-05T17:05:39Z 2024-03-11 2024-08-01T07:49:57Z
Article http://purl.org/eprint/type/ConferencePaper
979-8-4007-0634-9
https://hdl.handle.net/1721.1/155934
Mattis, Toni, Böhme, Lukas, Krebs, Eva, Rinard, Martin C. and Hirschfeld, Robert. 2024. "Faster Feedback with AI? A Test Prioritization Study."
PUBLISHER_CC en 10.1145/3660829.3660837
Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/ The author(s)
application/pdf
ACM|Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming
Association for Computing Machinery
spellingShingle Mattis, Toni
Böhme, Lukas
Krebs, Eva
Rinard, Martin C.
Hirschfeld, Robert
Faster Feedback with AI? A Test Prioritization Study
title Faster Feedback with AI? A Test Prioritization Study
title_full Faster Feedback with AI? A Test Prioritization Study
title_fullStr Faster Feedback with AI? A Test Prioritization Study
title_full_unstemmed Faster Feedback with AI? A Test Prioritization Study
title_short Faster Feedback with AI? A Test Prioritization Study
title_sort faster feedback with ai a test prioritization study
url https://hdl.handle.net/1721.1/155934
work_keys_str_mv AT mattistoni fasterfeedbackwithaiatestprioritizationstudy
AT bohmelukas fasterfeedbackwithaiatestprioritizationstudy
AT krebseva fasterfeedbackwithaiatestprioritizationstudy
AT rinardmartinc fasterfeedbackwithaiatestprioritizationstudy
AT hirschfeldrobert fasterfeedbackwithaiatestprioritizationstudy
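
To illustrate the kind of embedding-based test prioritization the abstract describes, here is a minimal sketch. Everything in it is an assumption for illustration, not the authors' implementation: the sentence-transformers library, the all-MiniLM-L6-v2 model, the prioritize_tests helper, and the cosine-similarity ranking are stand-ins; the paper's actual models and evaluation pipeline are described in the full text at the handle above.

# Minimal sketch: rank a suite's tests by embedding similarity to a code change.
# Library, model choice, and function names are illustrative assumptions,
# not the paper's actual setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical embedding model

def prioritize_tests(change_diff: str, test_sources: dict[str, str]) -> list[str]:
    """Return test names ordered from most to least relevant to the change."""
    change_vec = model.encode(change_diff, convert_to_tensor=True)
    names = list(test_sources)
    test_vecs = model.encode([test_sources[n] for n in names], convert_to_tensor=True)
    # Cosine similarity between the change and each test's source text.
    scores = util.cos_sim(change_vec, test_vecs)[0].tolist()
    return [name for _, name in sorted(zip(scores, names), reverse=True)]

A programming environment could run the top-ranked tests first and surface their results while the rest of the suite executes, giving the earlier and more relevant feedback the abstract motivates.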