Generating Python Mutants From Bug Fixes Using Neural Machine Translation


Bibliographic Details
Main Authors: Sergen Asik, Ugur Yayan
Format: Article
Language:English
Published: IEEE 2023-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10210051/
Description
Summary:Due to the fast-paced development of technology, software has become a crucial aspect of modern life, facilitating the operation and management of hardware devices. Nevertheless, using substandard software can cause severe complications for users, even putting human lives at risk. This underscores the importance of error-free, high-quality software. Verification and validation are essential to high-quality software development, and software testing is integral to this process. Although code coverage is a prevalent method for assessing the efficacy of test suites, it has limitations, and mutation testing has been proposed as a remedy. Mutation testing is also recognized as a method for guiding test case creation and evaluating the effectiveness of test suites. Our proposed method autonomously learns mutations from faults in real-world software applications. First, our approach extracts bug fixes at the method level, classifies them according to mutation type, and performs code abstraction. Subsequently, it uses a deep learning technique based on neural machine translation to develop mutation models. Our models have been trained and assessed using approximately 588k bug-fix commits extracted from GitHub. The results of our experimental assessment indicate that our models can predict mutations resembling resolved bugs in 6% to 35% of instances. The models effectively revert fixed code to its original buggy version, reproducing the original bug and generating various other buggy codes with up to 94% accuracy. More than 96% of the generated mutants are also lexically and syntactically correct.
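To make the idea of a mutant concrete: mutation testing injects small, plausible faults into correct code and checks whether the test suite detects them. The abstract describes learning such transformations from real bug fixes via neural machine translation; the sketch below instead uses a single hand-written mutation operator (swapping `+` for `-` via Python's `ast` module) purely to illustrate what a generated mutant looks like. The operator, function names, and example source are illustrative assumptions, not the paper's implementation.

```python
import ast

class ArithmeticMutator(ast.NodeTransformer):
    """Toy mutation operator: replaces ``+`` with ``-`` in binary
    expressions. A hand-written rule like this only illustrates the
    concept; the paper learns transformations from real bug fixes."""

    def visit_BinOp(self, node):
        self.generic_visit(node)          # mutate nested expressions first
        if isinstance(node.op, ast.Add):  # a + b  ->  a - b
            node.op = ast.Sub()
        return node

def generate_mutant(source: str) -> str:
    """Parse the source, apply the mutation operator, and unparse.

    Requires Python 3.9+ for ``ast.unparse``."""
    tree = ast.parse(source)
    mutated = ArithmeticMutator().visit(tree)
    ast.fix_missing_locations(mutated)
    return ast.unparse(mutated)

original = "def total(a, b):\n    return a + b\n"
print(generate_mutant(original))  # the returned expression becomes a - b
```

A test suite that never checks `total` with values where addition and subtraction differ would fail to "kill" this mutant, which is exactly the coverage gap mutation testing exposes.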
ISSN:2169-3536