Improving the reliability and validity of test data adequacy in programming assessments

Automatic Programming Assessment (APA) has recently become a notable method for helping educators of programming courses assess and grade students’ programming exercises automatically, because its counterpart, manual assessment, is prone to errors and leads to inconsistency. Practically, this...

Full description

Bibliographic Details
Main Authors: Romli, Rohaida, Shahimi, Shahida, Zamli, Kamal Zuhairi
Format: Article
Language:English
Published: Penerbit UTM Press 2015
Subjects: QA75 Electronic computers. Computer science
Online Access:https://repo.uum.edu.my/id/eprint/18700/1/JT%2077%209%20%202015%20149-163.pdf
author Romli, Rohaida
Shahimi, Shahida
Zamli, Kamal Zuhairi
collection UUM
description Automatic Programming Assessment (APA) has recently become a notable method for helping educators of programming courses assess and grade students’ programming exercises automatically, because its counterpart, manual assessment, is prone to errors and leads to inconsistency. Practically, this method also provides an effective alternative means of reducing the educators’ workload. By default, the test data generation process plays an important role in performing dynamic testing on students’ programs. Dynamic testing involves executing a program against different inputs, or test data, and comparing the results with the expected output, which must conform to the program specifications. In the software testing field, there are diverse automated methods for test data generation. Unfortunately, APA rarely adopts these methods. Only limited studies have attempted to integrate APA with test data generation to include more useful features and to provide precise and thorough program testing. Thus, we propose a test data generation framework known as FaSt-Gen, covering both the functional and structural testing of a program for APA. Functional testing relies on specified functional requirements and focuses on the output generated in response to the selected test data and execution; structural testing, meanwhile, examines the specific program logic to verify how it works. Overall, FaSt-Gen offers educators of programming courses a means to furnish an adequate set of test data for assessing students’ programming solutions, even without expertise in the particular knowledge of test case design. FaSt-Gen integrates positive and negative testing criteria, the so-called reliable and valid test adequacy criteria, to derive the desired test data and test set schema.
For functional testing, the integration of the specification-derived test and simplified boundary value analysis techniques covers both criteria; the path coverage criterion guides test data selection for structural testing. Findings from the controlled experiment and comparative study evaluation show that FaSt-Gen improves the reliability and validity of test data adequacy in programming assessments.
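The reliable-and-valid (positive/negative) adequacy criteria combined with simplified boundary value analysis can be pictured with a minimal sketch. This is an illustration under assumed conventions, not FaSt-Gen's actual implementation; the function name and the example range 1..100 are hypothetical:

```python
def boundary_values(lo, hi):
    """Simplified boundary value analysis for an integer input range [lo, hi].

    Positive (valid) test data exercise the boundaries, their inner
    neighbours, and a nominal mid-range value; negative (invalid) test
    data fall just outside the specified domain.
    """
    positive = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
    negative = [lo - 1, hi + 1]
    return positive, negative

# Example: a program whose specification accepts inputs in 1..100.
pos, neg = boundary_values(1, 100)
```

Together, the positive set checks that valid inputs produce correct output (validity), while the negative set checks that invalid inputs are rejected or handled (reliability).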
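The path coverage criterion used for structural testing selects at least one test input per feasible execution path of the student's program. A minimal sketch, where the grading function and its inputs are hypothetical examples rather than material from the paper:

```python
def grade(score):
    # Illustrative student program under assessment:
    # two decisions yield three execution paths.
    if score >= 80:
        return "A"
    elif score >= 50:
        return "pass"
    else:
        return "fail"

# A path-adequate test set pairs one input per path with the expected
# output taken from the program specification; dynamic testing then
# compares actual output against expected output for each input.
path_tests = {85: "A", 60: "pass", 30: "fail"}
all_paths_pass = all(grade(x) == expected for x, expected in path_tests.items())
```

Each key in `path_tests` drives execution down a distinct path, so the set satisfies path coverage for this program.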
format Article
id uum-18700
institution Universiti Utara Malaysia
language English
publishDate 2015
publisher Penerbit UTM Press
record_format eprints
spelling Romli, Rohaida and Shahimi, Shahida and Zamli, Kamal Zuhairi (2015) Improving the reliability and validity of test data adequacy in programming assessments. Jurnal Teknologi, 77 (9). pp. 149-163. ISSN 0127-9696. Penerbit UTM Press. PeerReviewed. application/pdf. en. QA75 Electronic computers. Computer science. https://repo.uum.edu.my/id/eprint/18700/ http://www.jurnalteknologi.utm.my/index.php/jurnalteknologi/article/view/6201
title Improving the reliability and validity of test data adequacy in programming assessments