Scaling Automated Programming Assessment Systems

Bibliographic Details
Main Authors: Igor Mekterović, Ljiljana Brkić, Marko Horvat
Format: Article
Language: English
Published: MDPI AG, 2023-02-01
Series: Electronics
Collection: Directory of Open Access Journals (DOAJ)
Subjects: web application; scalability; APAS; automated programming assessment
Online Access: https://www.mdpi.com/2079-9292/12/4/942
Description
The first automated assessment of student programs was reported more than 60 years ago, yet the topic remains highly relevant among computer science researchers and teachers. In the last decade, several factors have contributed to the popularity of this approach, such as the growth of massive open online courses, whose large enrollments can hardly be assessed manually; the COVID-19 pandemic, with its strong shift to online teaching and the physical relocation of students; and the ever-increasing shortage of personnel in the CS field. Modern Automated Programming Assessment Systems (APASs) are implemented as web applications. For such web applications, especially those that support immediate (on-demand) program assessment and feedback, it can be quite a challenge to implement the various system modules in a secure and scalable manner. Over the past six years, we have developed and actively deployed “Edgar”, a state-of-the-art APAS that enables immediate program evaluation and feedback in any programming language (SQL, C, Java, etc.). In this article, we examine APAS web application architecture with a focus on scalability. We review fundamental features such as dynamic analysis and untrusted code execution, as well as more complex cases such as static analysis and plagiarism detection, and we summarize the lessons learned over the previous six years of research. We identify scalability challenges, show how they have been addressed in APAS Edgar, and propose general architectural solutions, building blocks, and patterns to address those challenges.
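The description above mentions immediate (on-demand) evaluation of untrusted student programs, a core dynamic-analysis feature of any APAS. As a minimal illustrative sketch (not Edgar's actual implementation, whose details the abstract does not specify), such a step might run each submission in a separate process with a wall-clock timeout and compare its output to the expected result; production systems would add OS-level sandboxing (containers, resource limits, seccomp) on top:

```python
import os
import subprocess
import tempfile

def evaluate_submission(source_code: str, stdin_data: str = "",
                        timeout_s: float = 2.0) -> dict:
    """Run a student's Python submission in a child process with a timeout,
    capturing stdout and the exit code. Hypothetical helper for illustration."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source_code)
        path = f.name
    try:
        proc = subprocess.run(
            ["python3", path],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout_s,  # kill runaway submissions (infinite loops)
        )
        return {"status": "ok", "stdout": proc.stdout,
                "exit_code": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"status": "timeout", "stdout": "", "exit_code": None}
    finally:
        os.unlink(path)  # clean up the temporary source file

# Grade by comparing actual output against a test case's expected output.
result = evaluate_submission("print(sum(range(10)))")
passed = result["status"] == "ok" and result["stdout"].strip() == "45"
```

Isolating each run in its own process is what makes the module horizontally scalable: evaluation workers can be replicated behind a job queue independently of the web front end.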
ISSN: 2079-9292
DOI: 10.3390/electronics12040942
Citation: Electronics, Vol. 12, Issue 4, Article 942 (published 2023-02-01)
Author Affiliations: Department of Applied Computing, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000 Zagreb, Croatia (all three authors)