Is JavaScript Call Graph Extraction Solved Yet? A Comparative Study of Static and Dynamic Tools


Bibliographic Details
Main Authors: Gábor Antal, Péter Hegedűs, Zoltán Herczeg, Gábor Lóki, Rudolf Ferenc
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10066273/
Description
Summary: Code analysis is more important than ever because JavaScript is increasingly popular and actively used, both on the client and server sides. Most algorithms for analyzing vulnerabilities, finding coding issues, or inferring types depend on the call graph representation of the underlying program. Luckily, there are already quite a few tools for this job. However, their performance in vitro, and especially in vivo, has not yet been extensively compared and evaluated. In this paper, we compare several approaches to building JavaScript call graphs: five static and two dynamic approaches on 26 WebKit SunSpider programs, and two static and two dynamic approaches on 12 real-world Node.js programs. The static tools under examination were npm call graph, IBM WALA, Google Closure Compiler, Approximate Call Graph, and Type Analyzer for JavaScript (TAJS). We performed dynamic analyses relying on the nodejs-cg tool (a customized Node.js runtime) and the NodeProf instrumentation and profiling framework. We provide a quantitative evaluation of the results and a result quality analysis based on 941 manually validated call edges. On the SunSpider programs, which take no inputs, so dynamic extraction could be complete, all the static tools also performed well. For example, TAJS found 93% of all edges while achieving 97% precision compared to the precise dynamic call graph. When it comes to real-world Node.js modules, our evaluation shows that static tools struggle to parse the code and fail to detect a significant number of call edges that dynamic approaches can capture. Nonetheless, the static tools also report a significant number of edges that the dynamic approaches do not detect. Among these, however, there are edges that are real but were missed dynamically because the unit tests never executed the branches containing those calls.
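To make the comparison concrete, the following is a minimal sketch (not code from the paper) of the two ideas the summary contrasts: recording a dynamic call graph by wrapping functions at run time, roughly in the spirit of instrumentation frameworks such as NodeProf, and scoring a static tool's edge set against that dynamic ground truth. All function names and the "static" edge set here are invented for illustration.

```javascript
// Dynamic side: record caller -> callee edges via function wrapping.
const dynamicEdges = new Set();
let current = "<root>"; // function currently on top of our shadow stack

// Wrap a function so every invocation records a caller -> callee edge.
function traced(name, fn) {
  return function (...args) {
    dynamicEdges.add(`${current} -> ${name}`);
    const prev = current;
    current = name;
    try {
      return fn.apply(this, args);
    } finally {
      current = prev; // restore caller on return or throw
    }
  };
}

const square = traced("square", (x) => x * x);
const sumOfSquares = traced("sumOfSquares", (a, b) => square(a) + square(b));
sumOfSquares(3, 4); // records <root> -> sumOfSquares and sumOfSquares -> square

// Static side: invented output of a hypothetical static analyzer,
// containing one spurious edge a dynamic run would never observe.
const staticEdges = new Set([
  "<root> -> sumOfSquares",
  "sumOfSquares -> square",
  "sumOfSquares -> cube", // false positive
]);

// Score the static edges against the dynamic ground truth.
const truePos = [...staticEdges].filter((e) => dynamicEdges.has(e)).length;
const precision = truePos / staticEdges.size; // 2/3
const recall = truePos / dynamicEdges.size;   // 2/2 = 1
console.log({ precision, recall });
```

As the paper's Node.js results suggest, the caveat in this toy setup is real: a dynamic graph is only as complete as the inputs (e.g., unit tests) that drive it, so an edge absent from `dynamicEdges` is not necessarily a static false positive.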
ISSN:2169-3536