Distilled and Contextualized Neural Models Benchmarked for Vulnerable Function Detection
Detecting vulnerabilities in programs is an important yet challenging problem in cybersecurity. Recent advances in natural language understanding have enabled data-driven research on automated code analysis to embrace Pre-trained Contextualized Models (PCMs). These models are pre...
| Main Authors: | Guanjun Lin, Heming Jia, Di Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-11-01 |
| Series: | Mathematics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7390/10/23/4482 |
Similar Items

- A Context-Aware Neural Embedding for Function-Level Vulnerability Detection
  by: Hongwei Wei, et al.
  Published: (2021-11-01)
- Progressive Network Grafting With Local Features Embedding for Few-Shot Knowledge Distillation
  by: Weidong Du
  Published: (2022-01-01)
- Knowledge Distillation With Feature Self Attention
  by: Sin-Gu Park, et al.
  Published: (2023-01-01)
- AdaDS: Adaptive data selection for accelerating pre-trained language model knowledge distillation
  by: Qinhong Zhou, et al.
  Published: (2023-01-01)
- A Mongolian-Chinese Neural Machine Translation Model Based on Soft Target Templates and Contextual Knowledge
  by: Qing-Dao-Er-Ji Ren, et al.
  Published: (2023-10-01)