Weighted automata extraction and explanation of recurrent neural networks for natural language tasks


Bibliographic Details
Main Authors: Wei, Z; Zhang, X; Zhang, Y; Sun, M
Format: Journal article
Language: English
Published: Elsevier 2023
description Recurrent Neural Networks (RNNs) have achieved tremendous success in processing sequential data, yet understanding and analyzing their behaviours remains a significant challenge. To this end, many efforts have been made to extract finite automata from RNNs, which are more amenable to analysis and explanation. However, existing approaches, such as exact learning and compositional approaches for model extraction, have limitations in either scalability or precision. In this paper, we propose a novel framework of Weighted Finite Automata (WFA) extraction and explanation to tackle these limitations for natural language tasks. First, to address the transition sparsity and context loss problems we identified in WFA extraction for natural language tasks, we propose an empirical method to complement missing rules in the transition diagram and adjust transition matrices to enhance the context-awareness of the WFA. We also propose two data augmentation tactics to track more dynamic behaviours of the RNN, further improving extraction precision. Based on the extracted model, we propose an explanation method for RNNs comprising a word embedding method – Transition Matrix Embeddings (TME) – and TME-based task-oriented explanation for the target RNN. Our evaluation demonstrates the advantage of our method in extraction precision over existing approaches, and the effectiveness of the TME-based explanation method in applications to pretraining and adversarial example generation.
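To make the abstract's core object concrete: a WFA assigns each word a transition matrix and scores a sequence by multiplying those matrices between an initial and a final state vector. The sketch below is illustrative only, not the paper's implementation; the class name, the toy matrices, and the fallback matrix (a hypothetical stand-in for the paper's "complement missing rules" idea, here mapping unseen words to the identity) are all assumptions.

```python
import numpy as np

class WFA:
    """Toy Weighted Finite Automaton: score(w1..wk) = initial @ T[w1] @ ... @ T[wk] @ final."""

    def __init__(self, n_states, transitions, initial, final, fallback=None):
        self.transitions = transitions  # word -> (n_states, n_states) matrix
        self.initial = initial          # (n_states,) row vector
        self.final = final              # (n_states,) column vector
        # Hypothetical stand-in for complementing missing transition rules:
        # words unseen during extraction fall back to a default matrix
        # instead of making the whole sequence unscorable.
        self.fallback = fallback if fallback is not None else np.eye(n_states)

    def score(self, words):
        state = self.initial.copy()
        for w in words:
            state = state @ self.transitions.get(w, self.fallback)
        return float(state @ self.final)

# Two-state automaton over a two-word toy vocabulary.
T = {
    "good": np.array([[0.2, 0.8], [0.1, 0.9]]),
    "bad":  np.array([[0.9, 0.1], [0.8, 0.2]]),
}
wfa = WFA(2, T, initial=np.array([1.0, 0.0]), final=np.array([0.0, 1.0]))
print(wfa.score(["good", "bad"]))      # 0.18: weight of ending in state 1
print(wfa.score(["good", "unseen"]))   # 0.8: unseen word uses the fallback
```

Under this view, the TME embedding described in the abstract would treat each word's transition matrix (flattened) as that word's vector representation, so words inducing similar state dynamics land near each other.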
id oxford-uuid:5ce80f26-1596-4767-b0ca-2f923a296691
institution University of Oxford