Verification of Approximate Initial-State Opacity for Control Systems via Neural Augmented Barrier Certificates

Bibliographic Details
Main Authors: Shengpu Wang, Mi Ding, Wang Lin, Yubo Jia
Format: Article
Language: English
Published: MDPI AG 2022-07-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/10/14/2388
Description
Summary: In this paper, we propose an augmented barrier certificate-based method for formally verifying the approximate initial-state opacity property of discrete-time control systems. The opacity verification problem is formulated as the safety verification of an augmented system and is then addressed by searching for augmented barrier certificates. A set of well-defined verification conditions is a prerequisite for successfully identifying augmented barrier certificates of a specific type. We first suggest a new type of augmented barrier certificate which produces a weaker sufficient condition for approximate initial-state opacity. Furthermore, we develop an algorithmic framework in which a learner and a verifier interact to synthesize augmented barrier certificates in the form of neural networks. The learner trains neural certificates via deep learning, and the verifier solves several mixed integer linear programs to either ensure the validity of the candidate certificates or yield counterexamples, which are passed back to further guide the learner. The experimental results demonstrate that our approach is more scalable and effective than the existing sum-of-squares programming method.
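
The learner/verifier interaction described in the summary follows a counterexample-guided synthesis pattern. The sketch below is a minimal, hypothetical illustration of that loop in PyTorch, not the authors' implementation: the toy dynamics, set definitions, network size, margins, and the sampling-based falsifier are assumptions made for the example, and the falsifier merely stands in for the mixed integer linear programs that give the paper's verifier its soundness guarantee.

```python
# Minimal CEGIS-style learner/verifier loop for a neural barrier certificate.
# Hypothetical example only: dynamics, sets, thresholds, and the sampling-based
# falsifier are illustrative stand-ins, not the paper's augmented-system + MILP setup.

import torch
import torch.nn as nn

torch.manual_seed(0)


def toy_dynamics(x):
    """Placeholder discrete-time dynamics x_{k+1} = A x_k."""
    A = torch.tensor([[0.9, 0.1], [-0.1, 0.9]])
    return x @ A.T


def in_initial(x):
    """Initial set: a small box around the origin (assumed for the example)."""
    return (x.abs() <= 0.2).all(dim=-1)


def in_unsafe(x):
    """Unsafe set: |x_1| >= 0.9 (assumed for the example)."""
    return x[..., 0].abs() >= 0.9


class Barrier(nn.Module):
    """Candidate certificate B : R^2 -> R, a small ReLU network."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


def barrier_loss(B, x):
    """Penalize violations of the standard barrier conditions:
    B <= 0 on the initial set, B >= margin on the unsafe set,
    and B non-increasing along one-step transitions."""
    Bx, Bnext = B(x), B(toy_dynamics(x))
    init_term = torch.relu(Bx[in_initial(x)]).sum()
    unsafe_term = torch.relu(0.1 - Bx[in_unsafe(x)]).sum()
    decrease_term = torch.relu(Bnext - Bx).sum()
    return init_term + unsafe_term + decrease_term


def falsify(B, n=20000):
    """Stand-in verifier: random search for condition violations.
    The paper's verifier instead solves MILPs over the exact ReLU network,
    so it either certifies validity or returns genuine counterexamples."""
    x = (torch.rand(n, 2) - 0.5) * 2.0  # sample the working domain [-1, 1]^2
    with torch.no_grad():
        Bx, Bnext = B(x), B(toy_dynamics(x))
        zero = torch.zeros_like(Bx)
        v_init = torch.where(in_initial(x), Bx, zero)         # > 0 is a violation
        v_unsafe = torch.where(in_unsafe(x), 0.1 - Bx, zero)  # > 0 is a violation
        v_decrease = Bnext - Bx                                # > 0 is a violation
        violation = torch.maximum(torch.maximum(v_init, v_unsafe), v_decrease)
    return x[violation > 1e-4]


B = Barrier()
opt = torch.optim.Adam(B.parameters(), lr=1e-3)
data = (torch.rand(4096, 2) - 0.5) * 2.0  # initial training samples

for round_idx in range(20):
    for _ in range(500):                  # learner: minimize the violation loss
        opt.zero_grad()
        barrier_loss(B, data).backward()
        opt.step()
    cex = falsify(B)                      # verifier: search for counterexamples
    if len(cex) == 0:
        print(f"round {round_idx}: candidate accepted (no counterexamples found)")
        break
    print(f"round {round_idx}: {len(cex)} counterexamples found, refining")
    data = torch.cat([data, cex])         # counterexamples guide further training
```

Two simplifications should be kept in mind: the paper's certificate is defined over an augmented (product) system so that approximate initial-state opacity reduces to a safety property, whereas the sketch only shows the plain safety case to keep the loop visible; and an exact MILP encoding of the ReLU network (rather than random sampling) is what makes the verifier's "no counterexample" answer a proof instead of a statistical observation.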
ISSN: 2227-7390