Learn Quasi-Stationary Distributions of Finite State Markov Chain
We propose a reinforcement learning (RL) approach to compute the quasi-stationary distribution. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL-divergence of two Markovian path distributions induced by the candidate distribution and the true target distribution.
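As context for the fixed-point formulation the abstract refers to: the quasi-stationary distribution of a finite-state chain with absorption is the normalized dominant left eigenvector of the sub-stochastic matrix Q restricted to the transient states, i.e. a fixed point of α ↦ αQ / (αQ·1). The sketch below is not the paper's RL method; it is only the classical fixed-point iteration, on a small hypothetical matrix Q chosen for illustration.

```python
def qsd_fixed_point(Q, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration alpha <- alpha Q / (alpha Q 1) for the
    quasi-stationary distribution of a sub-stochastic matrix Q
    (transition probabilities restricted to the transient states)."""
    n = len(Q)
    alpha = [1.0 / n] * n  # uniform initial guess
    for _ in range(max_iter):
        # one step of the chain: nxt[j] = sum_i alpha[i] * Q[i][j]
        nxt = [sum(alpha[i] * Q[i][j] for i in range(n)) for j in range(n)]
        mass = sum(nxt)  # one-step survival probability under alpha
        nxt = [x / mass for x in nxt]  # condition on survival
        if max(abs(a - b) for a, b in zip(alpha, nxt)) < tol:
            return nxt
        alpha = nxt
    return alpha

# Hypothetical transient-state block of a chain with one absorbing state
# (row sums < 1; the deficit is the per-step absorption probability):
Q = [[0.5, 0.3],
     [0.2, 0.6]]
alpha = qsd_fixed_point(Q)  # -> approximately [0.4, 0.6]
```

For this Q the iteration converges to α = (0.4, 0.6), which satisfies αQ = 0.8·α, so 0.8 is the survival rate per step conditioned on non-absorption.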
Main Authors: Zhiqiang Cai, Ling Lin, Xiang Zhou
Format: Article
Language: English
Published: MDPI AG, 2022-01-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/24/1/133
Similar Items
- Generalized Analysis of a Distribution Separation Method
  by: Peng Zhang, et al.
  Published: (2016-04-01)
- A First Approach to Closeness Distributions
  by: Jesus Cerquides
  Published: (2021-12-01)
- α-Geodesical Skew Divergence
  by: Masanari Kimura, et al.
  Published: (2021-04-01)
- Numerical Study on Parameters of the Airborne VLF Antenna by Quasi-Stationary Model
  by: Jiangfeng Cheng, et al.
  Published: (2022-12-01)
- STOCHASTIC FINITE ELEMENT MODEL UPDATING BASED ON POLYNOMIAL CHAOTIC EXPANSION AND KL DIVERGENCE
  by: XU ZeWei, et al.
  Published: (2021-01-01)