MULTI-OBJECTIVE MODEL CHECKING OF MARKOV DECISION PROCESSES


Bibliographic Details
Main Authors: Etessami, K., Kwiatkowska, M., Vardi, M. Y., Yannakakis, M.
Format: Journal article
Language: English
Published: 2008
Description
Summary: We study and provide efficient algorithms for multi-objective model checking problems for Markov Decision Processes (MDPs). Given an MDP M, multiple linear-time (ω-regular or LTL) properties φ_i, and probabilities r_i ∈ [0,1], i = 1, ..., k, we ask whether there exists a strategy σ for the controller such that, for all i, the probability that a trajectory of M controlled by σ satisfies φ_i is at least r_i. We provide an algorithm that decides whether such a strategy exists, produces one if it does, and runs in time polynomial in the size of the MDP. Such a strategy may require both randomization and memory. We also consider more general multi-objective ω-regular queries, which we motivate with an application to assume-guarantee compositional reasoning for probabilistic systems. Note that there can be trade-offs between different properties: satisfying φ_1 with high probability may necessitate satisfying φ_2 with low probability. Viewing this as a multi-objective optimization problem, we want information about the "trade-off curve", or Pareto curve, for maximizing the probabilities of the different properties. We show that an approximate Pareto curve with respect to a set of ω-regular properties can be computed in time polynomial in the size of the MDP. Our quantitative upper bounds use linear programming (LP) methods. We also study qualitative multi-objective model checking problems, and show that these can be analysed by purely graph-theoretic methods, even though the witnessing strategies may still require both randomization and memory. © K. Etessami, M. Kwiatkowska, M. Y. Vardi, and M. Yannakakis.
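
The LP methods mentioned in the abstract can be illustrated on the reachability fragment of the problem. The following is a minimal sketch, not the paper's full construction (which first reduces ω-regular objectives to reachability of accepting end components in a product MDP): it checks a multi-objective reachability query Pr(reach T_i) ≥ r_i via the classical "occupation measure" LP. The toy MDP, its state and action names, and the thresholds are all illustrative assumptions, and the sketch assumes target states are absorbing and that every strategy reaches an absorbing state with probability 1.

```python
# Hedged sketch: occupation-measure LP for a multi-objective
# reachability query on a hypothetical toy MDP.
import numpy as np
from scipy.optimize import linprog

transient = ["s0"]                       # non-absorbing states
actions = {"s0": ["a", "b"]}             # enabled actions per state
delta = {                                # (state, action) -> successor distribution
    ("s0", "a"): {"t1": 0.9, "t2": 0.1},
    ("s0", "b"): {"t1": 0.2, "t2": 0.8},
}
alpha = {"s0": 1.0}                      # initial distribution
targets = [{"t1"}, {"t2"}]               # one absorbing target set per objective
r = [0.5, 0.5]                           # thresholds r_i

pairs = [(s, a) for s in transient for a in actions[s]]
n = len(pairs)

# Flow conservation over occupation variables y(s, a):
#   sum_a y(s,a) - sum_{s',a'} y(s',a') * delta(s',a')(s) = alpha(s)
A_eq = np.zeros((len(transient), n))
b_eq = np.array([alpha.get(s, 0.0) for s in transient])
for i, s in enumerate(transient):
    for j, (s2, a2) in enumerate(pairs):
        A_eq[i, j] = (1.0 if s2 == s else 0.0) - delta[(s2, a2)].get(s, 0.0)

# For each objective i, the inflow into T_i must be at least r_i;
# linprog uses A_ub @ x <= b_ub, so negate both sides.
A_ub = np.zeros((len(targets), n))
for i, T in enumerate(targets):
    for j, (s2, a2) in enumerate(pairs):
        A_ub[i, j] = -sum(delta[(s2, a2)].get(t, 0.0) for t in T)
b_ub = np.array([-ri for ri in r])

res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
if res.success:
    # Normalizing the occupation measure per state yields a memoryless
    # randomized strategy witnessing all thresholds simultaneously.
    total = {s: sum(res.x[j] for j, (s2, _) in enumerate(pairs) if s2 == s)
             for s in transient}
    for j, (s, a) in enumerate(pairs):
        print(f"in {s}: play {a} with probability {res.x[j] / total[s]:.4f}")
else:
    print("no strategy meets all thresholds")
```

On this toy instance the two thresholds force the randomized choice of action 'a' with probability exactly 3/7, a small illustration of the abstract's point that witnessing strategies may need randomization; memory becomes necessary only for richer ω-regular objectives.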
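The Pareto-curve computation can be illustrated the same way: the set of achievable probability vectors of an MDP is a convex polytope, so sweeping weight vectors w and maximizing w · (Pr(φ_1), ..., Pr(φ_k)) over the same LP visits points on its Pareto face, a crude fixed-grid version of the approximation idea. A minimal sketch for the hypothetical two-objective MDP above (the matrix P and the weight grid are assumptions of the sketch, not the paper's construction):

```python
# Hedged sketch: weighted-sum sweep over the same occupation-measure LP.
# Columns of P give each action's probability of absorbing in t1, t2;
# y = (y_a, y_b) with y_a + y_b = 1 ranges over memoryless strategies.
import numpy as np
from scipy.optimize import linprog

P = np.array([[0.9, 0.2],    # Pr(t1) under actions a, b
              [0.1, 0.8]])   # Pr(t2) under actions a, b
for w1 in np.linspace(0.0, 1.0, 5):
    w = np.array([w1, 1.0 - w1])
    # linprog minimizes, so negate the coefficients to maximize w . (P y).
    res = linprog(c=-(w @ P), A_eq=[[1.0, 1.0]], b_eq=[1.0],
                  bounds=[(0, None)] * 2)
    print(f"w = {w}, achievable point (Pr t1, Pr t2) = {P @ res.x}")
```

Each weight vector yields one optimal achievable point; in the paper's polynomial-time result, a suitably chosen set of such queries suffices to produce an ε-approximate Pareto curve.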