Qualitative analysis of probabilistic synchronizing systems

Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and hence are well suited to model the interactions between a controller and its randomly responding environment. An MDP can be viewed mathematically as a one-and-a-half-player stochastic game...


Bibliographic Details
Main Author: Shirmohammadi, Mahsa
Other Authors: Doyen, Laurent
Format: Doctoral Thesis
Language: en
Published: Universite Libre de Bruxelles 2014
Subjects:
Online Access:http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209188
id ndltd-ulb.ac.be-oai-dipot.ulb.ac.be-2013-209188
record_format oai_dc
collection NDLTD
language en
format Doctoral Thesis
sources NDLTD
topic Informatique générale
Sciences exactes et naturelles
Stochastic processes
Probabilistic automata
Processus stochastiques
Automates probabilistes
synchronising words
probabilistic automata
Markov decision process
description Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and hence are well suited to model the interactions between a controller and its randomly responding environment. An MDP can be viewed mathematically as a one-and-a-half-player stochastic game played in rounds in which the controller chooses an action and the environment chooses a successor state according to a fixed probability distribution.

There are two incomparable views on the behavior of an MDP once the strategic choices are fixed. In the traditional view, an MDP is a generator of sequences of states, called state-outcomes; the winning condition of the player is then expressed as a set of desired sequences of states visited during the game, e.g., a Borel condition such as reachability. The computational complexity of the related decision problems and the memory requirements of winning strategies for state-outcome conditions are well studied.

More recently, MDPs have been viewed as generators of sequences of probability distributions over states, called distribution-outcomes. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that the probability mass accumulate in some state (or group of states), possibly in the limit. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is always, eventually, weakly, or strongly p-synchronizing if, respectively, all, some, infinitely many, or all but finitely many distributions in the sequence are p-synchronizing.

For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all epsilon > 0, (1-epsilon)-synchronizing; and (iii) limit-sure winning if, for all epsilon > 0, there is a strategy that produces a (1-epsilon)-synchronizing sequence.

We consider the problem of deciding whether an MDP is winning, for each synchronizing and winning mode: we establish matching upper and lower complexity bounds for these problems, as well as the memory requirements of optimal winning strategies.

As a further contribution, we study synchronization in probabilistic automata (PAs), which are a kind of MDP in which the controller is restricted to word-strategies, i.e., it cannot observe the history of the system execution, only the number of choices made so far. The synchronizing language of a PA is then the set of all synchronizing word-strategies: we establish the computational complexity of the emptiness and universality problems for the synchronizing languages in all winning modes.

We carry over the results on synchronizing problems from MDPs and PAs to two-player turn-based games and non-deterministic finite automata. Along with the main results, we establish new complexity results for alternating finite automata over a one-letter alphabet. In addition, we study different variants of synchronization for timed and weighted automata, as two instances of infinite-state systems.

Doctorat en Sciences
info:eu-repo/semantics/nonPublished
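The synchronizing and winning modes summarized in the abstract can be stated more formally. The following LaTeX fragment is a minimal sketch of those definitions exactly as the abstract paraphrases them; the notation (the MDP tuple (Q, A, delta), the distribution-outcome X_0 X_1 X_2 ..., the strategy alpha, and the norm ||.||) is introduced here only for illustration and is not necessarily the notation used in the thesis, and the "group of states" variant (summing mass over a target set) is not covered.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Illustrative notation (not necessarily the thesis's): an MDP is a tuple
% $\mathcal{M} = (Q, A, \delta)$ with finite state set $Q$, finite action set $A$,
% and probabilistic transition function $\delta : Q \times A \to \mathcal{D}(Q)$.
% A strategy $\alpha$ induces a distribution-outcome $X_0 X_1 X_2 \dots$, where each
% $X_i \in \mathcal{D}(Q)$ is the distribution over states after $i$ rounds.

Let $\|X\| = \max_{q \in Q} X(q)$. A distribution $X$ is $p$-synchronizing
iff $\|X\| \ge p$, and a distribution-outcome $X_0 X_1 X_2 \dots$ is
\begin{align*}
  \text{always $p$-synchronizing}     &\iff \|X_i\| \ge p \text{ for all } i,\\
  \text{eventually $p$-synchronizing} &\iff \|X_i\| \ge p \text{ for some } i,\\
  \text{weakly $p$-synchronizing}     &\iff \|X_i\| \ge p \text{ for infinitely many } i,\\
  \text{strongly $p$-synchronizing}   &\iff \|X_i\| \ge p \text{ for all but finitely many } i.
\end{align*}

For a fixed synchronizing mode, the MDP is
\begin{align*}
  \text{sure winning}        &\iff \exists \alpha :\ \text{the outcome of $\alpha$ is $1$-synchronizing},\\
  \text{almost-sure winning} &\iff \exists \alpha\ \forall \varepsilon > 0 :\ \text{the outcome of $\alpha$ is $(1-\varepsilon)$-synchronizing},\\
  \text{limit-sure winning}  &\iff \forall \varepsilon > 0\ \exists \alpha :\ \text{the outcome of $\alpha$ is $(1-\varepsilon)$-synchronizing}.
\end{align*}

\end{document}

As the abstract's definitions make explicit, the only difference between almost-sure and limit-sure winning is the order of the quantifiers: almost-sure winning requires a single strategy that is (1-epsilon)-synchronizing for every epsilon > 0, whereas limit-sure winning allows a different strategy for each epsilon.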
author2 Doyen, Laurent
author_facet Doyen, Laurent
Shirmohammadi, Mahsa
author Shirmohammadi, Mahsa
author_sort Shirmohammadi, Mahsa
title Qualitative analysis of probabilistic synchronizing systems
title_short Qualitative analysis of probabilistic synchronizing systems
title_full Qualitative analysis of probabilistic synchronizing systems
title_fullStr Qualitative analysis of probabilistic synchronizing systems
title_full_unstemmed Qualitative analysis of probabilistic synchronizing systems
title_sort qualitative analysis of probabilistic synchronizing systems
publisher Universite Libre de Bruxelles
publishDate 2014
url http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209188
work_keys_str_mv AT shirmohammadimahsa qualitativeanalysisofprobabilisticsynchronizingsystems
AT shirmohammadimahsa analysequalitativedessystemesprobabilistessynchronisants
_version_ 1718628460479381504
spelling ndltd-ulb.ac.be-oai-dipot.ulb.ac.be-2013-209188 2018-04-11T17:33:31Z info:eu-repo/semantics/doctoralThesis info:ulb-repo/semantics/doctoralThesis info:ulb-repo/semantics/openurl/vlink-dissertation Qualitative analysis of probabilistic synchronizing systems Analyse qualitative des systèmes probabilistes synchronisants Shirmohammadi, Mahsa Doyen, Laurent Massart, Thierry Raskin, Jean-François Bérard, Béatrice Filiot, Emmanuel Katoen, Joost-Pieter Kiefer, Stefan Sproston, Jeremy Universite Libre de Bruxelles Université libre de Bruxelles, Faculté des Sciences – Informatique, Bruxelles 2014-12-10 en (abstract and subject headings as in the description and topic fields above) 1 v. (x, 192 p.) Doctorat en Sciences info:eu-repo/semantics/nonPublished local/bictel.ulb.ac.be:ULBetd-12092014-005606 local/ulbcat.ulb.ac.be:1086002 http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209188 No full-text files