Estimation of state-transition probability matrices in asynchronous population Markov processes

Bibliographic Details
Main Authors: Farahat, Waleed A. (Contributor), Asada, Harry (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Mechanical Engineering (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2013-02-06.
Description
Summary: We address the problem of estimating the probability transition matrix of an asynchronous vector Markov process from aggregate (longitudinal) population observations. This problem is motivated by estimating phenotypic state-transition probabilities in populations of biological cells, but can be extended to many contexts involving populations of Markovian agents. We adopt a Bayesian estimation approach, which can be computationally expensive if exact marginalization is employed. To compute the posterior estimates efficiently, we use Monte Carlo simulations coupled with Gibbs sampling techniques that explicitly incorporate sampling constraints from the desired distributions. Such sampling techniques can yield significant computational advantages. Illustration of the algorithm is provided via simulation examples.
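
The record above does not include the authors' implementation. As a minimal, hypothetical sketch of the kind of approach the summary describes, the Python code below estimates a transition matrix from aggregate state counts by Gibbs sampling: it alternates between imputing latent per-step transition tables that respect the observed aggregate counts (here via a simple rejection step, one of several possible constrained samplers) and drawing the rows of the transition matrix from their Dirichlet posteriors. All function names, priors, population sizes, and the rejection scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(P, n0, T):
    """Simulate aggregate state counts n_t for a population of agents
    evolving independently under a ground-truth transition matrix P."""
    counts = [np.array(n0)]
    for _ in range(T):
        nxt = np.zeros(len(n0), dtype=int)
        for i, n_i in enumerate(counts[-1]):
            nxt += rng.multinomial(n_i, P[i])
        counts.append(nxt)
    return counts

def sample_transition_table(P, n_from, n_to, max_tries=5000):
    """Rejection step (illustrative): draw per-row multinomial transition
    counts and accept only tables whose column sums match the observed
    next-step aggregate counts."""
    for _ in range(max_tries):
        Z = np.vstack([rng.multinomial(n_i, P[i]) for i, n_i in enumerate(n_from)])
        if np.array_equal(Z.sum(axis=0), n_to):
            return Z
    raise RuntimeError("rejection sampler failed; try a smaller population")

def gibbs_estimate(counts, n_states, n_sweeps=500, alpha=1.0):
    """Gibbs sampler alternating between latent transition tables Z_t
    (constrained by the aggregate counts) and the rows of P, each of
    which has a conjugate Dirichlet posterior given the imputed tables."""
    P = np.full((n_states, n_states), 1.0 / n_states)
    samples = []
    for _ in range(n_sweeps):
        # Impute one constrained transition table per time step.
        tables = [sample_transition_table(P, counts[t], counts[t + 1])
                  for t in range(len(counts) - 1)]
        totals = np.sum(tables, axis=0)
        # Conjugate update: row i of P ~ Dirichlet(alpha + transition counts from state i).
        P = np.vstack([rng.dirichlet(alpha + totals[i]) for i in range(n_states)])
        samples.append(P)
    return np.mean(samples[len(samples) // 2:], axis=0)  # discard burn-in

if __name__ == "__main__":
    P_true = np.array([[0.8, 0.2],
                       [0.3, 0.7]])
    counts = simulate_counts(P_true, n0=[15, 15], T=20)
    P_hat = gibbs_estimate(counts, n_states=2)
    print("true:\n", P_true, "\nestimated:\n", np.round(P_hat, 2))
```

The rejection step is only practical for small populations and state spaces; the paper's contribution, as summarized above, concerns sampling schemes that incorporate such constraints far more efficiently.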