Summary: | In recent years Boolean Networks (BNs) and Probabilistic Boolean Networks
(PBNs) have become popular paradigms for modeling gene regulation. A PBN is a
collection of BNs in which the gene state vector transitions according to the rules
of one of the constituent BNs, and the choice of network is governed by a selection
distribution.
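The transition mechanism just described can be illustrated with a toy example. The two-gene update rules and selection probabilities below are hypothetical, chosen only to make the sketch self-contained:

```python
import random

# Hypothetical two-gene PBN: two constituent BNs, each mapping the
# current state vector to the next one.
bn1 = lambda s: (s[0] and s[1], not s[0])   # constituent BN 1
bn2 = lambda s: (s[0] or s[1], s[0])        # constituent BN 2
networks = [bn1, bn2]
selection_probs = [0.7, 0.3]                # selection distribution

def pbn_step(state, rng=random):
    """One PBN transition: draw a constituent BN according to the
    selection distribution, then apply its update rules."""
    bn = rng.choices(networks, weights=selection_probs, k=1)[0]
    return tuple(bool(b) for b in bn(state))

state = (True, False)
for _ in range(5):
    state = pbn_step(state)
```

Because the BN is re-drawn at each step, the state sequence is a Markov chain over the 2^n gene states, which is what makes the control machinery discussed below applicable.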
Intervention in the context of PBNs was first proposed with the objective of avoiding
undesirable states, such as those associated with a disease. The early methods of
intervention were ad hoc, relying on concepts such as the mean first passage time and
alteration of the rule-based structure. Since then, the problem has been recognized and posed as
one of optimal control of a Markov network, where the objective is to find optimal
strategies for manipulating external control variables to guide the network away from
the set of undesirable states and toward the set of desirable states. This development
made it possible to use the elegant theory of Markov decision processes (MDPs) to
solve an array of problems in the area of control of gene regulatory networks, the
main theme of this work.
We first introduce the optimal control problem in the context of PBN models
and review our solution using the dynamic programming approach. We next discuss
the case in which the network state is not directly observable but measurements that
are probabilistically related to the underlying state are available.
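The dynamic programming solution operates on the controlled Markov chain induced by the PBN. A minimal finite-horizon sketch is given below; the two-state transition matrices, per-stage costs, and terminal penalties are hypothetical stand-ins for the quantities derived from an actual network:

```python
import numpy as np

# Hypothetical 2-state, 2-action controlled Markov chain.
# P[a][i, j]: probability of moving from state i to j under control a.
P = [np.array([[0.9, 0.1],
               [0.4, 0.6]]),       # control off
     np.array([[0.5, 0.5],
               [0.1, 0.9]])]       # control on
cost = np.array([[1.0, 5.0],       # cost[a, i]: per-stage cost of
                 [2.0, 6.0]])      # applying control a in state i
terminal = np.array([0.0, 10.0])   # terminal penalty per state
horizon = 5

# Backward dynamic programming over the finite horizon.
J = terminal.copy()
policy = []
for t in range(horizon):
    Q = np.array([cost[a] + P[a] @ J for a in range(2)])  # Q[a, i]
    policy.append(Q.argmin(axis=0))  # optimal control per state
    J = Q.min(axis=0)                # optimal cost-to-go
policy.reverse()                     # policy[t][i]: control at stage t
```

The backward recursion is the standard finite-horizon MDP solution; the partially observed case mentioned above replaces the state with a belief distribution over states, but the recursion has the same shape.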
We then address the issue of terminal penalty assignment, taking into account long-term prospective behavior and the special attractor structure of these networks.
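Assigning terminal penalties based on attractor structure presupposes that the attractors have been identified; for a deterministic BN they can be enumerated by following each trajectory until a state repeats. A minimal sketch, using a hypothetical two-gene network and a hypothetical rule that marks attractors with gene 0 ON as undesirable:

```python
def bn_update(state):
    # Hypothetical two-gene Boolean network.
    return (not state[1], not state[0])

def find_attractor(state):
    """Iterate from `state` until a state repeats; the cycle reached
    is the attractor of that trajectory."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = bn_update(state)
    cycle = tuple(seen[seen.index(state):])
    i = cycle.index(min(cycle))   # rotate to a canonical start so
    return cycle[i:] + cycle[:i]  # identical cycles compare equal

# Enumerate attractors over the whole state space.
attractors = {find_attractor((a, b))
              for a in (False, True) for b in (False, True)}

# Penalize attractors containing a state with gene 0 ON
# (hypothetically disease-associated) more heavily.
terminal_penalty = {att: (5.0 if any(s[0] for s in att) else 0.0)
                    for att in attractors}
```

Exhaustive enumeration is only feasible for small networks; the point of the sketch is the mapping from attractor membership to terminal penalty, not the search itself.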
We finally discuss our recent work on optimal intervention for a family
of BNs. Here we consider simultaneously controlling a set of Boolean models that
satisfy the constraints imposed by the underlying biology and by the data. This situation
arises when the data are assumed to be obtained by sampling the steady state of
the real biological network.
|