Mutual information based measures on complex interdependent networks of neuro data sets

Bibliographic Details
Main Author: Abdul Razak, Fatimah
Other Authors: Jensen, Henrik ; Christensen, Kim
Published: Imperial College London 2013
Subjects: 510
Online Access: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.576003
Description
Summary: We assume that even the simplest model of the brain is nonlinear and 'causal'. Proceeding from the first assumption, we need a measure able to capture nonlinearity; hence Mutual Information, whose variants include Transfer Entropy, is chosen. The second assumption of 'causality' is defined in relation to prediction, à la Granger causality. Both assumptions lead us to Transfer Entropy. We take the simplest case of Transfer Entropy, redefine it for our purpose of detecting causal lag, and proceed with a systematic investigation of this quantity. We start with the Ising model and then move on to an amended Ising model in which we attempt to replicate 'causality'. We do the same for a toy model that can be treated analytically, so that simulations can be compared with theoretical values. Lastly, we tackle a very interesting EEG data set, where Transfer Entropy is applied to different frequency bands to reveal a possible emergent property of 'causality' and to detect candidate causal lags in the data sets.
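The idea of scanning Transfer Entropy over source delays to detect a causal lag, as described in the summary, can be illustrated with a minimal plug-in estimator for discrete time series. This is a generic sketch of the standard formula T(X→Y) = Σ p(y_{t+1}, y_t, x) log[p(y_{t+1}|y_t, x)/p(y_{t+1}|y_t)], not the thesis's redefined estimator; the function name and the synthetic copy-with-delay test series are illustrative assumptions.

```python
# Sketch: plug-in Transfer Entropy for discrete series, scanned over source delays.
# Not the thesis's exact redefinition -- a generic illustration of the technique.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y, delay=1):
    """Estimate T(X -> Y) in bits, with the source term taken at the given delay."""
    triples = Counter()   # counts of (y_{t+1}, y_t, x_{t-delay+1})
    pairs_yx = Counter()  # counts of (y_t, x_{t-delay+1})
    pairs_yy = Counter()  # counts of (y_{t+1}, y_t)
    singles = Counter()   # counts of y_t
    n = 0
    for t in range(delay - 1, len(y) - 1):
        x0 = x[t - delay + 1]
        triples[(y[t + 1], y[t], x0)] += 1
        pairs_yx[(y[t], x0)] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles[y[t]] += 1
        n += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p(y1 | y0, x0) / p(y1 | y0), estimated from the counts above
        te += (c / n) * log2((c / pairs_yx[(y0, x0)]) /
                             (pairs_yy[(y1, y0)] / singles[y0]))
    return te

# Synthetic check: y copies x with a lag of 3 steps, so the TE scan
# should peak at delay = 3 (the 'causal lag' being detected).
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0, 0, 0] + x[:-3]
scores = {d: transfer_entropy(x, y, delay=d) for d in range(1, 6)}
best = max(scores, key=scores.get)
print(best)  # expected to peak at delay 3 for this synthetic series
```

Because y_{t+1} is fully determined by the source term at the correct delay, the estimate there approaches the one-bit entropy of the driving coin flips, while mismatched delays stay near zero; this is the signature used to nominate a causal-lag candidate.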