A Risk Profile for Information Fusion Algorithms

E.T. Jaynes, originator of the maximum entropy interpretation of statistical mechanics, emphasized that there is an inevitable trade-off between the conflicting requirements of robustness and accuracy for any inferencing algorithm. This is because robustness requires discarding information in order to reduce sensitivity to outliers. The principle of nonlinear statistical coupling, which is an interpretation of the Tsallis entropy generalization, can be used to quantify this trade-off. The coupled surprisal, -ln_κ(p) ≡ -(p^κ - 1)/κ, is a generalization of the Shannon surprisal, or logarithmic scoring rule, given a forecast p of a true event by an inferencing algorithm. The coupling parameter κ = 1 - q, where q is the Tsallis entropy index, is the degree of nonlinear coupling between statistical states. Positive (negative) values of nonlinear coupling decrease (increase) the surprisal information metric and thereby bias the risk in favor of decisive (robust) algorithms relative to the Shannon surprisal (κ = 0). We show that translating the average coupled surprisal to an effective probability is equivalent to using the generalized mean of the true-event probabilities as a scoring rule. The metric is used to assess the robustness, accuracy, and decisiveness of a fusion algorithm. We use a two-parameter fusion algorithm to combine input probabilities from N sources: the generalized-mean parameter α varies the degree of smoothing, and raising to a power N^β, with β between 0 and 1, provides a model of correlation.
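
A minimal sketch in Python of the quantities quoted in the abstract: the coupled surprisal, the equivalent generalized-mean scoring rule, and one possible reading of the two-parameter fusion rule. The function names, the normalization (omitted here), and the exact form of the N^β correlation adjustment are illustrative assumptions, not the authors' implementation.

    # Sketch based on the formulas quoted in the abstract; names and details are
    # illustrative, not the authors' code.
    import math

    def coupled_surprisal(p, kappa):
        # -ln_kappa(p) = -(p**kappa - 1)/kappa; recovers Shannon surprisal -ln(p) as kappa -> 0.
        if kappa == 0.0:
            return -math.log(p)
        return -(p ** kappa - 1.0) / kappa

    def effective_probability(true_event_probs, kappa):
        # Translate the average coupled surprisal back to a probability. Per the
        # abstract, this equals the generalized (power) mean of the reported
        # probabilities of the true events, with exponent kappa.
        n = len(true_event_probs)
        avg = sum(coupled_surprisal(p, kappa) for p in true_event_probs) / n
        if kappa == 0.0:
            return math.exp(-avg)                      # geometric mean
        return (1.0 - kappa * avg) ** (1.0 / kappa)    # power mean with exponent kappa

    def fuse(source_probs, alpha, beta):
        # Assumed two-parameter fusion: generalized mean with exponent alpha over the
        # N source probabilities, raised to the power N**beta as a correlation model
        # (beta = 0: fully correlated sources; alpha -> 0 with beta = 1: product of
        # probabilities, i.e., independence). Normalization over competing hypotheses
        # is omitted in this sketch.
        n = len(source_probs)
        if alpha == 0.0:
            gmean = math.exp(sum(math.log(p) for p in source_probs) / n)
        else:
            gmean = (sum(p ** alpha for p in source_probs) / n) ** (1.0 / alpha)
        return gmean ** (n ** beta)

    # Example: scoring three forecasts of events that actually occurred.
    forecasts = [0.9, 0.6, 0.2]
    for kappa in (-0.5, 0.0, 0.5):   # robust, neutral (Shannon), decisive
        print(kappa, round(effective_probability(forecasts, kappa), 4))

Positive κ raises the effective probability (more forgiving of the poor 0.2 forecast, favoring decisive algorithms), while negative κ lowers it (penalizing overconfidence, favoring robust algorithms), consistent with the abstract.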

Bibliographic Details
Main Authors: Kenric P. Nelson, Herbert Landau, Brian J. Scannell
Format: Article
Language: English
Published: MDPI AG, 2011-08-01
Series: Entropy, Vol. 13, No. 8, pp. 1518-1532
ISSN: 1099-4300
DOI: 10.3390/e13081518
Subjects: Tsallis entropy; proper scoring rules; information fusion; machine learning
Online Access: http://www.mdpi.com/1099-4300/13/8/1518/