Internal and collective interpretation for improving human interpretability of multi-layered neural networks
The present paper proposes a new type of information-theoretic method for interpreting the inference mechanism of neural networks. We interpret the internal inference mechanism directly, without any external aids such as symbolic or fuzzy rules. In addition, we make the interpretation process as stable as possible: we interpret the inference mechanism by considering all internal representations created under different conditions and input patterns. To make this internal interpretation possible, we compress multi-layered neural networks into the simplest networks, namely, networks without hidden layers. The information naturally lost in the course of compression is then compensated for by introducing a mutual information augmentation component. The method was applied to two data sets, the glass data set and the pregnancy data set. On both data sets, the information augmentation and compression methods improved generalization performance. In addition, the compressed, or collective, weights obtained from the multi-layered networks tended to be, ironically, similar to the linear correlation coefficients between inputs and targets, while conventional methods such as logistic regression analysis failed to produce such weights.
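To make the compression step concrete, here is a minimal sketch in Python/NumPy of one plausible reading of it, not the paper's actual procedure: the layer weight matrices of a trained multi-layered network are composed into a single input-to-output matrix of collective weights, which can then be set beside the linear input-target correlation coefficients mentioned above. The 9-7-5-1 network shape, the random stand-in weights and data, and the composition-by-matrix-product step are all illustrative assumptions; a faithful compression must also account for the nonlinear activations, which is precisely where the information loss addressed by the mutual information augmentation component arises.

```python
# Minimal sketch (NOT the paper's exact procedure): collapse a trained
# multi-layered network into a network without hidden layers by composing
# its weight matrices, then compare the resulting "collective" weights with
# the linear correlation coefficients between inputs and targets.
# All names, shapes, and values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for a data set such as the glass data set:
# 100 patterns, 9 inputs, 1 target.
X = rng.normal(size=(100, 9))
t = X @ rng.normal(size=9) + 0.1 * rng.normal(size=100)

# Weights of a hypothetical trained 9-7-5-1 network (biases and
# activation functions omitted for simplicity; the nonlinearities are
# exactly where real compression loses information).
W1 = rng.normal(size=(9, 7))
W2 = rng.normal(size=(7, 5))
W3 = rng.normal(size=(5, 1))

# Collective weights: a single 9x1 matrix connecting inputs directly
# to the output, obtained by composing the layer-to-layer weights.
W_collective = W1 @ W2 @ W3

# Linear correlation coefficient between each input and the target,
# against which the paper compares the compressed weights.
corr = np.array([np.corrcoef(X[:, i], t)[0, 1] for i in range(X.shape[1])])

print("collective weights:       ", W_collective.ravel().round(3))
print("input-target correlations:", corr.round(3))
```

The final comparison mirrors the paper's reported finding: after compression (and, in the paper, mutual information augmentation), the collective weights can be read much like correlation coefficients, giving a direct, human-interpretable summary of what the multi-layered network has learned.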
Main Author: | Ryotaro Kamimura (Kumamoto Drone Technology and Development Foundation; IT Education Center, Tokai University) |
---|---|
Format: | Article |
Language: | English |
Published: | Universitas Ahmad Dahlan, 2019-10-01 |
Series: | IJAIN (International Journal of Advances in Intelligent Informatics), vol. 5, no. 3, pp. 179-192 |
ISSN: | 2442-6571; 2548-3161 |
DOI: | 10.26555/ijain.v5i3.420 |
Subjects: | mutual information; internal interpretation; collective interpretation; inference mechanism; generalization |
Online Access: | http://ijain.org/index.php/IJAIN/article/view/420 |
Record ID: | doaj-ddc09d66cbac49f1be1c3ff2cfeda755 |