The Research of Fault Tolerant network

Bibliographic Details
Main Authors: Buh-Yun Sher, 佘步雲
Other Authors: 謝文雄
Format: Others
Language: zh-TW
Published: 1999
Online Access: http://ndltd.ncl.edu.tw/handle/23845349462125597540
id ndltd-TW-087NSYSU442002
record_format oai_dc
spelling ndltd-TW-087NSYSU4420022016-07-11T04:13:19Z http://ndltd.ncl.edu.tw/handle/23845349462125597540 The Research of Fault Tolerant network 容錯類神經網路之研究 Buh-Yun Sher 佘步雲 Doctorate National Sun Yat-sen University Department of Electrical Engineering 87 An important property of neural networks is that they appear to function well in the presence of faults. Indeed, an examination of biological neural networks in nature suggests a very dramatic fault-tolerance capability. A fault-tolerant system has the property that, under certain circumstances, it can tolerate faults or damage and continue to function. Fault tolerance is one of the key performance measures of artificial neural networks (ANNs), and is often viewed as an inherent feature of ANNs. Without careful design, however, the degree of fault tolerance cannot be guaranteed. In this thesis, fault models are first developed for neural networks. Fault models are an essential aid in determining the reliability of a neural network system: they describe the types of faults and where and how they occur in the system. However, they become more difficult to formulate realistically as the system is viewed at an increasingly abstract level. It is shown how sensible fault locations can be defined in a neural network viewed at the abstract level, and the complex problem of detailing the faults is then approached. Next, an extensive study of the fault-tolerance properties of feedforward neural networks is undertaken. We propose a constraint backpropagation (CBP) training method, which can guarantee a high degree of fault tolerance when one or two hidden nodes fail. To achieve fault tolerance, we define an energy term, called the constraint energy, that measures the performance degradation when some hidden nodes fail. During training, both the normal energy and the constraint energy are minimized.
We also develop a technique called output node saturation (ONS); by incorporating ONS into CBP, we can find a network that maintains exactly the same performance as the fault-free network when some hidden nodes fail. In our training method, finding a proper set of weights is an optimization problem, usually defined as the minimization of a scalar cost function of a number of variables. When the variables are not constrained by an inequality or equality relationship, the optimization is said to be unconstrained. In this thesis a feedforward network is considered, and the cost function is the global error in the weight space, i.e., the mean squared error between the network outputs and the desired training outputs. We call this the normal energy term, in contrast with the constraint energy term defined later. We formulate the task of finding a fault-tolerant neural network as a constrained optimization problem: the variables (weights) are not constrained by an inequality or equality relationship; instead, they are constrained by another cost function, the constraint energy. During training, both energy terms are minimized. Experimental results show that a network trained by CBP achieves better fault tolerance and better generalization than one trained by normal backpropagation, with less computing overhead. 謝文雄 1999 學位論文 ; thesis 0 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Doctorate === National Sun Yat-sen University === Department of Electrical Engineering === 87 === An important property of neural networks is that they appear to function well in the presence of faults. Indeed, an examination of biological neural networks in nature suggests a very dramatic fault-tolerance capability. A fault-tolerant system has the property that, under certain circumstances, it can tolerate faults or damage and continue to function. Fault tolerance is one of the key performance measures of artificial neural networks (ANNs), and is often viewed as an inherent feature of ANNs. Without careful design, however, the degree of fault tolerance cannot be guaranteed. In this thesis, fault models are first developed for neural networks. Fault models are an essential aid in determining the reliability of a neural network system: they describe the types of faults and where and how they occur in the system. However, they become more difficult to formulate realistically as the system is viewed at an increasingly abstract level. It is shown how sensible fault locations can be defined in a neural network viewed at the abstract level, and the complex problem of detailing the faults is then approached. Next, an extensive study of the fault-tolerance properties of feedforward neural networks is undertaken. We propose a constraint backpropagation (CBP) training method, which can guarantee a high degree of fault tolerance when one or two hidden nodes fail. To achieve fault tolerance, we define an energy term, called the constraint energy, that measures the performance degradation when some hidden nodes fail. During training, both the normal energy and the constraint energy are minimized. We also develop a technique called output node saturation (ONS); by incorporating ONS into CBP, we can find a network that maintains exactly the same performance as the fault-free network when some hidden nodes fail.
In our training method, finding a proper set of weights is an optimization problem, usually defined as the minimization of a scalar cost function of a number of variables. When the variables are not constrained by an inequality or equality relationship, the optimization is said to be unconstrained. In this thesis a feedforward network is considered, and the cost function is the global error in the weight space, i.e., the mean squared error between the network outputs and the desired training outputs. We call this the normal energy term, in contrast with the constraint energy term defined later. We formulate the task of finding a fault-tolerant neural network as a constrained optimization problem: the variables (weights) are not constrained by an inequality or equality relationship; instead, they are constrained by another cost function, the constraint energy. During training, both energy terms are minimized. Experimental results show that a network trained by CBP achieves better fault tolerance and better generalization than one trained by normal backpropagation, with less computing overhead.
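The abstract's core idea — minimizing the normal error together with a constraint energy measured on fault-injected copies of the network — can be sketched as follows. This is a minimal illustration only, under assumed details not stated in the record: a single tanh hidden layer, a stuck-at-zero fault model for hidden nodes, a constraint energy taken as the mean error over all single-node failures, and a hypothetical weighting factor `lam`; the thesis's exact formulation may differ.

```python
import numpy as np

def forward(W1, W2, X, dead=None):
    """One-hidden-layer network; optionally zero hidden node `dead`
    to simulate a stuck-at-zero fault (assumed fault model)."""
    H = np.tanh(X @ W1)
    if dead is not None:
        H = H.copy()
        H[:, dead] = 0.0          # the failed node contributes nothing
    return H @ W2

def combined_energy(W1, W2, X, Y, lam=1.0):
    """Normal energy (fault-free MSE) plus `lam` times an assumed
    constraint energy: the mean MSE over all single-hidden-node faults."""
    E_normal = np.mean((forward(W1, W2, X) - Y) ** 2)
    fault_errors = [np.mean((forward(W1, W2, X, dead=j) - Y) ** 2)
                    for j in range(W1.shape[1])]
    return E_normal + lam * np.mean(fault_errors)
```

Minimizing `combined_energy` (e.g. by gradient descent over `W1` and `W2`) penalizes weight configurations whose performance collapses when a hidden node fails; `lam` trades off fault-free accuracy against degradation under faults. Both the fault model and the averaging over single faults are illustrative assumptions, not necessarily the thesis's definitions.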
author2 謝文雄
author_facet 謝文雄
Buh-Yun Sher
佘步雲
author Buh-Yun Sher
佘步雲
spellingShingle Buh-Yun Sher
佘步雲
The Research of Fault Tolerant network
author_sort Buh-Yun Sher
title The Research of Fault Tolerant network
title_short The Research of Fault Tolerant network
title_full The Research of Fault Tolerant network
title_fullStr The Research of Fault Tolerant network
title_full_unstemmed The Research of Fault Tolerant network
title_sort research of fault tolerant network
publishDate 1999
url http://ndltd.ncl.edu.tw/handle/23845349462125597540
work_keys_str_mv AT buhyunsher theresearchoffaulttolerantnetwork
AT shébùyún theresearchoffaulttolerantnetwork
AT buhyunsher róngcuòlèishénjīngwǎnglùzhīyánjiū
AT shébùyún róngcuòlèishénjīngwǎnglùzhīyánjiū
AT buhyunsher researchoffaulttolerantnetwork
AT shébùyún researchoffaulttolerantnetwork
_version_ 1718342452942733312