Neural networks. A general framework for non-linear function approximation
| Main Author: | |
| --- | --- |
| Format: | Others |
| Language: | de |
| Published: | Wiley-Blackwell, 2006 |
| Online Access: | http://epub.wu.ac.at/5493/1/NeuralNetworks.pdf http://dx.doi.org/10.1111/j.1467-9671.2006.01010.x |
Summary: The focus of this paper is on the neural network modelling approach that has gained increasing recognition in GIScience in recent years. The novelty of neural networks lies in their ability to model non-linear processes with few, if any, a priori assumptions about the nature of the data-generating process. The paper discusses some important issues that are central to successful application development. The scope is limited to feedforward neural networks, the leading example of neural networks. It is argued that failures in applications can usually be attributed to inadequate learning and/or inadequate complexity of the network model. Parameter estimation and a suitably chosen number of hidden units are thus of crucial importance for the success of real-world neural network applications. The paper views network learning as an optimization problem, reviews two alternative approaches to network learning, and provides insights into current best practice for optimizing complexity so as to perform well on generalization tasks.
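
The summary describes feedforward networks as non-linear function approximators whose success hinges on parameter estimation (learning cast as an optimization problem) and on a suitably chosen number of hidden units. The sketch below is not taken from the paper; it is a minimal illustration of these two ideas: a one-hidden-layer feedforward network fitted by plain gradient descent to synthetic data, with `n_hidden` standing in for the complexity parameter. The data, function names, and optimizer choice are assumptions made for illustration only.

```python
# Minimal sketch (not from the paper): a single-hidden-layer feedforward
# network used as a non-linear function approximator. Parameter estimation
# is posed as an optimization problem (plain batch gradient descent on
# squared error), and n_hidden is the complexity knob the summary refers to.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-linear regression task (illustrative only).
x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

def init_params(n_hidden):
    """Random initialization of the weights of a one-hidden-layer network."""
    return {
        "W1": rng.standard_normal((1, n_hidden)) * 0.5,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, 1)) * 0.5,
        "b2": np.zeros(1),
    }

def forward(params, x):
    """Forward pass: tanh hidden layer, linear output unit."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h, h @ params["W2"] + params["b2"]

def train(params, x, y, lr=0.05, epochs=5000):
    """Batch gradient descent on mean squared error."""
    n = x.shape[0]
    for _ in range(epochs):
        h, y_hat = forward(params, x)
        err = y_hat - y                        # residual; MSE gradient up to a constant
        # Backpropagate through output and hidden layers.
        grad_W2 = h.T @ err / n
        grad_b2 = err.mean(axis=0)
        dh = (err @ params["W2"].T) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
        grad_W1 = x.T @ dh / n
        grad_b1 = dh.mean(axis=0)
        for key, g in [("W1", grad_W1), ("b1", grad_b1),
                       ("W2", grad_W2), ("b2", grad_b2)]:
            params[key] -= lr * g
    return params

# Too few hidden units underfit; too many risk fitting the noise.
for n_hidden in (2, 10, 50):
    params = train(init_params(n_hidden), x, y)
    _, y_hat = forward(params, x)
    mse = float(np.mean((y_hat - y) ** 2))
    print(f"hidden units = {n_hidden:2d}  training MSE = {mse:.4f}")
```

Gradient descent is only one possible learning strategy; the paper reviews alternative optimization approaches and discusses how to choose network complexity with generalization, rather than training error, as the criterion, which this sketch does not cover.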