Learning optimal linear filters for edge detection
Main Author: | |
Language: | English |
Published: | University of British Columbia, 2010 |
Online Access: | http://hdl.handle.net/2429/30347 |
Summary: | Edge detection is important both for its practical applications in computer vision and for its relationship to early processing in the visual cortex. We describe experiments in which the back-propagation learning algorithm was used to learn sets of linear filters that determine the orientation and location of edges to sub-pixel accuracy. A model of edge formation was used to generate novel input-output pairs for each iteration of the training process. The desired output included the interpolated location and orientation of the edge. The linear filters that result from this optimization closely resemble the oriented Gabor or derivative-of-Gaussian filters found in primary visual cortex. In addition, the edge detection results appear to be superior to existing standard edge detectors and may prove to be of considerable practical value in computer vision. |
Faculty: | Science |
Department: | Computer Science |
Degree level: | Graduate |
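The training loop described in the summary (a generative edge model producing a fresh input-output pair on every iteration, with linear filters fit by gradient descent) can be sketched as follows. The patch size, sigmoid blur model, learning rate, and output encoding below are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7  # patch side length (assumed; the thesis's patch size is not given here)

def make_edge_patch(theta, offset, blur=1.0):
    """Synthetic step edge: signed distance from each pixel to a line
    through the patch centre (shifted by a sub-pixel offset), softened
    by a sigmoid to mimic blur.  A stand-in for the edge-formation model."""
    ys, xs = np.mgrid[:N, :N] - (N - 1) / 2.0
    d = xs * np.cos(theta) + ys * np.sin(theta) - offset
    return 1.0 / (1.0 + np.exp(-d / blur))

# One linear filter (weight vector) per output unit.
# Outputs encode [cos(theta), sin(theta), offset]; the polarity of the
# step edge makes the full 0..2*pi orientation range recoverable.
W = rng.normal(0.0, 0.01, size=(3, N * N))
b = np.zeros(3)
lr = 0.05

for step in range(20000):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    offset = rng.uniform(-1.0, 1.0)        # sub-pixel edge position
    x = make_edge_patch(theta, offset).ravel()
    target = np.array([np.cos(theta), np.sin(theta), offset])
    pred = W @ x + b
    err = pred - target                    # gradient of 0.5 * ||err||^2
    W -= lr * np.outer(err, x)             # SGD on each novel training pair
    b -= lr * err

# Read out orientation and sub-pixel offset for a fresh edge.
x = make_edge_patch(0.6, 0.25).ravel()
c, s, off = W @ x + b
theta_hat = np.arctan2(s, c) % (2.0 * np.pi)
```

Because a new synthetic pair is drawn every iteration, the model never sees the same patch twice and cannot memorize a fixed training set; the learned rows of `W`, reshaped to `N x N`, are the oriented filters the abstract compares to Gabor and derivative-of-Gaussian receptive fields.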