On the Regression Function Estimation by Walsh Series

Bibliographic Details
Main Authors: Cheng-Hwang Perng, 彭成煌
Other Authors: Tien-Wen Chen
Format: Others
Language: zh-TW
Published: 1994
Online Access: http://ndltd.ncl.edu.tw/handle/25981535172721019538
id ndltd-TW-082TKU00479015
record_format oai_dc
spelling ndltd-TW-082TKU00479015 2016-02-08T04:06:32Z http://ndltd.ncl.edu.tw/handle/25981535172721019538 On the Regression Function Estimation by Walsh Series 利用Walsh級數估計迴歸函數 Cheng-Hwang Perng 彭成煌 Master's, Tamkang University, Department of Mathematics, 82 Tien-Wen Chen 陳天文 1994 thesis 63 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === Tamkang University === Department of Mathematics === 82 === When the probability density function of the population is unknown, we must estimate it from the observed sample data. Many estimation methods exist for the probability density function, for example orthogonal series, kernel, and nearest-neighbor methods. Among these, Cencov (1962) first proposed estimating the probability density function by orthogonal series. Later, Greblicki and Pawlak (1985) applied the orthogonal series method to regression function estimation. Assume that (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) are independent and identically distributed copies of the random pair (X, Y). Let f(x) be the unknown probability density function of X and h(x, y) the joint probability density function of (X, Y). Suppose that E|Y| < ∞, and let the regression function of Y on X = x be r(x) = E[Y | X = x] = g(x)/f(x), where g(x) = ∫ y h(x, y) dy. Nadaraya and Watson (1964) first proposed estimating r(x) by r̂_n(x) = ĝ_n(x)/f̂_n(x). In this thesis we estimate the regression function by Walsh series, where X ∈ [0, 1) and Y ∈ ℝ, and we further assume that f(x), g(x) ∈ L². Thus r̂_n(x) = Σ_{j=1}^n Y_j K_N(x, X_j) / Σ_{j=1}^n K_N(x, X_j), where K_N(·,·) is the Walsh kernel, that is, K_N(x, X_j) = Σ_{k=0}^{N−1} ψ_k(x) ψ_k(X_j) = Σ_{k=0}^{N−1} ψ_k(x ⊕ X_j), ψ_k(·) being the kth Walsh function and ⊕ dyadic addition. Under the conditions (i) N(n) → ∞ as n → ∞, (ii) N(n)²/n → 0 as n → ∞, and (iii) |Y| ≤ C_r < ∞, we show in Chapter 3 that (1) r̂_n(x) converges in probability to r(x) for every x ∈ [0, 1), and (2) r̂_n(x) is mean squared error consistent, that is, lim_{n→∞} E[(r̂_n(x) − r(x))²] = 0. Furthermore, the rate of convergence of the mean squared error of r̂_n(x) is obtained.
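The construction above translates directly into a short program. The following Python sketch is not from the thesis; it is a minimal illustration of the estimator under the stated assumptions (X ∈ [0, 1), |Y| bounded), using Walsh functions in Paley ordering, simulated data, and an arbitrary truncation point N = 16; the function and variable names (walsh, r_hat, etc.) are ours.

```python
import numpy as np

def walsh(k, x, n_bits=30):
    """k-th Walsh function (Paley ordering) at x in [0, 1).

    psi_k(x) = (-1)^(sum_i k_i * x_{i+1}), where k = sum_i k_i 2^i and
    x = 0.x_1 x_2 ... in binary (truncated to n_bits digits).
    """
    frac = int(np.floor(x * (1 << n_bits)))            # first n_bits binary digits of x
    parity = 0
    for i in range(n_bits):
        if (k >> i) & 1:                               # digit k_i of k
            parity ^= (frac >> (n_bits - 1 - i)) & 1   # digit x_{i+1} of x
    return 1.0 if parity == 0 else -1.0

def r_hat(x, X, Y, N, n_bits=30):
    """Walsh-series regression estimate
       r_n(x) = sum_j Y_j K_N(x, X_j) / sum_j K_N(x, X_j),
       with Walsh kernel K_N(u, v) = sum_{k<N} psi_k(u) psi_k(v)."""
    psi_x = np.array([walsh(k, x, n_bits) for k in range(N)])                  # shape (N,)
    psi_X = np.array([[walsh(k, xj, n_bits) for k in range(N)] for xj in X])   # shape (n, N)
    weights = psi_X @ psi_x            # K_N(x, X_j) for each observation j
    denom = weights.sum()
    return np.nan if denom == 0 else float(weights @ Y) / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.uniform(0.0, 1.0, n)                          # X in [0, 1)
    Y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.2, n)   # bounded signal plus noise
    N = 16                                                # truncation: N -> inf, N^2/n -> 0
    for x0 in (0.1, 0.3, 0.7):
        print(f"x = {x0}: estimate {r_hat(x0, X, Y, N):+.3f}, true {np.sin(2 * np.pi * x0):+.3f}")
```

For dyadic N = 2^m the Walsh kernel K_N(x, y) equals 2^m when x and y lie in the same dyadic interval of length 2^{-m} and 0 otherwise, so the estimate is a local average of the Y_j near x; conditions (i) and (ii) then play the role of the usual bandwidth conditions in kernel regression.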
author2 Tien-Wen Chen
author_facet Tien-Wen Chen
Cheng-Hwang Perng
彭成煌
author Cheng-Hwang Perng
彭成煌
spellingShingle Cheng-Hwang Perng
彭成煌
On the Regression Function Estimation by Walsh Series
author_sort Cheng-Hwang Perng
title On the Regression Function Estimation by Walsh Series
title_short On the Regression Function Estimation by Walsh Series
title_full On the Regression Function Estimation by Walsh Series
title_fullStr On the Regression Function Estimation by Walsh Series
title_full_unstemmed On the Regression Function Estimation by Walsh Series
title_sort on the regression function estimation by walsh series
publishDate 1994
url http://ndltd.ncl.edu.tw/handle/25981535172721019538
work_keys_str_mv AT chenghwangperng ontheregressionfunctionestimationbywalshseries
AT péngchénghuáng ontheregressionfunctionestimationbywalshseries
AT chenghwangperng lìyòngwalshjíshùgūjìhuíguīhánshù
AT péngchénghuáng lìyòngwalshjíshùgūjìhuíguīhánshù
_version_ 1718182646675144704