Generalized high-dimensional trace regression via nuclear norm regularization

Bibliographic Details
Main Authors: Fan, J. (Author), Gong, W. (Author), Zhu, Z. (Author)
Format: Article
Language: English
Published: Elsevier Ltd 2019
Subjects: High-dimensional statistics; Investments; Logistic regression; Matrix algebra; Matrix completion; Nuclear norm regularization; Probability density function; Regression analysis; Restricted strong convexity; Trace regression
Online Access: View Fulltext in Publisher
LEADER 02336nam a2200349Ia 4500
001 10.1016-j.jeconom.2019.04.026
008 220511s2019 CNT 000 0 eng d
022 |a 0304-4076 (ISSN) 
245 1 0 |a Generalized high-dimensional trace regression via nuclear norm regularization 
260 0 |b Elsevier Ltd  |c 2019 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1016/j.jeconom.2019.04.026 
520 3 |a We study the generalized trace regression with a near low-rank regression coefficient matrix, which extends the notion of sparsity for regression coefficient vectors. Specifically, given a matrix covariate X, the probability density function of the response Y is f(Y|X) = c(Y)exp(ϕ⁻¹(−Yη∗ + b(η∗))), where η∗ = tr(Θ∗ᵀX). This model accommodates various types of responses and embraces many important problem setups, such as reduced-rank regression, matrix regression with a panel of regressors, and matrix completion, among others. We estimate Θ∗ by minimizing the empirical negative log-likelihood plus a nuclear norm penalty. We first establish a general theory and then, for each specific problem, derive explicitly the statistical rate of the proposed estimator; these rates all match the minimax rates of linear trace regression up to logarithmic factors. Numerical studies confirm the established rates and demonstrate the advantage of generalized trace regression over linear trace regression when the response is dichotomous. We also show the benefit of incorporating nuclear norm regularization in dynamic stock return prediction and in image classification. © 2019 Elsevier B.V. 
650 0 4 |a High-dimensional statistics 
650 0 4 |a Investments 
650 0 4 |a Logistic regression 
650 0 4 |a Matrix algebra 
650 0 4 |a Matrix completion 
650 0 4 |a Nuclear norm regularization 
650 0 4 |a Probability density function 
650 0 4 |a Regression analysis 
650 0 4 |a Restricted strong convexity 
650 0 4 |a Trace regression 
700 1 |a Fan, J.  |e author 
700 1 |a Gong, W.  |e author 
700 1 |a Zhu, Z.  |e author 
773 |t Journal of Econometrics
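
For concreteness, a minimal sketch of the estimation procedure the abstract describes, specialized to a dichotomous response (logistic trace regression) and solved by proximal gradient descent with singular value soft-thresholding, a standard solver for nuclear-norm-penalized objectives. This is an illustration under assumed conventions (logistic link with mean 1/(1+exp(−η)), fixed step size and iteration count), not the authors' implementation; all function names here are hypothetical.

import numpy as np

def svt(M, tau):
    # Singular value soft-thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fit_logistic_trace(X, y, lam, step=0.1, n_iter=500):
    # Minimize (1/n) * sum_i NLL(Theta; X_i, y_i) + lam * ||Theta||_* by proximal
    # gradient. X has shape (n, d1, d2) (matrix covariates); y has entries in {0, 1}.
    n, d1, d2 = X.shape
    Theta = np.zeros((d1, d2))
    for _ in range(n_iter):
        eta = np.einsum('ijk,jk->i', X, Theta)        # eta_i = tr(Theta^T X_i)
        p = 1.0 / (1.0 + np.exp(-eta))                # model mean under the logistic link
        grad = np.einsum('i,ijk->jk', p - y, X) / n   # gradient of the average NLL
        Theta = svt(Theta - step * grad, step * lam)  # gradient step, then prox step
    return Theta

# Illustrative usage on synthetic data with a rank-2 target matrix.
rng = np.random.default_rng(0)
n, d1, d2 = 500, 10, 10
Theta_star = rng.standard_normal((d1, 2)) @ rng.standard_normal((2, d2))
X = rng.standard_normal((n, d1, d2))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-np.einsum('ijk,jk->i', X, Theta_star)))).astype(float)
Theta_hat = fit_logistic_trace(X, y, lam=0.05)

Each iteration costs one SVD of a d1 × d2 matrix. In practice the penalty level lam would be tuned (e.g., by cross-validation) and the step size chosen by backtracking rather than fixed, choices the paper's theory and experiments treat more carefully.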