Summary: In this paper, we propose a novel feature selection model based on subspace learning with a large margin principle. First, we present a new margin metric defined by a given instance and its nearest missing and nearest hit, i.e., its nearest neighbors with a different label and with the same label, respectively. Specifically, for a given instance, the margin is the ratio of the distance to the nearest missing to the distance to the nearest hit, rather than the difference of the two distances; this yields a better-balanced measure, since the distance to the nearest missing is usually much larger than that to the nearest hit. The proposed model seeks a subspace in which this margin metric is maximized. Moreover, since the nearest neighbors of a given sample are uncertain in the presence of many irrelevant features, we treat them as hidden variables and estimate the expected margin. To perform feature selection, an $\ell_{2,1}$-norm penalty is imposed on the subspace projection matrix to enforce row sparsity. The resulting trace ratio optimization problem, which can be connected to a nonlinear eigenvalue problem, is hard to solve; thus, we design an efficient iterative algorithm and present a theoretical analysis of its convergence. Finally, we evaluate the proposed method against several state-of-the-art methods. Extensive experiments on real-world datasets demonstrate the superiority of the proposed approach.
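To make the two central ingredients of the abstract concrete, the following is a minimal sketch of the ratio-based margin and the $\ell_{2,1}$ row-sparsity penalty. The names `ratio_margin`, `l21_norm`, `W`, `nearest_hit`, and `nearest_miss` are illustrative assumptions, not the paper's notation, and Euclidean distance in the projected subspace is assumed.

```python
import numpy as np

def ratio_margin(x, nearest_hit, nearest_miss, W):
    """Ratio-based margin of instance x in the subspace given by W.

    Hypothetical notation: W is a d-by-k projection matrix; nearest_hit
    and nearest_miss are the nearest neighbors of x with the same and a
    different label, respectively. As described in the abstract, the
    margin is the ratio of the projected distances, not their difference.
    """
    d_miss = np.linalg.norm(W.T @ (x - nearest_miss))
    d_hit = np.linalg.norm(W.T @ (x - nearest_hit))
    return d_miss / d_hit  # > 1 means the miss lies farther away than the hit

def l21_norm(W):
    """Row-wise l2,1-norm: the sum of the l2 norms of W's rows.

    Penalizing this quantity drives entire rows of W toward zero, so the
    corresponding original features are discarded (feature selection).
    """
    return np.sum(np.linalg.norm(W, axis=1))
```

Because each row of `W` weights one original feature across all subspace dimensions, zeroing a row removes that feature everywhere at once, which is why the $\ell_{2,1}$-norm (rather than a plain entrywise norm) is the natural sparsity penalty here.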