A Robust Method to Measure the Global Feature Importance of Complex Prediction Models
As machine learning is widely applied across domains, interpreting the internal mechanisms and predictive results of models is crucial for the further application of complex machine learning models. However, the interpretability of complex machine learning models on biased data remains a diffic...
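The abstract is truncated before the method is described. As general background on global feature importance, the sketch below shows permutation-based importance, a standard technique in this area and not necessarily the authors' method; the dataset, model, and metric are illustrative assumptions.

```python
# Minimal sketch of permutation-based global feature importance.
# This is a common baseline technique, NOT the method proposed in the paper;
# the dataset, model, and metric below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Permute one feature column to break its link with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    permuted = accuracy_score(y_test, model.predict(X_perm))
    # The drop in accuracy is that feature's global importance score.
    importances.append(baseline - permuted)

# Report the five most important features.
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: importance {importances[j]:.4f}")
```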
| Main Authors: | Xiaohang Zhang, Ling Wu, Zhengren Li, Huayuan Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/9314116/ |
Similar Items

- Measuring Feature Importance of Convolutional Neural Networks
  by: Xiaohang Zhang, et al.
  Published: (2020-01-01)
- Identifying Node Importance in a Complex Network Based on Node Bridging Feature
  by: Lincheng Jiang, et al.
  Published: (2018-10-01)
- Supervised Learning via Unsupervised Sparse Autoencoder
  by: Jianran Liu, et al.
  Published: (2018-01-01)
- On the Interpretability of Machine Learning Models and Experimental Feature Selection in Case of Multicollinear Data
  by: Franc Drobnič, et al.
  Published: (2020-05-01)
- Investigating Health-Related Features and Their Impact on the Prediction of Diabetes Using Machine Learning
  by: Hafiz Farooq Ahmad, et al.
  Published: (2021-01-01)