An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack
As the amount of data and computational power increase explosively, valuable results are being created using machine learning techniques. In particular, models based on deep neural networks have shown remarkable performance in various domains. At the same time, the development of neural network models has raised privacy concerns. Recently, as privacy breach attacks on the training datasets of neural network models have been proposed, research on privacy-preserving neural networks has been conducted. Among the privacy-preserving approaches, differential privacy provides a strict privacy guarantee, and various differentially private mechanisms have been studied for neural network models. However, it is not clear how appropriate privacy parameters should be chosen, considering both the model's performance and the degree of privacy guarantee. In this paper, we study how to set appropriate privacy parameters that preserve differential privacy based on resistance to privacy breach attacks on neural networks. In particular, we focus on the model inversion attack against neural network models and study how to apply differential privacy as a countermeasure to this attack while retaining the utility of the model. To quantify resistance to the model inversion attack, we introduce a new attack performance metric that leverages a deep learning model instead of a survey-based approach, and capture the relationship between attack probability and the degree of privacy guarantee.
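The differentially private learning the abstract refers to is typically realized with the DP-SGD pattern: clip each per-example gradient, average, and add Gaussian noise whose scale trades off against the privacy budget ε. The sketch below is illustrative only, not the paper's method; `dp_gradient_step`, `clip_norm`, and `noise_multiplier` are hypothetical names chosen for clarity.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD-style update: clip each per-example gradient to
    clip_norm, average the clipped gradients, add Gaussian noise with
    standard deviation noise_multiplier * clip_norm / batch_size, and
    take a gradient step. Less noise corresponds to a larger epsilon,
    i.e. a weaker privacy guarantee."""
    batch_size = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: two per-example gradients, one noisy update step.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.0, 0.0, 2.0])]
params = dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                          lr=0.1, params=np.zeros(3), rng=rng)
```

Choosing `noise_multiplier` (and hence ε) by measuring resistance to a concrete attack, rather than by convention, is the question the paper addresses.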
Main Authors: | Cheolhee Park, Dowon Hong, Changho Seo |
---|---|
Format: | Article |
Language: | English |
Published: |
IEEE
2019-01-01
|
Series: | IEEE Access |
Subjects: | Differential privacy; differentially private learning; model inversion attack; privacy-preserving neural network |
Online Access: | https://ieeexplore.ieee.org/document/8822435/ |
id |
doaj-ef9b8eebef23449798f034b431bcb3e3 |
---|---|
record_format |
Article |
spelling |
doaj-ef9b8eebef23449798f034b431bcb3e3 | 2021-03-29T23:15:35Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2019-01-01 | vol. 7, pp. 124988-124999 | DOI 10.1109/ACCESS.2019.2938759 | article 8822435
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack
Cheolhee Park (https://orcid.org/0000-0002-3637-9951), Department of Mathematics, Kongju National University, Gongju, South Korea
Dowon Hong (https://orcid.org/0000-0001-9690-5055), Department of Mathematics, Kongju National University, Gongju, South Korea
Changho Seo (https://orcid.org/0000-0002-0779-3539), Department of Convergence Science, Kongju National University, Gongju, South Korea
As the amount of data and computational power increase explosively, valuable results are being created using machine learning techniques. In particular, models based on deep neural networks have shown remarkable performance in various domains. At the same time, the development of neural network models has raised privacy concerns. Recently, as privacy breach attacks on the training datasets of neural network models have been proposed, research on privacy-preserving neural networks has been conducted. Among the privacy-preserving approaches, differential privacy provides a strict privacy guarantee, and various differentially private mechanisms have been studied for neural network models. However, it is not clear how appropriate privacy parameters should be chosen, considering both the model's performance and the degree of privacy guarantee. In this paper, we study how to set appropriate privacy parameters that preserve differential privacy based on resistance to privacy breach attacks on neural networks. In particular, we focus on the model inversion attack against neural network models and study how to apply differential privacy as a countermeasure to this attack while retaining the utility of the model. To quantify resistance to the model inversion attack, we introduce a new attack performance metric that leverages a deep learning model instead of a survey-based approach, and capture the relationship between attack probability and the degree of privacy guarantee.
https://ieeexplore.ieee.org/document/8822435/
Differential privacy; differentially private learning; model inversion attack; privacy-preserving neural network |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Cheolhee Park; Dowon Hong; Changho Seo |
spellingShingle |
Cheolhee Park
Dowon Hong
Changho Seo
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack
IEEE Access
Differential privacy
differentially private learning
model inversion attack
privacy-preserving neural network |
author_facet |
Cheolhee Park; Dowon Hong; Changho Seo |
author_sort |
Cheolhee Park |
title |
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack |
title_short |
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack |
title_full |
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack |
title_fullStr |
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack |
title_full_unstemmed |
An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack |
title_sort |
attack-based evaluation method for differentially private learning against model inversion attack |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
As the amount of data and computational power increase explosively, valuable results are being created using machine learning techniques. In particular, models based on deep neural networks have shown remarkable performance in various domains. At the same time, the development of neural network models has raised privacy concerns. Recently, as privacy breach attacks on the training datasets of neural network models have been proposed, research on privacy-preserving neural networks has been conducted. Among the privacy-preserving approaches, differential privacy provides a strict privacy guarantee, and various differentially private mechanisms have been studied for neural network models. However, it is not clear how appropriate privacy parameters should be chosen, considering both the model's performance and the degree of privacy guarantee. In this paper, we study how to set appropriate privacy parameters that preserve differential privacy based on resistance to privacy breach attacks on neural networks. In particular, we focus on the model inversion attack against neural network models and study how to apply differential privacy as a countermeasure to this attack while retaining the utility of the model. To quantify resistance to the model inversion attack, we introduce a new attack performance metric that leverages a deep learning model instead of a survey-based approach, and capture the relationship between attack probability and the degree of privacy guarantee. |
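The model inversion attack the description targets can be sketched as gradient ascent on the input: starting from a blank input, the attacker repeatedly nudges it to raise the model's confidence for a target class, recovering something resembling the training data. The toy below uses a single logistic unit as a stand-in model; `invert_model`, `w`, and `b` are hypothetical names, and the paper's actual attack setup and models are not specified here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def invert_model(w, b, steps=200, lr=0.5):
    """Reconstruct an input the model assigns high confidence for the
    target class, by gradient ascent on log-confidence with respect to
    the input -- the core idea of a model inversion attack."""
    x = np.zeros_like(w)              # start from a blank input
    for _ in range(steps):
        p = sigmoid(w @ x + b)        # model confidence for the target class
        x += lr * (1.0 - p) * w       # gradient of log p with respect to x
    return x

# Toy "model": a single logistic unit with known weights.
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x_rec = invert_model(w, b)
# The reconstruction is pushed along w until the confidence saturates.
```

Noisier (more strongly private) training blurs the gradients such an attacker exploits, which is why attack success rate can serve as an empirical yardstick for choosing the privacy parameters.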
topic |
Differential privacy
differentially private learning
model inversion attack
privacy-preserving neural network |
url |
https://ieeexplore.ieee.org/document/8822435/ |
work_keys_str_mv |
AT cheolheepark anattackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack AT dowonhong anattackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack AT changhoseo anattackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack AT cheolheepark attackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack AT dowonhong attackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack AT changhoseo attackbasedevaluationmethodfordifferentiallyprivatelearningagainstmodelinversionattack |
_version_ |
1724189938906300416 |