A Differential Privacy Framework for Collaborative Filtering
Focusing on the privacy issues in recommender systems, we propose a framework containing two perturbation methods for differentially private collaborative filtering to prevent the threat of inference attacks against users. To conceal individual ratings while still providing valuable predictions, we consider several representative algorithms for calculating predicted scores and give specific solutions for adding Laplace noise. The DPI (Differentially Private Input) method perturbs the original ratings and can be followed by any recommendation algorithm. By contrast, the DPM (Differentially Private Manner) method operates on the original ratings, perturbing the measurements computed during execution of the algorithms and releasing only the predicted scores. The experimental results show that both methods provide valuable predictions while guaranteeing differential privacy, suggesting that the framework is a feasible and competent solution for private recommendation.
Main Authors: | Jing Yang, Xiaoye Li, Zhenlong Sun, Jianpei Zhang |
---|---|
Affiliation: | College of Computer Science and Technology, Harbin Engineering University, Harbin, Heilongjiang 150001, China |
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2019-01-01 |
Series: | Mathematical Problems in Engineering |
ISSN: | 1024-123X, 1563-5147 |
Online Access: | http://dx.doi.org/10.1155/2019/1460234 |
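
The abstract only summarizes the two perturbation styles, so below is a minimal, illustrative sketch of how each might look in practice; it is not the authors' implementation. The function names (`dpi_perturb_ratings`, `dpm_item_average`), the 1-5 rating scale, and the sensitivity choices (the full rating range for DPI, range/n for a released item average under bounded neighbors in DPM) are all assumptions made for illustration.

```python
import numpy as np

RATING_MIN, RATING_MAX = 1.0, 5.0   # assumed rating scale (not stated in the record)


def dpi_perturb_ratings(ratings, epsilon):
    """DPI-style sketch: add Laplace noise to each observed rating up front,
    so any downstream recommendation algorithm runs on already-private inputs.

    Assumes ratings are bounded in [RATING_MIN, RATING_MAX], giving a
    per-rating sensitivity of RATING_MAX - RATING_MIN.
    """
    scale = (RATING_MAX - RATING_MIN) / epsilon              # Laplace scale b = Δf / ε
    noise = np.random.laplace(0.0, scale, size=ratings.shape)
    return np.clip(ratings + noise, RATING_MIN, RATING_MAX)  # clip back to valid range


def dpm_item_average(item_ratings, epsilon):
    """DPM-style sketch: leave the raw ratings untouched and instead perturb a
    measurement computed during prediction -- here an item's average rating --
    before it is released as (part of) a predicted score.

    With a fixed number of ratings n and neighboring datasets that differ in
    one rating value, the average changes by at most (RATING_MAX - RATING_MIN) / n.
    """
    n = len(item_ratings)
    sensitivity = (RATING_MAX - RATING_MIN) / n
    noisy_avg = np.mean(item_ratings) + np.random.laplace(0.0, sensitivity / epsilon)
    return float(np.clip(noisy_avg, RATING_MIN, RATING_MAX))


if __name__ == "__main__":
    ratings = np.array([[4.0, 3.0, 5.0],
                        [2.0, 5.0, 1.0]])
    print(dpi_perturb_ratings(ratings, epsilon=1.0))    # noisy input matrix (DPI)
    print(dpm_item_average(ratings[:, 0], epsilon=1.0)) # noisy released score (DPM)
```

In both sketches the privacy cost is governed by the Laplace scale Δf/ε; DPI pays it once on the inputs and is then free to run any algorithm, while a DPM-style approach must account for every perturbed measurement it releases.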