Interaction is Necessary for Distributed Learning with Privacy or Communication Constraints

Local differential privacy (LDP) is a model in which users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model the protocol consists of a single round in which the server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest. Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction, this lower bound implies an exponential lower bound for stochastic convex optimization and, specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer questions posed by Smith, Thakurta, and Upadhyay (IEEE Symposium on Security and Privacy 2017) and by Daniely and Feldman (NeurIPS 2019). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large-margin hyperplane. These lower bounds also hold in the model where communication from each user is limited, and they follow from a lower bound on learning using non-adaptive statistical queries.
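A minimal sketch, not taken from the paper, of the single-round structure the abstract refers to: each user applies a local randomizer to their own data exactly once, and the server receives all privatized responses in one round, with no query depending on earlier answers. The randomizer below is standard binary randomized response with privacy parameter eps; the function names (randomize_bit, non_interactive_round, debiased_mean) are illustrative, not from the article.

import math
import random

def randomize_bit(bit, eps):
    # Binary randomized response: report the true bit with probability
    # e^eps / (e^eps + 1), otherwise report its flip; this satisfies eps-LDP.
    p_true = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def non_interactive_round(user_bits, eps):
    # Single round: the server sends the same randomizer specification to
    # every user and collects all privatized responses at once.
    return [randomize_bit(b, eps) for b in user_bits]

def debiased_mean(responses, eps):
    # Unbiased estimate of the true mean recovered from the noisy reports.
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return (sum(responses) / len(responses) - (1 - p)) / (2 * p - 1)

For example, non_interactive_round([1, 0, 1, 1], eps=1.0) returns four privatized bits whose debiased mean estimates the fraction of ones. The paper's lower bounds show that no protocol restricted to this one-round form can learn large-margin linear separators without exponentially many samples.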


Bibliographic Details
Main Authors: Yuval Dagan (MIT), Vitaly Feldman (Apple, USA)
Format: Article
Language: English
Published: Labor Dynamics Institute, 2021-09-01
Series: The Journal of Privacy and Confidentiality
ISSN: 2575-8527
Subjects: Local Differential Privacy; Distributed Learning; Interactive Protocol; Communication-constrained Learning; Statistical Queries
Online Access: https://journalprivacyconfidentiality.org/index.php/jpc/article/view/781