DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning
Traditional Recommender Systems (RS) use central servers to collect user data, compute user profiles, and train global recommendation models. Centralised computation of RS models performs well because the models are trained on all of the available information and the full user profiles...
Main Authors: | Bichen Shi, Elias Z. Tragos, Makbule Gulcin Ozsoy, Ruihai Dong, Neil Hurley, Barry Smyth, Aonghus Lawlor |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Recommender systems; reinforcement learning; distributed learning; click through ratio |
Online Access: | https://ieeexplore.ieee.org/document/9448142/ |
id |
doaj-2c91b62aeae84cf5a29d7ebb4ac6d23e |
record_format |
Article |
spelling |
doaj-2c91b62aeae84cf5a29d7ebb4ac6d23e | 2021-06-14T23:00:37Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2021-01-01 | vol. 9, pp. 83340-83354 | DOI 10.1109/ACCESS.2021.3087406 | article 9448142
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning
Bichen Shi; Elias Z. Tragos (https://orcid.org/0000-0001-9566-531X); Makbule Gulcin Ozsoy (https://orcid.org/0000-0001-6013-1668); Ruihai Dong (https://orcid.org/0000-0002-2509-1370); Neil Hurley; Barry Smyth; Aonghus Lawlor (https://orcid.org/0000-0002-6160-4639). All authors: Insight Centre for Data Analytics, University College Dublin, Dublin 4, Ireland.
Traditional Recommender Systems (RS) use central servers to collect user data, compute user profiles, and train global recommendation models. Centralised computation of RS models performs well because the models are trained on all of the available information and the full user profiles. However, centralised RSs require users to share their entire interaction history with the central server and, in general, do not scale as the number of users and interactions grows. Centralised RSs also present a single point of attack with respect to user privacy, because all user profiles and interactions are stored centrally. In this work we propose DARES, a distributed recommender system algorithm that uses reinforcement learning and is based on the asynchronous advantage actor-critic (A3C) model. DARES combines the approaches of A3C and federated learning (FL), allowing users to keep their data locally on their own devices.
The system architecture consists of (i) a local recommendation model trained on each user's device using their own interactions and (ii) a global recommendation model trained on a central server using the model updates computed on the user devices. We evaluate the proposed algorithm on well-known datasets and compare its performance against state-of-the-art algorithms. We show that, despite being distributed and asynchronous, it achieves comparable and in many cases better performance than current state-of-the-art algorithms.
https://ieeexplore.ieee.org/document/9448142/
Keywords: Recommender systems; reinforcement learning; distributed learning; click through ratio |
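The local/global split described in the abstract can be illustrated with a minimal sketch: each simulated user device trains on its own private data and pushes only gradient updates to a shared global model, A3C-style (asynchronous, lock-protected applies, stale gradients tolerated). All names, shapes, and the least-squares objective here are illustrative assumptions, not the paper's actual model or API.

```python
import threading
import numpy as np

class GlobalModel:
    """Server-side model that only ever sees device-computed updates."""
    def __init__(self, dim):
        self.weights = np.zeros(dim)
        self.lock = threading.Lock()

    def apply_update(self, grad, lr=0.1):
        # Asynchronously apply one device's gradient update.
        with self.lock:
            self.weights -= lr * grad

    def snapshot(self):
        with self.lock:
            return self.weights.copy()

def device_worker(global_model, local_interactions, steps=50):
    """Runs on a (simulated) user device; raw data never leaves it."""
    x, y = local_interactions            # private interaction data
    for _ in range(steps):
        w = global_model.snapshot()      # pull current global weights
        pred = x @ w
        grad = 2 * x.T @ (pred - y) / len(y)  # local gradient only
        global_model.apply_update(grad)  # push the update, not the data

gm = GlobalModel(dim=3)
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
threads = []
for _ in range(4):  # four simulated user devices
    x = rng.normal(size=(32, 3))
    y = x @ true_w
    t = threading.Thread(target=device_worker, args=(gm, (x, y)))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print(np.round(gm.weights, 2))
```

Despite the workers racing and occasionally applying slightly stale gradients, the shared weights converge to the common solution, which is the property the asynchronous A3C-plus-FL design relies on.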
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Bichen Shi; Elias Z. Tragos; Makbule Gulcin Ozsoy; Ruihai Dong; Neil Hurley; Barry Smyth; Aonghus Lawlor |
spellingShingle |
Bichen Shi; Elias Z. Tragos; Makbule Gulcin Ozsoy; Ruihai Dong; Neil Hurley; Barry Smyth; Aonghus Lawlor. DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning. IEEE Access. Recommender systems; reinforcement learning; distributed learning; click through ratio |
author_facet |
Bichen Shi; Elias Z. Tragos; Makbule Gulcin Ozsoy; Ruihai Dong; Neil Hurley; Barry Smyth; Aonghus Lawlor |
author_sort |
Bichen Shi |
title |
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning |
title_short |
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning |
title_full |
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning |
title_fullStr |
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning |
title_full_unstemmed |
DARES: An Asynchronous Distributed Recommender System Using Deep Reinforcement Learning |
title_sort |
dares: an asynchronous distributed recommender system using deep reinforcement learning |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2021-01-01 |
description |
Traditional Recommender Systems (RS) use central servers to collect user data, compute user profiles, and train global recommendation models. Centralised computation of RS models performs well because the models are trained on all of the available information and the full user profiles. However, centralised RSs require users to share their entire interaction history with the central server and, in general, do not scale as the number of users and interactions grows. Centralised RSs also present a single point of attack with respect to user privacy, because all user profiles and interactions are stored centrally. In this work we propose DARES, a distributed recommender system algorithm that uses reinforcement learning and is based on the asynchronous advantage actor-critic (A3C) model. DARES combines the approaches of A3C and federated learning (FL), allowing users to keep their data locally on their own devices. The system architecture consists of (i) a local recommendation model trained on each user's device using their own interactions and (ii) a global recommendation model trained on a central server using the model updates computed on the user devices. We evaluate the proposed algorithm on well-known datasets and compare its performance against state-of-the-art algorithms. We show that, despite being distributed and asynchronous, it achieves comparable and in many cases better performance than current state-of-the-art algorithms. |
topic |
Recommender systems; reinforcement learning; distributed learning; click through ratio |
url |
https://ieeexplore.ieee.org/document/9448142/ |
work_keys_str_mv |
AT bichenshi daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT eliasztragos daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT makbulegulcinozsoy daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT ruihaidong daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT neilhurley daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT barrysmyth daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning AT aonghuslawlor daresanasynchronousdistributedrecommendersystemusingdeepreinforcementlearning |
_version_ |
1721377775850356736 |