Summary: | Traditional Recommender Systems (RS) use central servers to collect user data, compute user profiles, and train global recommendation models. Centralised training of RS models yields strong performance because the models are trained on all available information and the full user profiles. However, centralised RSs require users to share their entire interaction history with the central server and generally do not scale as the number of users and interactions grows. Centralised RSs also present a single point of attack with respect to user privacy, because all user profiles and interactions are stored centrally. In this work we propose DARES, a distributed recommender system algorithm that uses reinforcement learning and is based on the asynchronous advantage actor-critic (A3C) model. DARES combines the approaches of A3C and federated learning (FL) and allows users to keep their data locally on their own devices. The system architecture consists of (i) a local recommendation model trained on each user's device using their interactions and (ii) a global recommendation model trained on a central server using the model updates computed on the user devices. We evaluate the proposed algorithm on well-known datasets and compare its performance against state-of-the-art algorithms. We show that despite being distributed and asynchronous, DARES achieves comparable and in many cases better performance than current state-of-the-art algorithms.
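The two-tier update flow described in the summary (on-device training, server-side aggregation) can be sketched as follows. This is a minimal illustrative toy, not the DARES implementation: the function names, the scalar model, and the "gradient" (pulling a parameter toward the mean of the user's local interaction signal) are all assumptions standing in for the actual A3C actor-critic update; only the overall pattern, clients computing deltas locally and the server averaging them into a global model, reflects the architecture described above.

```python
import random

def local_update(global_param, interactions, lr=0.1):
    # On-device step (stand-in for the local A3C actor-critic update):
    # compute a delta from the user's private interactions. Only the
    # delta leaves the device, never the raw interaction history.
    local_mean = sum(interactions) / len(interactions)
    return -lr * (global_param - local_mean)

def server_aggregate(global_param, deltas):
    # Federated-learning-style aggregation: average the client deltas
    # into the shared global recommendation model.
    return global_param + sum(deltas) / len(deltas)

random.seed(0)
global_param = 0.0
# Three hypothetical users, each with 20 private interaction values.
clients = [[random.gauss(mu, 1.0) for _ in range(20)] for mu in (1.0, 2.0, 3.0)]

for _ in range(50):
    deltas = [local_update(global_param, data) for data in clients]
    global_param = server_aggregate(global_param, deltas)

# The global model converges toward a consensus over all clients
# without any client ever sharing its raw data with the server.
```

In the real system the clients would train asynchronously, as in A3C, rather than in the synchronous rounds shown here; the synchronous loop is used only to keep the sketch short.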
|