Distributed Reinforcement Learning for Overlay Networks
In this thesis, we study Collaborative Reinforcement Learning (CRL) in the context of information retrieval in unstructured distributed systems. Collaborative reinforcement learning extends reinforcement learning to support multiple agents that both share value functions and cooperate to solve tasks. Specifically, we propose and develop an algorithm for searching in peer-to-peer systems using collaborative reinforcement learning. We present a search technique that achieves higher performance than currently available techniques, yet is straightforward and practical enough to be easily incorporated into existing systems. The approach is profitable because reinforcement learning methods search for good behaviors gradually over the lifetime of the learning peer. However, we must overcome the challenges posed by the fundamental partial observability inherent in distributed systems, which are highly dynamic and in which configuration changes are common practice. We also undertake a performance study of the effects that environment parameters, such as the number of peers, network traffic bandwidth, and partial behavioral knowledge from previous experience, have on the speed and reliability of learning. In the process, we show how CRL can be used to establish and maintain autonomic properties of decentralized distributed systems. This thesis is an empirical study of collaborative reinforcement learning; however, our results contribute to a broader understanding of learning strategies and the design of search policies in distributed systems. Our experimental results confirm the performance improvement of CRL in heterogeneous overlay networks over standard techniques such as random walking.
Main Author: | Mastour Eshgh, Somayeh Sadat |
---|---|
Format: | Others |
Language: | English |
Published: | KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011 |
Subjects: | Technology (Teknikvetenskap) |
Thesis Type: | Student thesis (bachelor thesis) |
Series: | Trita-ICT-EX ; 222 |
Access: | Open access (PDF) |
Online Access: | http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-92131 |
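The abstract describes learning where to forward search queries in a peer-to-peer overlay, with random walking as the baseline. As a purely illustrative sketch — the topology, reward, and all parameter values below are assumptions, not taken from the thesis — per-neighbour Q-values can be learned so that queries gradually find short routes to a target item, which a random walk only reaches by chance:

```python
import random

random.seed(0)

# Hypothetical overlay: peer -> neighbours. Peer 4 holds the target item.
NEIGHBOURS = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
TARGET = 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

# Q[peer][neighbour]: learned estimate of how useful that neighbour is
# for reaching the target from this peer.
Q = {p: {n: 0.0 for n in ns} for p, ns in NEIGHBOURS.items()}

def route_query(start, max_hops=20, learn=True):
    """Forward a query hop by hop; return hop count, or None on failure."""
    peer = start
    for hop in range(1, max_hops + 1):
        if peer == TARGET:
            return hop - 1
        ns = NEIGHBOURS[peer]
        if learn and random.random() < EPSILON:
            nxt = random.choice(ns)                   # explore
        else:
            nxt = max(ns, key=lambda n: Q[peer][n])   # exploit best estimate
        if learn:
            # Reward 1 on delivery, 0 otherwise; bootstrap from the next peer.
            reward = 1.0 if nxt == TARGET else 0.0
            backup = max(Q[nxt].values()) if Q[nxt] else 0.0
            Q[peer][nxt] += ALPHA * (reward + GAMMA * backup - Q[peer][nxt])
        peer = nxt
    return None

def random_walk(start, max_hops=20):
    """Baseline: forward the query to a uniformly random neighbour."""
    peer = start
    for hop in range(1, max_hops + 1):
        if peer == TARGET:
            return hop - 1
        peer = random.choice(NEIGHBOURS[peer])
    return None

# Train, then route greedily with the learned Q-values.
for _ in range(500):
    route_query(0)
learned = route_query(0, learn=False)
print("learned hops:", learned)  # the shortest route 0 -> 1/2 -> 3 -> 4 is 3 hops
```

This is single-agent Q-learning along the query path rather than the thesis's full CRL scheme; CRL additionally has neighbouring peers exchange value-function estimates, which the per-hop `backup` term only approximates locally.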