Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data

Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === 104 === Strategies for choosing nodes on a social network to maximize the total influence have been studied for decades. Studies have shown that the greedy algorithm is a competitive strategy, proven to cover at least 63% of the optimal spread. Here we propose a learning-based framework for influence maximization that aims to outperform the greedy algorithm in both coverage and efficiency. The proposed reinforcement learning framework, combined with a classification model, not only alleviates the need for labelled training data but also allows the influence maximization strategy to be developed gradually, eventually outperforming a basic greedy approach.
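The 63% figure in the abstract reflects the classical approximation guarantee for greedy maximization of a monotone submodular set function: if f denotes the expected influence spread and S* is an optimal seed set of size k, the greedy seed set satisfies

    f(S_{\mathrm{greedy}}) \;\ge\; \bigl(1 - 1/e\bigr)\, f(S^{\ast}) \;\approx\; 0.632\, f(S^{\ast}),

a bound due to Nemhauser, Wolsey, and Fisher (1978) and applied to influence maximization by Kempe, Kleinberg, and Tardos (2003).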
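As a minimal sketch of the greedy baseline the abstract compares against, the following Python snippet implements greedy hill-climbing with Monte Carlo spread estimation. The abstract does not specify a diffusion model, so the independent cascade (IC) model, the uniform activation probability p, the run count, and the toy graph below are illustrative assumptions, not the author's setup.

import random

def simulate_ic(graph, seeds, p=0.1):
    """One independent-cascade simulation (assumed model); returns the
    activated node set. graph: dict mapping node -> list of neighbours."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate
                # each inactive neighbour with probability p.
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def estimate_spread(graph, seeds, p=0.1, runs=200):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(len(simulate_ic(graph, seeds, p)) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.1, runs=200):
    """Greedy hill-climbing: repeatedly add the node with the largest
    marginal gain in estimated spread. Because expected spread is monotone
    and submodular under IC, this covers >= (1 - 1/e) of the optimum."""
    seeds = set()
    for _ in range(k):
        base = estimate_spread(graph, seeds, p, runs)
        best_node, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            gain = estimate_spread(graph, seeds | {v}, p, runs) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        seeds.add(best_node)
    return seeds

if __name__ == "__main__":
    g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}  # hypothetical toy graph
    print(greedy_im(g, 2))

Note the cost: each of the k rounds re-estimates the spread of every candidate seed set via `runs` simulations, i.e. roughly O(k * n * runs) cascades. That simulation burden is precisely the efficiency gap a learned selection strategy, such as the one this thesis proposes, aims to close.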

Bibliographic Details
Main Author: Yen-Hua Huang (黃彥樺)
Other Authors: Shou-de Lin
Format: Others
Language: en_US
Published: 2015
Online Access: http://ndltd.ncl.edu.tw/handle/24425905249677578391
id ndltd-TW-104NTU05641002
record_format oai_dc
spelling ndltd-TW-104NTU056410022017-06-03T04:41:37Z http://ndltd.ncl.edu.tw/handle/24425905249677578391 Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data 在未標記資料上利用增強式學習解決影響力最大化之問題 Yen-Hua Huang 黃彥樺 Master's National Taiwan University Graduate Institute of Networking and Multimedia 104 Strategies for choosing nodes on a social network to maximize the total influence have been studied for decades. Studies have shown that the greedy algorithm is a competitive strategy, proven to cover at least 63% of the optimal spread. Here we propose a learning-based framework for influence maximization that aims to outperform the greedy algorithm in both coverage and efficiency. The proposed reinforcement learning framework, combined with a classification model, not only alleviates the need for labelled training data but also allows the influence maximization strategy to be developed gradually, eventually outperforming a basic greedy approach. Shou-de Lin 林守德 2015 學位論文 ; thesis 39 en_US
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === 104 === Strategies for choosing nodes on a social network to maximize the total influence have been studied for decades. Studies have shown that the greedy algorithm is a competitive strategy, proven to cover at least 63% of the optimal spread. Here we propose a learning-based framework for influence maximization that aims to outperform the greedy algorithm in both coverage and efficiency. The proposed reinforcement learning framework, combined with a classification model, not only alleviates the need for labelled training data but also allows the influence maximization strategy to be developed gradually, eventually outperforming a basic greedy approach.
author2 Shou-de Lin
author_facet Shou-de Lin
Yen-Hua Huang
黃彥樺
author Yen-Hua Huang
黃彥樺
spellingShingle Yen-Hua Huang
黃彥樺
Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
author_sort Yen-Hua Huang
title Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
title_short Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
title_full Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
title_fullStr Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
title_full_unstemmed Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
title_sort exploiting reinforcement-learning for influence maximization without human-annotated data
publishDate 2015
url http://ndltd.ncl.edu.tw/handle/24425905249677578391
work_keys_str_mv AT yenhuahuang exploitingreinforcementlearningforinfluencemaximizationwithouthumanannotateddata
AT huángyànhuà exploitingreinforcementlearningforinfluencemaximizationwithouthumanannotateddata
AT yenhuahuang zàiwèibiāojìzīliàoshànglìyòngzēngqiángshìxuéxíjiějuéyǐngxiǎnglìzuìdàhuàzhīwèntí
AT huángyànhuà zàiwèibiāojìzīliàoshànglìyòngzēngqiángshìxuéxíjiějuéyǐngxiǎnglìzuìdàhuàzhīwèntí
_version_ 1718455078791151616