Improved Feature Learning: A Maximum-Average-Out Deep Neural Network for the Game Go
Computer game-playing programs based on deep reinforcement learning have surpassed even the best human players. However, the huge analysis space of such neural networks and their numerous parameters require extensive computing power. Hence, in this study, we aimed to increase network learning efficiency by modifying the neural network structure, which should reduce the number of learning iterations and the required computing power. A convolutional neural network with a maximum-average-out (MAO) unit, a structure based on piecewise functions, is proposed, through which features can be learned effectively and the expressive power of hidden-layer features can be enhanced. To verify the performance of the MAO structure, we compared it with the ResNet18 network by applying both to the AlphaGo Zero framework for the game of Go. The two network structures were trained from scratch in a low-cost server environment. The MAO network won eight out of ten games against the ResNet18 network. This superior performance is significant for the further development of game algorithms that require less computing power than those currently in use.
Main Authors: | Xiali Li, Zhengyu Lv, Bo Liu, Licheng Wu, Zheng Wang |
---|---|
Author Affiliations: | School of Information Engineering, Minzu University of China, Beijing 100081, China (Xiali Li, Zhengyu Lv, Bo Liu, Licheng Wu); Department of Management, Taiyuan Normal University, Shanxi 030619, China (Zheng Wang) |
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2020-01-01 |
Series: | Mathematical Problems in Engineering |
ISSN: | 1024-123X, 1563-5147 |
Online Access: | http://dx.doi.org/10.1155/2020/1397948 |
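The abstract describes the MAO unit only at a high level. As a rough illustration, the sketch below shows one plausible reading of a maximum-average-out unit in PyTorch: a convolution produces k feature maps per output channel, and each group of k maps is reduced by an element-wise, piecewise choice between its maximum and its average. The grouping factor k, the channel counts, and the use of torch.maximum are illustrative assumptions, not the authors' published specification.

```python
import torch
import torch.nn as nn


class MaxAvgOut(nn.Module):
    """Sketch of a maximum-average-out (MAO) style unit.

    The convolution produces out_channels * k feature maps; each group of k
    maps is collapsed to one map by taking, element-wise, the larger of the
    group maximum and the group average (a piecewise choice between a
    max-out and an average-out response). This is an interpretation of the
    abstract, not the paper's exact formulation.
    """

    def __init__(self, in_channels: int, out_channels: int, k: int = 2,
                 kernel_size: int = 3):
        super().__init__()
        self.out_channels = out_channels
        self.k = k
        self.conv = nn.Conv2d(in_channels, out_channels * k,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)                                # (N, C*k, H, W)
        n, _, h, w = y.shape
        y = y.view(n, self.out_channels, self.k, h, w)  # regroup the maps
        group_max = y.max(dim=2).values                 # max-out per group
        group_avg = y.mean(dim=2)                       # average-out per group
        return torch.maximum(group_max, group_avg)      # piecewise selection


if __name__ == "__main__":
    # Toy Go-sized input: batch of 1, 17 feature planes, 19x19 board
    # (plane count is an assumption; AlphaGo Zero uses 17 input planes).
    unit = MaxAvgOut(in_channels=17, out_channels=64)
    print(unit(torch.randn(1, 17, 19, 19)).shape)       # torch.Size([1, 64, 19, 19])
```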