A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing
Resource scheduling problems (RSPs) in cloud manufacturing (CMfg) often manifest as dynamic scheduling problems in which scheduling strategies depend on real-time environments and demands. Generally, multiple resources in the CMfg scheduling process cause difficulties in system modeling. To solve this problem, we propose Sharer, a deep reinforcement learning (DRL)-based method that converts scheduling problems with multiple resources into one learning target and learns effective strategies automatically. Our preliminary results show that Sharer is comparable to the latest heuristics, adapts to different conditions, converges quickly, and subsequently learns wise strategies.
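The abstract describes reformulating multi-resource scheduling as a single reinforcement-learning target. As an illustrative sketch only (this record does not reproduce the paper's actual Sharer implementation), the toy example below shows one common way such a formulation can be set up: the state combines per-resource load with the size of the next pending task, the action assigns that task to a resource, and the reward penalizes late completion. The environment, state features, and hyperparameters are assumptions for illustration, not the authors' design.

```python
# Illustrative sketch only: a toy linear Q-learning scheduler in the spirit of the
# DRL formulation the abstract describes. All names and numbers are assumptions.
import numpy as np

class ToySchedulingEnv:
    """Tasks arrive in a queue; each action assigns the head task to one of R resources.
    The reward is the negative completion time of that assignment (earlier is better)."""
    def __init__(self, num_resources=3, num_tasks=20, seed=0):
        self.rng = np.random.default_rng(seed)
        self.R = num_resources
        self.num_tasks = num_tasks
        self.reset()

    def reset(self):
        self.busy_until = np.zeros(self.R)            # time at which each resource frees up
        self.task_sizes = self.rng.uniform(1, 5, self.num_tasks)
        self.t = 0
        return self._state()

    def _state(self):
        # State: current load on each resource plus size of the next task (scaled for stability).
        next_size = self.task_sizes[self.t] if self.t < self.num_tasks else 0.0
        return np.concatenate([self.busy_until, [next_size]]) / 10.0

    def step(self, action):
        size = self.task_sizes[self.t]
        finish = self.busy_until[action] + size       # queue the task on the chosen resource
        self.busy_until[action] = finish
        self.t += 1
        done = self.t >= self.num_tasks
        return self._state(), -finish, done           # reward: later finish = larger penalty


def run_linear_q_learning(episodes=200, alpha=0.01, gamma=0.99, eps=0.1):
    """Linear Q(s, a) = w[a] . s trained with epsilon-greedy TD(0) updates."""
    env = ToySchedulingEnv()
    state_dim = env.R + 1
    w = np.zeros((env.R, state_dim))                  # one weight vector per action
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            q = w @ s
            a = np.random.randint(env.R) if np.random.rand() < eps else int(np.argmax(q))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * np.max(w @ s2))
            w[a] += alpha * (target - q[a]) * s       # semi-gradient TD update on the linear weights
            s = s2
    return w

if __name__ == "__main__":
    print("learned weights per action:\n", run_linear_q_learning())
```

In a full DRL treatment the linear Q-function would typically be replaced by a neural network and the toy queue by a real-time CMfg task stream, but the state/action/reward framing sketched here stays the same.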
Main Authors: | Huayu Zhu, Mengrong Li, Yong Tang, Yanfei Sun |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Cloud manufacturing; deep reinforcement learning; real-time; dynamic data; resource scheduling; optimization |
Online Access: | https://ieeexplore.ieee.org/document/8952684/ |
id |
doaj-78fac1c6e7584f9aaae6ea33984904cc |
record_format |
Article |
spelling |
doaj-78fac1c6e7584f9aaae6ea33984904cc | 2021-03-30T01:50:31Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2020-01-01 | vol. 8, pp. 9987-9997 | DOI 10.1109/ACCESS.2020.2964955 | article 8952684
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing
Huayu Zhu (https://orcid.org/0000-0002-0735-830X), School of Automation, Nanjing University of Posts and Telecommunications, Nanjing, China
Mengrong Li (https://orcid.org/0000-0003-3607-9761), School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China
Yong Tang (https://orcid.org/0000-0003-0776-7354), School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, China
Yanfei Sun (https://orcid.org/0000-0003-0085-1545), School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, China
Resource scheduling problems (RSPs) in cloud manufacturing (CMfg) often manifest as dynamic scheduling problems in which scheduling strategies depend on real-time environments and demands. Generally, multiple resources in the CMfg scheduling process cause difficulties in system modeling. To solve this problem, we propose Sharer, a deep reinforcement learning (DRL)-based method that converts scheduling problems with multiple resources into one learning target and learns effective strategies automatically. Our preliminary results show that Sharer is comparable to the latest heuristics, adapts to different conditions, converges quickly, and subsequently learns wise strategies.
https://ieeexplore.ieee.org/document/8952684/
Keywords: Cloud manufacturing; deep reinforcement learning; real-time; dynamic data; resource scheduling; optimization
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Huayu Zhu; Mengrong Li; Yong Tang; Yanfei Sun
spellingShingle |
Huayu Zhu; Mengrong Li; Yong Tang; Yanfei Sun
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing
IEEE Access
Cloud manufacturing; deep reinforcement learning; real-time; dynamic data; resource scheduling; optimization
author_facet |
Huayu Zhu; Mengrong Li; Yong Tang; Yanfei Sun
author_sort |
Huayu Zhu |
title |
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing |
title_short |
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing |
title_full |
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing |
title_fullStr |
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing |
title_full_unstemmed |
A Deep-Reinforcement-Learning-Based Optimization Approach for Real-Time Scheduling in Cloud Manufacturing |
title_sort |
deep-reinforcement-learning-based optimization approach for real-time scheduling in cloud manufacturing |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2020-01-01 |
description |
Resource scheduling problems (RSPs) in cloud manufacturing (CMfg) often manifest as dynamic scheduling problems in which scheduling strategies depend on real-time environments and demands. Generally, multiple resources in the CMfg scheduling process cause difficulties in system modeling. To solve this problem, we propose Sharer, a deep reinforcement learning (DRL)-based method that converts scheduling problems with multiple resources into one learning target and learns effective strategies automatically. Our preliminary results show that Sharer is comparable to the latest heuristics, adapts to different conditions, converges quickly, and subsequently learns wise strategies. |
topic |
Cloud manufacturing; deep reinforcement learning; real-time; dynamic data; resource scheduling; optimization
url |
https://ieeexplore.ieee.org/document/8952684/ |
work_keys_str_mv |
AT huayuzhu adeepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT mengrongli adeepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT yongtang adeepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT yanfeisun adeepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT huayuzhu deepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT mengrongli deepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT yongtang deepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
AT yanfeisun deepreinforcementlearningbasedoptimizationapproachforrealtimeschedulingincloudmanufacturing
_version_ |
1724186336123944960 |