Research on Design of Machine Dispatching Policy Using Reinforcement Learning

Master's === National Taiwan University === Graduate Institute of Electrical Engineering === 92 === Semiconductor fabrication is characterized by product variety, complex re-entrant flow, machine uncertainty, customer orientation, high capital investment, and short product life cycles. Effective methods for dispatching lots of different types so as to achieve production flexibility and on-time delivery still pose significant challenges to both researchers and practitioners. A machine incurs a setup time whenever it changes the type of lot it processes: on the one hand, setups are necessary to produce different lot types flexibly and on time; on the other hand, reducing the number of setups lowers the workload level and waiting time. Moreover, re-entrant flow makes lots of different types compete for the same machine, so machine capacity must be allocated effectively to achieve on-time delivery and a balanced production flow.

We studied two problems: dispatching on a single machine with setup times, and a machine with an adjustable service rate. The former trades off average waiting time against the number of setups; the latter trades off waiting cost against service cost. The challenges are choosing the next product type to process and choosing the timing for switching the service rate. Because the environment changes over time, the dispatching policy must be adjusted continuously.

We addressed these problems with Reinforcement Learning (RL), which interacts with the environment and finds a suitable policy through a reward function and a value function. Assuming the states have the Markov property, we formulated the dispatching problem as a continuous-time Markov Decision Process (MDP). For the single machine with setup times, we used Policy Iteration (PI) to find the optimal policy under stationary job arrivals, but PI cannot handle non-stationary problems or unknown system dynamics. We therefore adapted the Sarsa algorithm [RsA98] to our dispatching problem. Sarsa is an on-policy method that learns the value of the policy used to make decisions, and it is conceptually and computationally simple for solving an MDP without a model of the system dynamics. In the stationary case, RL matched 95% of the optimal policy given enough learning steps. We then applied RL to a non-stationary dispatching environment and compared it against a random policy: RL stabilized the average weighted waiting time while the random policy did not, increased throughput by 30%, and reduced the number of setup switches. This shows that RL can handle dispatching problems that PI cannot, although the learning speed is unsatisfactory and the optimal policy in the non-stationary environment is unknown. Nevertheless, when initialized with the clearing policy proposed by Kumar and Seidman (1991), RL achieves a lower average waiting time than the clearing policy itself.

For the adjustable-service-rate machine, we considered the trade-off between service cost and waiting cost and sought the right time to switch the service rate. We formulated this problem as an MDP as well and applied RL to solve it; the results show that RL needs about 10 million learning steps to find the optimal switching point. We studied how the switching point depends on parameters such as the arrival rate and the cost of the high service rate, and found that a higher arrival rate and a lower high-service-rate cost both move the switch to a smaller number of waiting jobs. Finally, RL initialized with prior knowledge learned about 1000 times faster than RL without it. Evaluation on a real dispatching environment remains to be done.
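To make the method concrete, the sketch below shows a tabular, on-policy Sarsa loop of the kind the abstract describes, applied to a single machine with setup times. Everything specific here is an assumption for illustration: two product types, Poisson arrivals, exponential service, a queue cap, and a waiting-time cost; the thesis's actual state encoding, rates, and reward may differ.

```python
# Minimal tabular Sarsa sketch for single-machine dispatching with setup
# times. All rates, costs, and the state encoding are illustrative
# assumptions, not the thesis's actual model.
import math
import random
from collections import defaultdict

LAMBDA = (0.3, 0.3)        # per-type arrival rates (assumed)
MU = 1.0                   # service rate (assumed)
SETUP = 2.0                # mean setup delay when the type changes (assumed)
CAP = 10                   # queue-length cap keeps the tabular state space finite
ALPHA, EPS, BETA = 0.1, 0.1, 0.05   # step size, exploration, discount rate

Q = defaultdict(float)     # Q[(state, action)] -> estimated value

def poisson_count(rate, horizon):
    """Number of Poisson(rate) arrivals in [0, horizon)."""
    n, t = 0, random.expovariate(rate)
    while t < horizon:
        n, t = n + 1, t + random.expovariate(rate)
    return n

def choose(state):
    """Epsilon-greedy choice of which product type to serve next."""
    if random.random() < EPS:
        return random.randrange(2)
    return max((0, 1), key=lambda a: Q[(state, a)])

def sarsa(steps=200_000):
    queues, setup_type = [0, 0], 0
    state, action = (0, 0, 0), 0
    for _ in range(steps):
        # Elapsed time: one service, plus a setup if the type changes.
        dt = random.expovariate(MU)
        if action != setup_type:
            dt += random.expovariate(1.0 / SETUP)
        # Waiting cost accrues at a rate equal to the total queue length.
        reward = -(queues[0] + queues[1]) * dt
        # Complete one lot of the chosen type (serving an empty queue just
        # wastes time in this toy model) and admit new arrivals.
        queues[action] = max(0, queues[action] - 1)
        for i, lam in enumerate(LAMBDA):
            queues[i] = min(CAP, queues[i] + poisson_count(lam, dt))
        setup_type = action
        next_state = (queues[0], queues[1], setup_type)
        next_action = choose(next_state)
        # On-policy Sarsa update with continuous-time discounting exp(-BETA*dt).
        target = reward + math.exp(-BETA * dt) * Q[(next_state, next_action)]
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state, action = next_state, next_action

if __name__ == "__main__":
    random.seed(0)
    sarsa()
    print("value of staying on type 0 in an empty system:", Q[((0, 0, 0), 0)])
```

Because every type change pays the setup delay, one would expect the learned greedy policy to avoid frequent switches, which is consistent with the abstract's comparison against the clearing policy.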
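The abstract also says the adjustable-service-rate problem was formulated as a continuous-time MDP and that Policy Iteration solves the stationary case. The following is a minimal sketch of that combination under our own assumptions (a capped M/M/1 queue, uniformization, discounted cost); it is not the thesis's model, but it illustrates why the optimal rule comes out as a switch at a certain number of waiting jobs.

```python
# Policy iteration on a uniformized continuous-time MDP: choose between a
# cheap low service rate and a costly high one as the queue grows. Rates,
# costs, and the queue cap are illustrative assumptions.
import numpy as np

LAM = 0.6                    # arrival rate (assumed)
MU = {0: 0.8, 1: 1.5}        # low / high service rates (assumed)
COST = {0: 0.0, 1: 2.0}      # extra cost rate of the high service rate (assumed)
H, BETA, N = 1.0, 0.05, 30   # holding cost rate, discount rate, queue cap

UNI = LAM + MU[1]            # uniformization rate
GAMMA = UNI / (BETA + UNI)   # discount factor of the uniformized chain

def transition(a):
    """Uniformized transition matrix when service rate MU[a] is used."""
    P = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        P[x, min(x + 1, N)] += LAM / UNI       # arrival
        P[x, max(x - 1, 0)] += MU[a] / UNI     # service completion
        P[x, x] += (MU[1] - MU[a]) / UNI       # fictitious self-loop
    return P

P = {a: transition(a) for a in (0, 1)}
C = {a: np.array([(H * x + COST[a]) / (BETA + UNI) for x in range(N + 1)])
     for a in (0, 1)}

def policy_iteration():
    policy = np.zeros(N + 1, dtype=int)        # start with the low rate everywhere
    while True:
        # Exact policy evaluation: solve (I - GAMMA * P_pi) V = c_pi.
        P_pi = np.array([P[policy[x]][x] for x in range(N + 1)])
        c_pi = np.array([C[policy[x]][x] for x in range(N + 1)])
        V = np.linalg.solve(np.eye(N + 1) - GAMMA * P_pi, c_pi)
        # Greedy improvement: one-step lookahead over both service rates.
        Qsa = np.array([C[a] + GAMMA * P[a] @ V for a in (0, 1)])
        new_policy = Qsa.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return policy
        policy = new_policy

if __name__ == "__main__":
    print("use high rate in states:", np.flatnonzero(policy_iteration() == 1))
```

The parameter relationship the abstract reports (a higher arrival rate or a cheaper high rate moves the switch to fewer waiting jobs) can be checked by rerunning this sketch with different values of LAM and COST[1].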


Bibliographic Details
Main Authors: Hsin-Yeh Wu, 吳欣曄
Other Authors: Shi-Chung Chang, 張時中
Format: Others
Language: zh-TW
Published: 2004
Online Access: http://ndltd.ncl.edu.tw/handle/43pwj6
Chinese Title: 以增強式學習法設計機台派工法則之研究