Summary: | Master's === National Taiwan University === Graduate Institute of Electrical Engineering === 92 === Semiconductor fabrication is characterized by a wide variety of products, complex re-entrant flows, machine uncertainty, customer orientation, high capital investment, and short product life cycles. Effective lot-dispatching methods that achieve production flexibility and on-time delivery still pose significant challenges to both researchers and practitioners. A setup time is incurred whenever a machine changes the product type it processes. On the one hand, appropriate setups are needed to produce different lots flexibly and on time; on the other hand, reducing the number of setups lowers the workload level and waiting time. However, the re-entrant flow causes lots of different types to compete for the same machine, so machine capacity must be allocated effectively to meet due dates and balance the production flow. We studied two problems: dispatching on a single machine with setup times, and a machine with an adjustable service rate. The objective of the former is to trade off average waiting time against the number of setups; the objective of the latter is to trade off waiting cost against service cost. The challenges are how to choose the next product type to process and when to switch the service rate.
Because the environment changes over time, the dispatching policy must be adjusted continuously. We addressed this with Reinforcement Learning (RL), which interacts with the environment and finds a suitable policy through a reward function and a value function. Assuming the states have the Markov property, we formulated the dispatching problem as a continuous-time Markov Decision Process (MDP).
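As an illustration, the sketch below shows one way such a single-machine dispatching problem with setup times can be cast as a (uniformized) continuous-time MDP. All names and parameter values (number of types, rates, costs) are assumptions for illustration and are not taken from the thesis.

```python
# Minimal sketch: dispatching with setup times as a uniformized continuous-time MDP.
# All parameter values below are illustrative assumptions.
import random
from dataclasses import dataclass

N_TYPES = 2                     # number of product types (assumed)
ARRIVAL_RATES = [0.3, 0.4]      # Poisson arrival rate per type (assumed)
SERVICE_RATE = 1.0              # service rate once the machine is set up (assumed)
WAIT_WEIGHTS = [1.0, 2.0]       # per-type waiting-cost weights (assumed)
SETUP_COST = 5.0                # cost charged on a changeover (assumed)

@dataclass(frozen=True)
class State:
    queues: tuple   # number of waiting lots of each type
    setup: int      # product type the machine is currently set up for

def cost(state: State, action: int) -> float:
    """Immediate cost: weighted waiting cost plus a setup charge on a changeover."""
    waiting = sum(w * q for w, q in zip(WAIT_WEIGHTS, state.queues))
    return waiting + (SETUP_COST if action != state.setup else 0.0)

def step(state: State, action: int) -> State:
    """Sample the next decision epoch: one event, either an arrival or a service."""
    rates = list(ARRIVAL_RATES) + [SERVICE_RATE if state.queues[action] > 0 else 0.0]
    total = sum(rates)
    u, acc, event = random.random() * total, 0.0, 0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            event = i
            break
    queues = list(state.queues)
    if event < N_TYPES:
        queues[event] += 1      # arrival of a lot of type `event`
    else:
        queues[action] -= 1     # service completion of the chosen type
    return State(tuple(queues), action)
```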
For the single-machine dispatching problem with setup times, we used Policy Iteration (PI) to find the optimal policy under stationary job arrivals. However, PI cannot handle non-stationary problems or problems with unknown system dynamics. We therefore adapted the Sarsa algorithm [RsA98], an on-policy method that learns the value of the policy used to make decisions and is conceptually and computationally simple, to solve the MDP without knowing the system dynamics. In the stationary case, RL learned a policy that agreed with the optimal policy on 95% of states given enough learning steps. We then applied RL to a non-stationary dispatching environment and compared it with a random policy. The results showed that RL stabilized the average weighted waiting time while the random policy did not; RL also increased throughput by 30% and required fewer setup switches. This demonstrates that RL can handle dispatching problems that PI cannot, although the learning speed is slow and the optimal policy in the non-stationary environment is unknown. However, when initialized with the Clearing Policy proposed by Kumar and Seidman (1991), RL achieves a lower average waiting time than the Clearing Policy itself.
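For concreteness, the following is a minimal tabular Sarsa sketch (on-policy TD control) for the dispatching MDP above. It reuses `State`, `step`, `cost`, and `N_TYPES` from the previous sketch; the learning-rate, discount, and exploration values are assumptions, not values from the thesis.

```python
# Minimal tabular Sarsa sketch for the dispatching MDP sketched above.
# Hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed step size, discount, exploration

def epsilon_greedy(Q, state):
    if random.random() < EPSILON:
        return random.randrange(N_TYPES)
    # Q estimates discounted cost, so the greedy action is the cheapest one.
    return min(range(N_TYPES), key=lambda a: Q[(state, a)])

def sarsa(start: State, n_steps: int):
    Q = defaultdict(float)                 # Q[(state, action)] = estimated discounted cost
    s = start
    a = epsilon_greedy(Q, s)
    for _ in range(n_steps):
        c = cost(s, a)                     # immediate cost of dispatching type `a`
        s_next = step(s, a)                # sample the next decision epoch
        a_next = epsilon_greedy(Q, s_next) # on-policy: the action actually taken next
        # Sarsa update toward the one-step bootstrapped target.
        Q[(s, a)] += ALPHA * (c + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
        s, a = s_next, a_next
    return Q
```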
For the adjustable-service-rate machine problem, we considered the tradeoff between service cost and waiting cost and sought the right time to switch the service rate. We also formulated this problem as an MDP and applied RL to solve it. The results show that RL needs about 10 million learning steps to find the optimal switching point. We studied how the switching point depends on parameters such as the arrival rate and the cost of the high service rate, and found that a higher arrival rate or a lower high-rate cost moves the switch to a smaller number of waiting jobs. Finally, RL initialized with prior knowledge learned about 1000 times faster than RL without it. Nevertheless, the approach still requires further evaluation before it can be applied to real dispatching.
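A minimal sketch of the adjustable-service-rate setting is given below: the state is the number of waiting jobs, the action picks the low or high service rate, and the cost trades off holding cost against service cost; a threshold (switching-point) policy is evaluated by simulation. All rates, costs, and the candidate thresholds are illustrative assumptions.

```python
# Minimal sketch: adjustable service rate as a birth-death decision process.
# Rates and costs are illustrative assumptions.
import random

LAMBDA = 0.6                               # job arrival rate (assumed)
MU = {"low": 0.8, "high": 1.5}             # low / high service rates (assumed)
SERVICE_COST = {"low": 1.0, "high": 4.0}   # cost per step of running each rate (assumed)
HOLD_COST = 0.5                            # waiting cost per job per step (assumed)

def step_cost(n_jobs: int, rate: str) -> float:
    """Holding cost of the queue plus the cost of the chosen service rate."""
    return HOLD_COST * n_jobs + SERVICE_COST[rate]

def simulate_threshold(threshold: int, n_steps: int = 100_000) -> float:
    """Average cost of a threshold policy: switch to the high rate once the
    queue reaches `threshold` jobs, otherwise stay at the low rate."""
    n, total = 0, 0.0
    uniform = LAMBDA + MU["high"]           # uniformization constant
    for _ in range(n_steps):
        rate = "high" if n >= threshold else "low"
        total += step_cost(n, rate)
        u = random.random() * uniform
        if u < LAMBDA:
            n += 1                          # arrival
        elif u < LAMBDA + MU[rate] and n > 0:
            n -= 1                          # service completion
        # otherwise: fictitious self-loop event from uniformization
    return total / n_steps

# e.g. compare candidate switching points:
# best = min(range(1, 10), key=simulate_threshold)
```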
|