Summary: With the rapid development of computer technology, computation-intensive and delay-sensitive applications keep emerging, yet they are constrained by the computing power and battery life of Smart Mobile Devices (SMDs). Mobile Edge Computing (MEC) is a computing paradigm with great potential to meet these application requirements and relieve the burden on SMDs through computation offloading. However, device mobility and server-status variability in multi-server, multi-task scenarios pose challenges for computation offloading. To address these challenges, we first propose a parallel task offloading model and a small-area-based edge offloading scheme for MEC. We then formulate an optimization problem that minimizes the completion time of all tasks and recast it as a Markov decision process amenable to deep reinforcement learning. Furthermore, we present a deep deterministic policy gradient (DDPG) approach for obtaining the offloading strategy. Experimental results demonstrate that the DDPG-based offloading approach improves long-term performance over traditional strategies by at least 19% under ultra-low latency requirements, efficient server usage, and frequent SMD mobility.
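The completion-time objective above can be sketched concretely. The following is a minimal, illustrative model (not the paper's exact formulation): a task is split across edge servers and the local device, offloaded shares incur transmit-plus-compute time, parts run in parallel, and the makespan is the maximum of all finish times. The linear rate/compute model and all parameter names are assumptions for illustration; an RL agent such as DDPG would use the negative of this value as its reward.

```python
# Hypothetical sketch of the parallel-offloading completion time.
# fractions[i] is the share of the task sent to edge server i; the
# remainder runs locally. Offloaded shares execute in parallel.

def completion_time(task_bits, task_cycles, fractions,
                    uplink_rates, server_speeds, local_speed):
    """Makespan (seconds) under a simple linear transmit/compute model."""
    assert 0.0 <= sum(fractions) <= 1.0 + 1e-9
    local_frac = 1.0 - sum(fractions)
    t_local = local_frac * task_cycles / local_speed
    t_edges = [
        f * task_bits / r + f * task_cycles / s  # upload time + edge compute
        for f, r, s in zip(fractions, uplink_rates, server_speeds)
    ]
    return max([t_local] + t_edges)

# Example: an 8 Mb task needing 1 Gcycles, split 50%/30% across two servers.
t = completion_time(task_bits=8e6, task_cycles=1e9, fractions=[0.5, 0.3],
                    uplink_rates=[10e6, 5e6], server_speeds=[5e9, 5e9],
                    local_speed=1e9)
reward = -t  # an RL agent would maximize this, i.e. minimize makespan
```

In this sketch, pushing more work to a slow uplink can dominate the makespan even when the edge server is fast, which is exactly the trade-off an offloading policy must learn.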