Task Migration Based on Reinforcement Learning in Vehicular Edge Computing
Multiaccess edge computing (MEC) has emerged as a promising technology for time-sensitive and computation-intensive tasks. With the high mobility of users, especially in a vehicular environment, computational task migration between vehicular edge computing servers (VECSs) has become one of the most critical challenges in guaranteeing quality of service (QoS) requirements. If vehicles' tasks migrate unevenly to specific VECSs, performance can degrade in terms of latency and QoS. Therefore, in this study, we define a computational task migration problem for balancing the loads of VECSs and minimizing migration costs. To solve this problem, we adopt a reinforcement learning algorithm in a cooperative environment in which the VECSs of a group can collaborate with one another. The objective of this study is to optimize load balancing and migration cost while satisfying the delay constraints of vehicles' computation tasks. Simulations are performed to evaluate the performance of the proposed algorithm. The results show that, compared to other algorithms, the proposed algorithm achieves approximately 20–40% better load balancing and an approximately 13–28% higher task completion rate within the delay constraints.
Main Authors: | Sungwon Moon (Department of IT Engineering), Jaesung Park (School of Information Convergence), Yujin Lim (Department of IT Engineering) |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi-Wiley, 2021-01-01 |
Series: | Wireless Communications and Mobile Computing |
ISSN: | 1530-8677 |
Online Access: | http://dx.doi.org/10.1155/2021/9929318 |
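
The abstract describes the approach only at a high level: a reinforcement learning agent migrates tasks across a cooperative VECS group to balance load and limit migration cost under delay constraints. For orientation, the sketch below shows tabular Q-learning applied to that kind of migration decision; the state discretization, reward weights, and toy delay model are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of RL-based task migration across a cooperative VECS
# group. The paper's exact state/action/reward design is not given in this
# record, so the discretization, weights, and delay model below are assumed.
import random
from collections import defaultdict

NUM_VECS = 4          # cooperative VECS group size (assumed)
LOAD_LEVELS = 5       # discretized load levels per server (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: state = (discretized VECS loads, task's current server);
# action = index of the VECS to migrate the task to.
Q = defaultdict(lambda: [0.0] * NUM_VECS)

def discretize(loads):
    return tuple(min(int(l * LOAD_LEVELS), LOAD_LEVELS - 1) for l in loads)

def reward(loads, src, dst, delay, delay_limit):
    # Penalize load imbalance (variance), migration cost, and delay-constraint
    # violations -- the three terms the abstract names.
    mean = sum(loads) / len(loads)
    imbalance = sum((l - mean) ** 2 for l in loads) / len(loads)
    migration_cost = 0.0 if src == dst else 1.0
    violation = 0.0 if delay <= delay_limit else 5.0
    return -(imbalance * 10.0 + migration_cost + violation)

def choose_action(state):
    if random.random() < EPS:                 # epsilon-greedy exploration
        return random.randrange(NUM_VECS)
    q = Q[state]
    return max(range(NUM_VECS), key=lambda a: q[a])

def step(loads, src, delay_limit):
    state = (discretize(loads), src)
    dst = choose_action(state)
    # Toy environment: migrating shifts load; delay grows with target load.
    task_load = 0.1
    loads[src] = max(0.0, loads[src] - task_load)
    loads[dst] = min(1.0, loads[dst] + task_load)
    delay = 0.5 + loads[dst]                  # hypothetical delay model
    r = reward(loads, src, dst, delay, delay_limit)
    next_state = (discretize(loads), dst)
    # Standard Q-learning update.
    Q[state][dst] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][dst])
    return dst, r

if __name__ == "__main__":
    loads = [random.random() for _ in range(NUM_VECS)]
    src = 0
    for _ in range(10_000):
        src, _ = step(loads, src, delay_limit=1.2)
    print("final loads:", [round(l, 2) for l in loads])
```

Running the loop drives the load vector toward a balanced state because the imbalance term dominates the reward; the paper would replace this toy environment with its vehicular simulation and its own cost terms.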