Reinforcement Learning-Based Routing Protocol to Minimize Channel Switching and Interference for Cognitive Radio Networks
In the existing layered protocol stack of a Cognitive Radio Ad Hoc Network (CRAHN), channel selection is performed at the Medium Access Control (MAC) layer, whereas routing is performed at the network layer. Because of this separation, Secondary/Unlicensed Users (SUs) must fetch channel information from the MAC layer whenever a channel switching event occurs during data transmission, which delays channel selection exactly when an immediate routing decision is needed to continue the transmission. This paper proposes a protocol that makes the channel selection decisions at the network layer during the routing process, based on the past and expected future channel decisions of the Primary Users (PUs). A learning agent, operating in cross-layer mode within the spectrum mobility manager, passes the channel information originating at the MAC layer up to the network layer. Channel selection is driven by reinforcement learning algorithms, namely No-External-Regret Learning, Q-Learning, and Learning Automata, which minimizes channel switching events and user interference in the proposed Reinforcement Learning- (RL-) based routing protocol. Simulations are conducted with the Cognitive Radio Cognitive Network (CRCN) simulator built on Network Simulator 2 (NS-2). The results show that the proposed protocol outperforms the comparison routing protocols in terms of the number of channel switching events, average data rate, packet collisions, packet loss, and end-to-end delay, and thereby improves the Quality of Service (QoS) of delay-sensitive and real-time networks such as cellular and television (TV) networks.
Main Authors: | Tauqeer Safdar Malik (Department of Computer Science, Air University Multan Campus, Multan 60000, Pakistan); Mohd Hilmi Hasan (Centre for Research in Data Science, Department of Computer and Information Sciences, Universiti Teknologi Petronas, Seri Iskandar 32610, Perak, Malaysia) |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi-Wiley, 2020-01-01 |
Series: | Complexity |
ISSN: | 1076-2787, 1099-0526 |
Online Access: | http://dx.doi.org/10.1155/2020/8257168 |
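
The abstract notes that channel selection is driven by reinforcement learning algorithms such as Q-Learning, with the goal of reducing channel switching events and interference with PUs. As an illustrative aid only, the sketch below shows how a tabular Q-learning channel selector of that kind could be structured; the class name, reward weights, and toy environment are assumptions made for this example and are not taken from the article.

```python
"""Illustrative sketch only: a tabular Q-learning agent that picks a channel
for the next transmission while penalizing channel switches and collisions
with Primary Users (PUs). All names, reward weights, and the toy environment
are hypothetical; they are not the article's implementation."""
import random
from collections import defaultdict


class QChannelSelector:
    def __init__(self, n_channels, alpha=0.1, gamma=0.9, epsilon=0.1,
                 switch_penalty=1.0, interference_penalty=5.0):
        self.n_channels = n_channels
        self.alpha = alpha                          # learning rate
        self.gamma = gamma                          # discount factor
        self.epsilon = epsilon                      # exploration probability
        self.switch_penalty = switch_penalty        # cost of leaving the current channel
        self.interference_penalty = interference_penalty  # cost of colliding with a PU
        # Q[state][action]: state = channel currently in use, action = channel chosen next
        self.q = defaultdict(lambda: [0.0] * n_channels)

    def select_channel(self, current_channel):
        """Epsilon-greedy choice of the channel for the next transmission."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_channels)
        values = self.q[current_channel]
        return max(range(self.n_channels), key=values.__getitem__)

    def reward(self, current_channel, chosen_channel, pu_active, throughput):
        """Reward favors throughput and penalizes switching and PU interference."""
        r = throughput
        if chosen_channel != current_channel:
            r -= self.switch_penalty
        if pu_active:                               # a PU appeared on the chosen channel
            r -= self.interference_penalty
        return r

    def update(self, current_channel, chosen_channel, r):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[chosen_channel])
        td_error = r + self.gamma * best_next - self.q[current_channel][chosen_channel]
        self.q[current_channel][chosen_channel] += self.alpha * td_error


if __name__ == "__main__":
    # Toy episode: 3 channels, channel 2 is occupied by a PU 70% of the time.
    agent = QChannelSelector(n_channels=3)
    channel = 0
    for _ in range(1000):
        nxt = agent.select_channel(channel)
        pu_active = (nxt == 2 and random.random() < 0.7)
        throughput = 0.0 if pu_active else 1.0
        r = agent.reward(channel, nxt, pu_active, throughput)
        agent.update(channel, nxt, r)
        channel = nxt
    print("Learned Q-values from channel 0:", agent.q[0])
```

In this toy setup the agent learns both to avoid the PU-occupied channel and to prefer staying on its current channel, which mirrors the behaviour the article targets: fewer channel switching events and less interference with PUs.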