Effect of reinforcement learning on routing of cognitive radio ad-hoc networks

Today's network control systems have very limited ability to adapt to changes in the network. Adding reinforcement learning (RL) based network management agents can improve Quality of Service (QoS) by reconfiguring network-layer protocol parameters in response to observed network performance. This paper presents a closed-loop approach to tuning network-layer protocol parameters based on current and previous observations of network state, user mobility, and channel interference, specifically by modifying parameters of the Ad-Hoc On-Demand Distance Vector (AODV) routing protocol for the Cognitive Radio Ad-Hoc Network (CRAHN) environment. The work provides a self-contained learning method based on machine-learning techniques that have been, or can be, used to develop cognitive routing protocols. A mathematical model built on an RL technique handles route decisions under channel switching and user mobility, so that overall end-to-end delay is minimized and overall network throughput is maximized according to application requirements in the CRAHN environment. The proposed RL-based self-configuration method improves on the original AODV protocol, reducing protocol overhead and end-to-end delay for CRAHN while increasing the packet delivery ratio, depending on the traffic model. NS-2 simulation results show that the proposed model performs considerably better than the original AODV protocol. © 2015 IEEE.
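To illustrate the closed-loop RL idea described in the abstract, here is a minimal tabular Q-learning sketch for next-hop route selection. This is an assumption-laden illustration, not the paper's actual model: the state/action encoding, reward (e.g. negative end-to-end delay), and hyperparameters are all hypothetical.

```python
import random


class QRouteAgent:
    """Tabular Q-learning agent scoring (state, next_hop) route choices.

    Illustrative sketch only: state/action encoding, reward shaping, and
    hyperparameters are assumptions, not the paper's design.
    """

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}             # (state, action) -> estimated value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state, actions):
        # epsilon-greedy selection over candidate next hops
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state, next_actions):
        # Standard Q-learning backup:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max((self.value(next_state, a) for a in next_actions),
                        default=0.0)
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In a CRAHN setting, the reward would typically penalize delay and channel-switching events, so that repeated updates steer route selection toward low-delay, stable next hops.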


Main Authors: Safdar, T., Hasbulah, H.B., Rehan, M.
Format: Conference or Workshop Item
Institution: Universiti Teknologi Petronas
Record Id: utp-eprints.30906
Published: Institute of Electrical and Electronics Engineers Inc. 2016
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84995665385&doi=10.1109%2fISMSC.2015.7594025&partnerID=40&md5=9bbe3b49b2bca53a82af02651d52f1d2
http://eprints.utp.edu.my/30906/
Collection: UTP Institutional Repository