A Reinforcement Learning Approach for Interference Management in Heterogeneous Wireless Networks

Akindele Segun Afolabi, Shehu Ahmed, Olubunmi Adewale Akinola


Due to the increased demand for scarce wireless bandwidth, macrocell base stations alone can no longer adequately serve network user equipment. Network densification through the addition of low-power nodes (picocells) to conventional high-power nodes addresses the bandwidth shortage, but unfortunately introduces unwanted interference into the network, which reduces throughput. This paper develops a reinforcement learning model that assists in coordinating interference in a heterogeneous network comprising macrocell and picocell base stations. The learning mechanism is based on Q-learning, which consists of an agent, states, actions, and rewards. The base station is modeled as the agent, while the state represents the condition of the user equipment in terms of Signal-to-Interference-plus-Noise Ratio (SINR). The action is the transmission power level, and the reward is given in terms of throughput. Simulation results show that the proposed Q-learning scheme improves average user equipment throughput in the network. In particular, the multi-agent system with a normal learning rate increased the throughput of associated user equipment by a whopping 212.5% compared to a macrocell-only scheme.
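The abstract's agent/state/action/reward formulation can be illustrated with a minimal tabular Q-learning sketch. All specifics below (the number of quantized SINR states, the number of discrete power levels, and the hyperparameter values) are illustrative assumptions, not values taken from the paper; the base station plays the agent, the quantized SINR of its user equipment is the state, the transmit power level is the action, and observed throughput is the reward.

```python
import random

# Hypothetical discretization and hyperparameters; illustrative only.
N_SINR_STATES = 5      # quantized SINR levels observed at the user equipment
N_POWER_ACTIONS = 4    # discrete transmit power levels available to the BS
ALPHA = 0.5            # learning rate (a "normal" rate in the paper's terms)
GAMMA = 0.9            # discount factor
EPSILON = 0.1          # exploration probability

# Q-table: one row per SINR state, one column per power-level action.
Q = [[0.0] * N_POWER_ACTIONS for _ in range(N_SINR_STATES)]

def choose_action(state):
    """Epsilon-greedy selection of a transmit power level."""
    if random.random() < EPSILON:
        return random.randrange(N_POWER_ACTIONS)
    row = Q[state]
    return row.index(max(row))

def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward is the observed throughput."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```

In a multi-agent setting, each base station would hold its own Q-table and run this loop independently, observing its users' SINR after every joint power choice.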


Keywords: Heterogeneous Network, Q-Learning, Macrocell, Picocell, Interference



International Journal of Interactive Mobile Technologies (iJIM) – eISSN: 1865-7923